ERIC Educational Resources Information Center
Skalli, Ali
2007-01-01
Most of the studies that account for the endogeneity bias when estimating the returns to schooling assume that the relationship between education and earnings is linear. Studies that assume the latter relationship to be non-linear simply ignore the endogeneity bias. Moreover, they either assume an ad-hoc non-linear relationship or argue that…
ERIC Educational Resources Information Center
Bowman, Nicholas A.; Trolian, Teniell L.
2017-01-01
Many higher education studies have examined linear relationships between student experiences and outcomes, but this assumption may be questionable. In two notable examples, previous research that assumed a linear relationship reached different substantive conclusions and implications than did research that explored non-linear associations among the…
Hunsicker, Mary E; Kappel, Carrie V; Selkoe, Kimberly A; Halpern, Benjamin S; Scarborough, Courtney; Mease, Lindley; Amrhein, Alisan
2016-04-01
Scientists and resource managers often use methods and tools that assume ecosystem components respond linearly to environmental drivers and human stressors. However, a growing body of literature demonstrates that many relationships are non-linear, where small changes in a driver prompt a disproportionately large ecological response. We aim to provide a comprehensive assessment of the relationships between drivers and ecosystem components to identify where and when non-linearities are likely to occur. We focused our analyses on one of the best-studied marine systems, pelagic ecosystems, which allowed us to apply robust statistical techniques on a large pool of previously published studies. In this synthesis, we (1) conduct a wide literature review on single driver-response relationships in pelagic systems, (2) use statistical models to identify the degree of non-linearity in these relationships, and (3) assess whether general patterns exist in the strengths and shapes of non-linear relationships across drivers. Overall we found that non-linearities are common in pelagic ecosystems, comprising at least 52% of all driver-response relationships. This is likely an underestimate, as papers with higher quality data and analytical approaches reported non-linear relationships at a higher frequency (on average 11% more). Consequently, in the absence of evidence for a linear relationship, it is safer to assume a relationship is non-linear. Strong non-linearities can lead to greater ecological and socioeconomic consequences if they are unknown (and/or unanticipated), but if known they may provide clear thresholds to inform management targets. In pelagic systems, strongly non-linear relationships are often driven by climate and trophodynamic variables but are also associated with local stressors, such as overfishing and pollution, that can be more easily controlled by managers. Even when marine resource managers cannot influence ecosystem change, they can use information about threshold responses to guide how other stressors are managed and to adapt to new ocean conditions. As methods to detect and reduce uncertainty around threshold values improve, managers will be able to better understand and account for ubiquitous non-linear relationships.
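As a rough illustration of step (2) above, a linear fit can be compared against a non-linear alternative and the winner chosen by information criterion. The sketch below is a minimal stand-in for the paper's statistical modeling, using hypothetical data and a quadratic term rather than the authors' actual model family:

```python
# Minimal sketch: classify one driver-response relationship as linear or
# non-linear by comparing AIC of a linear fit and a quadratic alternative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
driver = rng.uniform(0, 10, 200)   # hypothetical driver, e.g. fishing pressure
response = 1.0 / (1.0 + np.exp(-(driver - 5.0))) + rng.normal(0, 0.05, 200)

fit_lin = sm.OLS(response, sm.add_constant(driver)).fit()
fit_quad = sm.OLS(response, sm.add_constant(np.column_stack([driver, driver**2]))).fit()

# Lower AIC wins; a win for the quadratic model flags non-linearity.
print("non-linear" if fit_quad.aic < fit_lin.aic else "linear")
```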
Using nonlinear quantile regression to estimate the self-thinning boundary curve
Quang V. Cao; Thomas J. Dean
2015-01-01
The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
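As a sketch of the quantile-regression idea mentioned above (synthetic data, not Reineke's or the authors'), the self-thinning boundary can be estimated as an upper quantile of log density against log quadratic mean diameter:

```python
# Sketch: linear quantile regression (tau = 0.99) for the maximum
# size-density boundary on the log-log scale, log N = a + b * log Dq.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
log_dq = rng.uniform(np.log(5), np.log(50), 300)         # log quadratic mean diameter
log_n = 12.0 - 1.6 * log_dq - rng.exponential(0.5, 300)  # stands sit below the boundary
df = pd.DataFrame({"log_n": log_n, "log_dq": log_dq})

boundary = smf.quantreg("log_n ~ log_dq", df).fit(q=0.99)
print(boundary.params)  # intercept and slope of the estimated boundary line
```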
Hannigan, Ailish; Bargary, Norma; Kinsella, Anthony; Clarke, Mary
2017-06-14
Although the relationships between duration of untreated psychosis (DUP) and outcomes are often assumed to be linear, few studies have explored the functional form of these relationships. The aim of this study is to demonstrate the potential of recent advances in curve fitting approaches (splines) to explore the form of the relationship between DUP and global assessment of functioning (GAF). Curve fitting approaches were used in models to predict change in GAF at long-term follow-up using DUP for a sample of 83 individuals with schizophrenia. The form of the relationship between DUP and GAF was non-linear. Accounting for non-linearity increased the percentage of variance in GAF explained by the model, resulting in better prediction and understanding of the relationship. The relationship between DUP and outcomes may be complex and model fit may be improved by accounting for the form of the relationship. This should be routinely assessed and new statistical approaches for non-linear relationships exploited, if appropriate. © 2017 John Wiley & Sons Australia, Ltd.
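A minimal sketch of the curve-fitting contrast described here, with simulated stand-ins for DUP and GAF (the real study used 83 clinical observations): a B-spline basis lets the model capture a non-linear form that a straight line misses.

```python
# Sketch: compare a linear fit with a regression-spline fit of GAF on DUP.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
dup = rng.exponential(12.0, 83)                           # hypothetical DUP, months
gaf = 70.0 - 12.0 * np.log1p(dup) + rng.normal(0, 5, 83)  # assumed non-linear outcome
df = pd.DataFrame({"dup": dup, "gaf": gaf})

linear = smf.ols("gaf ~ dup", df).fit()
spline = smf.ols("gaf ~ bs(dup, df=4)", df).fit()  # B-spline basis via patsy
print(linear.rsquared, spline.rsquared)            # the spline explains more variance
```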
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Computation of the shock-wave boundary layer interaction with flow separation
NASA Technical Reports Server (NTRS)
Ardonceau, P.; Alziary, T.; Aymer, D.
1980-01-01
The boundary layer concept is used to describe the flow near the wall. The external flow is approximated by a pressure displacement relationship (tangent wedge in linearized supersonic flow). The boundary layer equations are solved in finite difference form and the question of the presence and unicity of the solution is considered for the direct problem (assumed pressure) or converse problem (assumed displacement thickness, friction ratio). The coupling algorithm presented implicitly processes the downstream boundary condition necessary to correctly define the interacting boundary layer problem. The algorithm uses a Newton linearization technique to provide a fast convergence.
Wu, Lingtao; Lord, Dominique
2017-05-01
This study further examined the use of regression models for developing crash modification factors (CMFs), specifically focusing on the misspecification in the link function. The primary objectives were to validate the accuracy of CMFs derived from the commonly used regression models (i.e., generalized linear models or GLMs with additive linear link functions) when some of the variables have nonlinear relationships and quantify the amount of bias as a function of the nonlinearity. Using the concept of artificial realistic data, various linear and nonlinear crash modification functions (CM-Functions) were assumed for three variables. Crash counts were randomly generated based on these CM-Functions. CMFs were then derived from regression models for three different scenarios. The results were compared with the assumed true values. The main findings are summarized as follows: (1) when some variables have nonlinear relationships with crash risk, the CMFs for these variables derived from the commonly used GLMs are all biased, especially around areas away from the baseline conditions (e.g., boundary areas); (2) with the increase in nonlinearity (i.e., as the nonlinear relationship becomes stronger), the bias becomes more significant; (3) the quality of CMFs for other variables having linear relationships can be influenced when mixed with those having nonlinear relationships, but the accuracy may still be acceptable; and (4) the misuse of the link function for one or more variables can also lead to biased estimates for other parameters. This study highlights the importance of the link function when using regression models for developing CMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.
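The core of finding (1) can be reproduced in a few lines. This sketch assumes its own toy CM-Function and baseline (the paper's artificial realistic data are richer): crash counts are generated under a non-linear effect, a log-linear Poisson GLM is fit, and the derived CMF is compared with the true value away from baseline.

```python
# Sketch: bias in a CMF derived from a GLM with a linear link when the
# true crash modification function is non-linear in the variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 4, 5000)            # variable with a non-linear safety effect
true_cmf = np.exp(0.3 * x**2)          # assumed non-linear CM-Function
counts = rng.poisson(2.0 * true_cmf)   # baseline mean of 2 crashes

glm = sm.GLM(counts, sm.add_constant(x), family=sm.families.Poisson()).fit()
print(np.exp(glm.params[1] * 3.0))     # CMF at x = 3 from the fitted linear link
print(np.exp(0.3 * 3.0**2))            # true CMF at x = 3: the GLM value is biased
```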
Lee, Kyung Hee; Kang, Seung Kwan; Goo, Jin Mo; Lee, Jae Sung; Cheon, Gi Jeong; Seo, Seongho; Hwang, Eui Jin
2017-03-01
To compare the relationship between Ktrans from DCE-MRI and K1 from dynamic 13N-NH3 PET, with simultaneous and separate MR/PET in the VX-2 rabbit carcinoma model. MR/PET was performed simultaneously and separately, 14 and 15 days after VX-2 tumor implantation at the paravertebral muscle. The Ktrans and K1 values were estimated using an in-house software program. The relationships between Ktrans and K1 were analyzed using Pearson's correlation coefficients and linear/non-linear regression functions. Assuming a linear relationship, Ktrans and K1 exhibited moderate positive correlations with both simultaneous (r=0.54-0.57) and separate (r=0.53-0.69) imaging. However, while the Ktrans and K1 from separate imaging were linearly correlated, those from simultaneous imaging exhibited a non-linear relationship. The amount of change in K1 associated with a unit increase in Ktrans varied depending on Ktrans values. The relationship between Ktrans and K1 may be misinterpreted with separate MR and PET acquisition. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Taylor, Malcolm; Elliott, Herschel A; Navitsky, Laura O
2018-05-01
The production of hydraulic fracturing fluids (HFFs) in natural gas extraction and their subsequent management results in waste streams highly variable in total dissolved solids (TDS). Because TDS measurement is time-consuming, it is often estimated from electrical conductivity (EC) assuming dissolved solids are predominantly ionic species of low enough concentration to yield a linear TDS-EC relationship: TDS (mg/L) = k_e × EC (μS/cm), where k_e is a constant of proportionality. HFFs can have TDS levels from 20,000 to over 300,000 mg/L, wherein ion-pair formation and non-ionized solutes invalidate a simple TDS-EC relationship. Therefore, the composition and TDS-EC relationship of several fluids from Marcellus gas wells in Pennsylvania were assessed. Below EC of 75,000 μS/cm, TDS (mg/L) can be estimated with little error assuming k_e = 0.7. For more concentrated HFFs, a curvilinear relationship (R² = 0.99) is needed: TDS = 27,078e^(1.05 × 10⁻⁵ × EC). For hypersaline HFFs, the use of an EC/TDS meter underestimates TDS by as much as 50%. A single linear relationship is unreliable as a predictor of brine strength and, in turn, potential water quality and soil impacts from accidental releases or the suitability of HFFs for industrial wastewater treatment.
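The two reported relationships translate directly into a piecewise estimator; the 75,000 μS/cm threshold, k_e = 0.7, and the curvilinear coefficients are taken from the abstract, while the function itself is our illustrative wrapper:

```python
# Piecewise TDS estimator from EC, using the relationships reported above.
import math

def estimate_tds(ec_us_cm: float) -> float:
    """Estimate TDS (mg/L) from electrical conductivity (uS/cm)."""
    if ec_us_cm < 75_000:
        return 0.7 * ec_us_cm                      # linear rule, k_e = 0.7
    return 27_078 * math.exp(1.05e-5 * ec_us_cm)   # curvilinear rule for hypersaline HFFs

print(estimate_tds(50_000))    # dilute fluid
print(estimate_tds(200_000))   # hypersaline fluid: far above 0.7 * EC
```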
Staley, James R; Burgess, Stephen
2017-05-01
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
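A deliberately simplified sketch of the stratified-IV idea follows (synthetic data; note the published method stratifies on an IV-free residual exposure to avoid bias, a step this toy version omits): a localized average causal effect is computed per exposure stratum as a Wald ratio, and its variation across strata traces the non-linear shape.

```python
# Toy sketch: per-stratum Wald ratios (LACE estimates) under a quadratic
# exposure-outcome relationship; the estimated effect grows across strata.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
g = rng.binomial(2, 0.3, n)            # genetic instrument (0/1/2 alleles)
x = 0.5 * g + rng.normal(0, 1, n)      # exposure
y = 0.2 * x**2 + rng.normal(0, 1, n)   # quadratic causal effect on outcome

edges = np.quantile(x, [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (x >= lo) & (x <= hi)
    lace = np.cov(g[m], y[m])[0, 1] / np.cov(g[m], x[m])[0, 1]
    print(f"stratum [{lo:5.2f}, {hi:5.2f}]: LACE = {lace:.3f}")
```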
ERIC Educational Resources Information Center
Mairesse, Olivier; Hofmans, Joeri; Neu, Daniel; Dinis Monica de Oliveira, Armando Luis; Cluydts, Raymond; Theuns, Peter
2010-01-01
The present studies were conducted to contribute to the debate on the interaction between circadian (C) and homeostatic (S) processes in models of sleep regulation. The Two-Process Model of Sleep Regulation assumes a linear relationship between processes S and C. However, recent elaborations of the model, based on data from forced desynchrony…
The Relationship Between X-Ray Radiance and Magnetic Flux
NASA Astrophysics Data System (ADS)
Pevtsov, Alexei A.; Fisher, George H.; Acton, Loren W.; Longcope, Dana W.; Johns-Krull, Christopher M.; Kankelborg, Charles C.; Metcalf, Thomas R.
2003-12-01
We use soft X-ray and magnetic field observations of the Sun (quiet Sun, X-ray bright points, active regions, and integrated solar disk) and active stars (dwarf and pre-main-sequence) to study the relationship between total unsigned magnetic flux, Φ, and X-ray spectral radiance, LX. We find that Φ and LX exhibit a very nearly linear relationship over 12 orders of magnitude, albeit with significant levels of scatter. This suggests a universal relationship between magnetic flux and the power dissipated through coronal heating. If the relationship can be assumed linear, it is consistent with an average volumetric heating rate Q~B/L, where B is the average field strength along a closed field line and L is its length between footpoints. The Φ-LX relationship also indicates that X-rays provide a useful proxy for the magnetic flux on stars when magnetic measurements are unavailable.
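The near-linearity claim is equivalent to a slope of about 1 when log LX is regressed on log Φ. A sketch with synthetic values (the real analysis spans solar features and stars over 12 decades):

```python
# Sketch: test the Phi-LX power law by fitting a line in log-log space.
import numpy as np

rng = np.random.default_rng(5)
log_phi = rng.uniform(16, 28, 500)                 # log10 unsigned flux, ~12 decades
log_lx = log_phi - 10.0 + rng.normal(0, 0.5, 500)  # assumed slope 1 with scatter

slope, intercept = np.polyfit(log_phi, log_lx, 1)
print(slope)   # a slope near 1 means LX is very nearly proportional to Phi
```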
Yan Sun; Matthew F. Bekker; R. Justin DeRose; Roger Kjelgren; S. -Y. Simon Wang
2017-01-01
Dendroclimatic research has long assumed a linear relationship between tree-ring increment and climate variables. However, ring width frequently underestimates extremely wet years, a phenomenon we refer to as "wet bias". In this paper, we present statistical evidence for wet bias that is obscured by the assumption of linearity. To improve tree-ring-climate modeling, we...
Incorporating nonlinearity into mediation analyses.
Knafl, George J; Knafl, Kathleen A; Grey, Margaret; Dixon, Jane; Deatrick, Janet A; Gallo, Agatha M
2017-03-21
Mediation is an important issue considered in the behavioral, medical, and social sciences. It addresses situations where the effect of a predictor variable X on an outcome variable Y is explained to some extent by an intervening, mediator variable M. Methods for addressing mediation have been available for some time. While these methods continue to undergo refinement, the relationships underlying mediation are commonly treated as linear in the outcome Y, the predictor X, and the mediator M. These relationships, however, can be nonlinear. Methods are needed for assessing when mediation relationships can be treated as linear and for estimating them when they are nonlinear. Existing adaptive regression methods based on fractional polynomials are extended here to address nonlinearity in mediation relationships, while assuming those relationships are monotonic, as would be consistent with theories about the directionality of such relationships. Example monotonic mediation analyses are provided assessing linear and monotonic mediation of the effect of family functioning (X) on a child's adaptation (Y) to a chronic condition by the difficulty (M) for the family in managing the child's condition. Example moderated monotonic mediation and simulation analyses are also presented. Adaptive methods provide an effective way to incorporate possibly nonlinear monotonicity into mediation relationships.
Intensity Biased PSP Measurement
NASA Technical Reports Server (NTRS)
Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.
2000-01-01
The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer Equation) between intensity ratio (I(sub 0)/I) and pressure ratio (P/P(sub 0)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which are otherwise assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction for the output intensity due to light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in-situ intensity correction method was developed. A non-oxygen quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.
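A small sketch of the calibration issue (illustrative numbers, not measured paint data): fitting the Stern-Volmer line to a paint whose response is actually non-linear leaves structured residuals at low pressure, which a higher-order calibration removes.

```python
# Sketch: linear Stern-Volmer calibration vs. a quadratic alternative for
# a paint with an assumed non-linear intensity-pressure response.
import numpy as np

p_ratio = np.array([0.01, 0.05, 0.1, 0.3, 0.6, 1.0])     # P/P0
i_ratio = 0.2 + 0.8 * p_ratio + 0.15 * np.sqrt(p_ratio)  # hypothetical paint response

lin = np.polyfit(p_ratio, i_ratio, 1)
quad = np.polyfit(p_ratio, i_ratio, 2)
print(i_ratio - np.polyval(lin, p_ratio))    # large residuals at low P/P0
print(i_ratio - np.polyval(quad, p_ratio))   # much smaller residuals
```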
The power-proportion method for intracranial volume correction in volumetric imaging analysis.
Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S
2014-01-01
In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power law based method-the power-proportion method-for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
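A sketch of the power-proportion correction under assumed data: the exponent b in volume ∝ ICV^b is estimated by log-log regression, then divided out.

```python
# Sketch: estimate the power-law exponent and apply the power-proportion
# ICV correction, volume / ICV**b.
import numpy as np

rng = np.random.default_rng(6)
icv = rng.normal(1500.0, 120.0, 400)                          # ICV, mL (hypothetical)
volume = 0.004 * icv**0.8 * np.exp(rng.normal(0, 0.05, 400))  # structure volume

b, log_a = np.polyfit(np.log(icv), np.log(volume), 1)
corrected = volume / icv**b   # ICV-corrected volumes
print(b)                      # recovered exponent, close to the assumed 0.8
```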
Kruse, Julie A; Low, Lisa Kane; Seng, Julia S
2013-07-01
To test alternatives to the current research and clinical practice of assuming that married or partnered status is a proxy for positive social support. Having a partner is assumed to relate to better health status via the intermediary process of social support. However, women's health research indicates that having a partner is not always associated with positive social support. An exploratory post hoc analysis focused on posttraumatic stress and childbearing was conducted using a large perinatal database from 2005-2009. To operationalize partner relationship, four variables were analysed: partner ('yes' or 'no'), intimate partner violence ('yes' or 'no'), the combination of those two factors, and the woman's appraisal of the quality of her partner relationship via a single item. Construct validity of these four alternative variables was assessed in relation to appraisal of the partner's social support in labour and the postpartum using linear regression standardized betas and adjusted R-squares. Predictive validity was assessed using unadjusted and adjusted linear regression modelling. Four groups were compared. Married, abused women differed most from married, not abused women in relation to the social support and depression outcomes used for validity checks. The variable representing the women's appraisals of their partner relationships accounts for the most variance in predicting depression scores. Our results support the validity of operationalizing the impact of the partner relationship on outcomes using a combination of partnered status and abuse status or using a subjective rating of quality of the partner relationship. © 2012 Blackwell Publishing Ltd.
Rothenberg, Stephen J; Rothenberg, Jesse C
2005-09-01
Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.
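The paper's central model comparison (linear-linear vs. log-linear blood-lead terms) can be sketched as follows, with simulated data standing in for the pooled seven-study set:

```python
# Sketch: compare linear and log-linear blood-lead terms for IQ by AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
lead = rng.lognormal(np.log(10.0), 0.6, 1333)  # blood lead, ug/dL (assumed)
iq = 105.0 - 2.7 * np.log(lead) + rng.normal(0, 10, 1333)

fit_linear = sm.OLS(iq, sm.add_constant(lead)).fit()
fit_log = sm.OLS(iq, sm.add_constant(np.log(lead))).fit()
print(fit_linear.aic, fit_log.aic)   # the log-linear specification fits better here
```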
Accounting For Nonlinearity In A Microwave Radiometer
NASA Technical Reports Server (NTRS)
Stelzried, Charles T.
1991-01-01
Simple mathematical technique found to account adequately for nonlinear component of response of microwave radiometer. Five prescribed temperatures measured to obtain quadratic calibration curve. Temperature assumed to vary quadratically with reading. Concept not limited to radiometric application; applicable to other measuring systems in which relationships between quantities to be determined and readings of instruments differ slightly from linearity.
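The procedure reduces to a quadratic fit through five calibration points; the numbers below are illustrative, not the radiometer's actual calibration values.

```python
# Sketch: quadratic calibration curve from five prescribed temperatures.
import numpy as np

readings = np.array([10.0, 30.0, 50.0, 70.0, 90.0])   # instrument readings
temps = np.array([20.0, 61.0, 103.5, 147.0, 192.0])   # known temperatures

coeffs = np.polyfit(readings, temps, 2)   # temperature varies quadratically with reading
print(np.polyval(coeffs, 55.0))           # calibrated temperature for a new reading
```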
ERIC Educational Resources Information Center
Hayes, Andrew F.; Preacher, Kristopher J.
2010-01-01
Most treatments of indirect effects and mediation in the statistical methods literature and the corresponding methods used by behavioral scientists have assumed linear relationships between variables in the causal system. Here we describe and extend a method first introduced by Stolzenberg (1980) for estimating indirect effects in models of…
Roberts, Steven; Martin, Michael A
2007-06-01
The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single-day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model will be shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model formed. When fitted with a change-point value of 60 microg/m(3), the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increase in mortality for PM concentrations of 25 and 75 microg/m(3) were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
Prediction of free turbulent mixing using a turbulent kinetic energy method
NASA Technical Reports Server (NTRS)
Harsha, P. T.
1973-01-01
Free turbulent mixing of two-dimensional and axisymmetric one- and two-stream flows is analyzed by a relatively simple turbulent kinetic energy method. This method incorporates a linear relationship between the turbulent shear and the turbulent kinetic energy and an algebraic relationship for the length scale appearing in the turbulent kinetic energy equation. Good results are obtained for a wide variety of flows. The technique is shown to be especially applicable to flows with heat and mass transfer, for which nonunity Prandtl and Schmidt numbers may be assumed.
Full characterization of modular values for finite-dimensional systems
NASA Astrophysics Data System (ADS)
Ho, Le Bin; Imoto, Nobuyuki
2016-06-01
Kedem and Vaidman obtained a relationship between the spin-operator modular value and its weak value for specific coupling strengths [14]. Here we give a general expression for the modular value in the n-dimensional Hilbert space using the weak values up to (n - 1)th order of an arbitrary observable for any coupling strength, assuming non-degenerate eigenvalues. For the two-dimensional case, it shows a linear relationship between the weak value and the modular value. We also relate the modular value of the sum of observables to the weak value of their product.
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.
DOSE-RESPONSE RELATIONSHIPS FOR MOLECULAR ALTERATIONS INDUCED BY B[a]P IN STRAIN A/J MOUSE LUNG
Benzo[a]pyrene (B[a]P) induces tumors in rodents at doses much higher than those found in the environment. Current cancer risk assessment of B[a]P assumes that the risk posed by low level exposure to B[a]P is linear with respect to dose. We have measur...
Markon, Kristian E
2010-08-01
The literature suggests that internalizing psychopathology relates to impairment incrementally and gradually. However, the form of this relationship has not been characterized. This form is critical to understanding internalizing psychopathology, as it is possible that internalizing may accelerate in effect at some level of severity, defining a natural boundary of abnormality. Here, a novel method, semiparametric structural equation modeling, was used to model the relationship between internalizing and impairment in a sample of 8,580 individuals from the 2000 British Office for National Statistics Survey of Psychiatric Morbidity, a large, population-representative study of psychopathology. This method allows one to model relationships between latent internalizing and impairment without assuming any particular form a priori and to compare models in which the relationship is constant and linear. Results suggest that the relationship between internalizing and impairment is in fact linear and constant across the entire range of internalizing variation and that it is impossible to nonarbitrarily define a specific level of internalizing beyond which consequences suddenly become catastrophic in nature. Results demonstrate the phenomenological continuity of internalizing psychopathology, highlight the importance of impairment as well as symptoms, and have clear implications for defining mental disorder. Copyright 2010 APA, all rights reserved
NASA Technical Reports Server (NTRS)
Ottander, John A.; Hall, Robert A.; Powers, J. F.
2018-01-01
A method is presented that allows for the prediction of the magnitude of limit cycles due to adverse control-slosh interaction in liquid-propelled space vehicles using non-linear slosh damping. Such a method is an alternative to the industry practice of assuming linear damping and relying on: mechanical slosh baffles to achieve desired stability margins; accepting minimal slosh stability margins; or time domain non-linear analysis to accept time periods of poor stability. Sinusoidal-input describing function analysis is used to develop a relationship between the non-linear slosh damping and an equivalent linear damping at a given slosh amplitude. In addition, a more accurate analytical prediction of the danger zone for slosh mass locations in a vehicle under proportional and derivative attitude control is presented. This method is used in the control-slosh stability analysis of the NASA Space Launch System.
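The describing-function step can be sketched numerically. Assuming a quadratic damping force c2*v*|v| (a common slosh damping model, not necessarily the exact form used in the paper) and sinusoidal motion, projecting the force onto the fundamental yields an amplitude-dependent equivalent linear damping:

```python
# Sketch: equivalent linear damping of a quadratic damper via a
# sinusoidal-input describing function.
import numpy as np

def equivalent_linear_damping(c2: float, amp: float, omega: float) -> float:
    t = np.linspace(0.0, 2.0 * np.pi / omega, 10_000)
    v = amp * omega * np.cos(omega * t)  # velocity for x = amp*sin(omega*t)
    force = c2 * v * np.abs(v)           # quadratic damping force
    b1 = 2.0 / t[-1] * np.trapz(force * np.cos(omega * t), t)  # fundamental component
    return b1 / (amp * omega)            # equivalent viscous coefficient

print(equivalent_linear_damping(c2=1.0, amp=0.1, omega=2.0))
print(8.0 / (3.0 * np.pi) * 1.0 * 0.1 * 2.0)  # closed-form check: (8/3pi)*c2*amp*omega
```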
Laschober, Tanja C; de Tormes Eby, Lillian Turner
2013-07-01
The main goals of the current study were to investigate whether there are linear or curvilinear relationships between substance use disorder counselors' job performance and actual turnover after 1 year utilizing four indicators of job performance and three turnover statuses (voluntary, involuntary, and no turnover as the reference group). Using longitudinal data from 440 matched counselor-clinical supervisor dyads, results indicate that overall, counselors with lower job performance are more likely to turn over voluntarily and involuntarily than not to turn over. Further, one of the job performance measures shows a significant curvilinear effect. We conclude that the negative consequences often assumed to be "caused" by counselor turnover may be overstated because those who leave both voluntarily and involuntarily demonstrate generally lower performance than those who remain employed at their treatment program.
Muleme, James; Kankya, Clovice; Ssempebwa, John C; Mazeri, Stella; Muwonge, Adrian
2017-01-01
Knowledge, attitude, and practice (KAP) studies guide the implementation of public health interventions (PHIs), and they are important tools for political persuasion. The design and implementation of PHIs assumes a linear KAP relationship, i.e., an awareness campaign results in the desirable societal behavioral change. However, there is no robust framework for testing this relationship before and after PHIs. Here, we use qualitative and quantitative data on pesticide usage to test this linear relationship, identify associated context specific factors as well as assemble a framework that could be used to guide and evaluate PHIs. We used data from a cross-sectional mixed methods study on pesticide usage. Quantitative data were collected using a structured questionnaire from 167 households representing 1,002 individuals. Qualitative data were collected from key informants and focus group discussions. Quantitative and qualitative data analysis was done in R 3.2.0 as well as qualitative thematic analysis, respectively. Our framework shows that a KAP linear relationship only existed for households with a low knowledge score, suggesting that an awareness campaign would only be effective for ~37% of the households. Context specific socioeconomic factors explain why this relationship does not hold for households with high knowledge scores. These findings are essential for developing targeted cost-effective and sustainable interventions on pesticide usage and other PHIs with context specific modifications.
Mino, H
2007-01-01
To estimate the parameters, i.e., the impulse response (IR) functions, of linear time-invariant systems generating intensity processes in the Shot-Noise-Driven Doubly Stochastic Poisson Process (SND-DSPP), in which multivariate presynaptic spike trains and postsynaptic spike trains can be assumed to be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
System identification of analytical models of damped structures
NASA Technical Reports Server (NTRS)
Fuh, J.-S.; Chen, S.-Y.; Berman, A.
1984-01-01
A procedure is presented for identifying a linear nonproportionally damped system. The system damping is assumed to be representable by a real symmetric matrix. Analytical mass, stiffness and damping matrices which constitute an approximate representation of the system are assumed to be available. Given also are an incomplete set of measured natural frequencies, damping ratios and complex mode shapes of the structure, normally obtained from test data. A method is developed to find the smallest changes in the analytical model so that the improved model can exactly predict the measured modal parameters. The present method uses the orthogonality relationship to improve the mass and damping matrices and the dynamic equation to find the improved stiffness matrix.
Models of subjective response to in-flight motion data
NASA Technical Reports Server (NTRS)
Rudrapatna, A. N.; Jacobson, I. D.
1973-01-01
Mathematical relationships between subjective comfort and environmental variables in an air transportation system are investigated. As a first step in model building, only the motion variables are incorporated and sensitivities are obtained using stepwise multiple regression analysis. The data for these models have been collected from commercial passenger flights. Two models are considered. In the first, subjective comfort is assumed to depend on rms values of the six-degrees-of-freedom accelerations. The second assumes a Rustenburg type human response function in obtaining frequency weighted rms accelerations, which are used in a linear model. The form of the human response function is examined and the results yield a human response weighting function for different degrees of freedom.
Conceptual Analysis of System Average Water Stability
NASA Astrophysics Data System (ADS)
Zhang, H.
2016-12-01
Averaging over time and area, the precipitation in an ecosystem (SAP - system average precipitation) depends on the average surface temperature and relative humidity (RH) in the system if uniform convection is assumed. RH depends on the evapotranspiration of the system (SAE - system average evapotranspiration). There is a non-linear relationship between SAP and SAE. Studying this relationship can lead to a mechanistic understanding of the ecosystem health status and trend under different setups. If SAP is higher than SAE, the system will have a water runoff which flows out through rivers. If SAP is lower than SAE, irrigation is needed to maintain the vegetation status. This presentation will give a conceptual analysis of the stability in this relationship under different assumed areas, water or forest coverages, elevations and latitudes. This analysis shows that desert is a stable system. Water circulation in basins is also stabilized at a specific SAP based on the basin profile. It further shows that deforestation will reduce SAP, and can flip the system to an irrigation-required status. If no irrigation is provided, the system will automatically reduce to its stable point - desert, which is extremely difficult to turn around.
Hysteresis between coral reef calcification and the seawater aragonite saturation state
NASA Astrophysics Data System (ADS)
McMahon, Ashly; Santos, Isaac R.; Cyronak, Tyler; Eyre, Bradley D.
2013-09-01
Predictions of how ocean acidification (OA) will affect coral reefs assume a linear functional relationship between the ambient seawater aragonite saturation state (Ωa) and net ecosystem calcification (NEC). We quantified NEC in a healthy coral reef lagoon in the Great Barrier Reef during different times of the day. Our observations revealed a diel hysteresis pattern in the NEC versus Ωa relationship, with peak NEC rates occurring before the Ωa peak and relatively steady nighttime NEC in spite of variable Ωa. Net ecosystem production had stronger correlations with NEC than light, temperature, nutrients, pH, and Ωa. The observed hysteresis may represent an overlooked challenge for predicting the effects of OA on coral reefs. If widespread, the hysteresis could prevent the use of a linear extrapolation to determine critical Ωa threshold levels required to shift coral reefs from a net calcifying to a net dissolving state.
Synthesis procedure for linear time-varying feedback systems with large parameter ignorance
NASA Technical Reports Server (NTRS)
Mcdonald, T. E., Jr.
1972-01-01
The development of synthesis procedures for linear time-varying feedback systems is considered. It is assumed that the plant can be described by linear differential equations with time-varying coefficients; however, ignorance is associated with the plant in that only the range of the time-variations are known instead of exact functional relationships. As a result of this plant ignorance the use of time-varying compensation is ineffective so that only time-invariant compensation is employed. In addition, there is a noise source at the plant output which feeds noise through the feedback elements to the plant input. Because of this noise source the gain of the feedback elements must be as small as possible. No attempt is made to develop a stability criterion for time-varying systems in this work.
NASA Technical Reports Server (NTRS)
Suteau, A. M.; Whitcomb, J. H.
1977-01-01
A relationship was found between the seismic moment, M sub O, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming that the end of the coda is composed of backscattering surface waves due to lateral heterogeneity in the shallow crust, following Aki. Using the linear relationship between the logarithm of M sub O and the local Richter magnitude M sub L, a relationship between M sub L and t was found. This relationship was used to calculate a coda magnitude M sub C, which was compared to M sub L for Southern California earthquakes which occurred during the period from 1972 to 1975.
The Relationship between Religious Coping and Self-Care Behaviors in Iranian Medical Students.
Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Goudarzian, Amir Hossein; Allen, Kelly A; Jamali, Saman; Heydari Gorji, Mohammad Ali
2017-12-01
In recent years, researchers have identified that coping strategies are an important contributor to an individual's life satisfaction and ability to manage stress. The positive relationship of religious coping, specifically, with physical and mental health has also been identified in some studies. Spirituality and religion have been discussed rigorously in research, but very few studies exist on religious coping. The aim of this study was to determine the relationship between religious coping methods (i.e., positive and negative religious coping) and self-care behaviors in Iranian medical students. This study used a cross-sectional design of 335 randomly selected students from Mazandaran University of Medical Sciences, Iran. A data collection tool comprised of the standard questionnaire of religious coping methods and a questionnaire of self-care behaviors assessment was utilized. Data were analyzed using a two-sample t test assuming equal variances. Adjusted linear regression was used to evaluate the independent association of religious coping with self-care. The adjusted linear regression model indicated an independent significant association between positive (b = 4.616, 95% CI 4.234-4.999) and negative (b = -3.726, 95% CI -4.311 to -3.141) religious coping with self-care behaviors. Findings showed a linear relationship between religious coping and self-care behaviors. Further research with larger sample sizes in diverse populations is recommended.
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
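The building block of IDLS (as described, not the authors' code) is a per-pixel ridge regression: the central macro-pixel is expressed as a linear combination of its neighbors' macro-pixels, and the coefficients form the local-structure feature. A toy sketch:

```python
# Toy sketch: ridge-regression coefficients representing a central patch
# as a linear combination of its neighboring patches.
import numpy as np

def local_structure_coeffs(center, neighbors, lam=0.1):
    """center: (d,) flattened patch; neighbors: (k, d) flattened patches."""
    a = neighbors @ neighbors.T + lam * np.eye(neighbors.shape[0])
    b = neighbors @ center
    return np.linalg.solve(a, b)   # one coefficient per neighbor

rng = np.random.default_rng(8)
center = rng.normal(size=9)           # 3x3 macro-pixel, flattened
neighbors = rng.normal(size=(8, 9))   # its 8 neighboring macro-pixels
print(local_structure_coeffs(center, neighbors))
```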
A Note on Multigrid Theory for Non-nested Grids and/or Quadrature
NASA Technical Reports Server (NTRS)
Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.
1996-01-01
We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
under the heading of linear models or linear statistical models. We have not used this material in this report. Assuming catastrophic failure when... assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF... The failure times (device 1: T1, device 2: T2, ..., device n: Tn) are easily analyzed by simple linear regression, since we have assumed a log normal/Arrhenius activation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their γ and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not interfere measurably with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a "background" exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear γ-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in γ and which may or may not assume a threshold are compatible with the data. The "best" fit, that is, the one with the smallest χ² and largest tail probability, is with a "linear γ:linear neutron" model which postulates a γ threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.
Kendal, W S
2000-04-01
To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
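Since the derived distribution of lethal lesions is Poisson with mean αD + βD², clonogenic survival is just the zero-count probability. A direct numeric sketch (parameter values assumed for illustration):

```python
# Sketch: linear-quadratic survival as the Poisson zero-class probability,
# S(D) = exp(-(alpha*D + beta*D**2)). Parameter values are illustrative.
import numpy as np

alpha, beta = 0.3, 0.03           # assumed constants, Gy^-1 and Gy^-2
dose = np.linspace(0.0, 10.0, 6)  # Gy
survival = np.exp(-(alpha * dose + beta * dose**2))
for d, s in zip(dose, survival):
    print(f"D = {d:4.1f} Gy  S = {s:.4f}")
```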
On the kinetics of anaerobic power
2012-01-01
Background This study investigated two different mathematical models for the kinetics of anaerobic power. Model 1 assumes that the work power is linear with the work rate, while Model 2 assumes a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. In order to test these models, a cross country skier ran with poles on a treadmill at different exercise intensities. The aerobic power, based on the measured oxygen uptake, was used as input to the models, whereas the simulated blood lactate concentration was compared with experimental results. Thereafter, the metabolic rate from phosphocreatine break down was calculated theoretically. Finally, the models were used to compare phosphocreatine break down during continuous and interval exercises. Results Good similarity was found between experimental and simulated blood lactate concentration during steady state exercise intensities. The measured blood lactate concentrations were lower than simulated for intensities above the lactate threshold, but higher than simulated during recovery after high intensity exercise when the simulated lactate concentration was averaged over the whole lactate space. This fit was improved when the simulated lactate concentration was separated into two compartments; muscles + internal organs and blood. Model 2 gave a better behavior of alactic energy than Model 1 when compared against invasive measurements presented in the literature. During continuous exercise, Model 2 showed that the alactic energy storage decreased with time, whereas Model 1 showed a minimum value when steady state aerobic conditions were achieved. During interval exercise the two models showed similar patterns of alactic energy. Conclusions The current study provides useful insight on the kinetics of anaerobic power. Overall, our data indicate that blood lactate levels can be accurately modeled during steady state, and suggest a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power.
NASA Astrophysics Data System (ADS)
Brandt, Jørgen; Silver, Jeremy David; Heile Christensen, Jesper; Skou Andersen, Mikael; Geels, Camilla; Gross, Allan; Buus Hansen, Ayoe; Mantzius Hansen, Kaj; Brandt Hedegaard, Gitte; Ambelas Skjøth, Carsten
2010-05-01
Air pollution has significant negative impacts on human health and well-being, which entail substantial economic consequences. We have developed an integrated model system, EVA (External Valuation of Air pollution), to assess health-related economic externalities of air pollution resulting from specific emission sources/sectors. The EVA system was initially developed to assess externalities from power production, but in this study it is extended to evaluate costs at the national level. The EVA system integrates a regional-scale atmospheric chemistry transport model (DEHM), address-level population data, exposure-response functions and monetary values applicable for Danish/European conditions. Traditionally, systems that assess economic costs of health impacts from air pollution assume linear approximations in the source-receptor relationships. However, atmospheric chemistry is non-linear and therefore the uncertainty involved in the linear assumption can be large. The EVA system has been developed to take into account the non-linear processes by using a comprehensive, state-of-the-art chemical transport model when calculating how specific changes to emissions affect air pollution levels and the subsequent impacts on human health and cost. Furthermore, we present a new "tagging" method, developed to examine how specific emission sources influence air pollution levels without assuming linearity of the non-linear behaviour of atmospheric chemistry. This method is more precise than the traditional approach based on taking the difference between two concentration fields. Using the EVA system, we have estimated the total external costs from the main emission sectors in Denmark, representing the ten major SNAP codes. Finally, we assess the impacts and external costs of emissions from international ship traffic around Denmark, since there is a high volume of ship traffic in the region.
Delbaere, Marjorie; Smith, Malcolm C
2014-01-01
This research examined differences between novices and experts in processing analogical metaphors appearing in prescription drug advertisements. In contrast to previous studies on knowledge transfer, no evidence of the superiority of experts in processing metaphors was found. The results from an experiment suggest that expert consumers were more likely to process a metaphor in an ad literally than novices. Our findings point to a condition in which the expertise effect with processing analogies is not the linear relationship assumed in previous studies.
2013-09-01
…model, they are, for all intents and purposes, simply unit-less linear weights. Although this equation is technically valid for a Lambertian… modeled as a single flat facet, the same model cannot be assumed equally valid for the body. The body, after all, is a complex, three-dimensional… facet (termed the "body") and the solar tracking parts of the object as another facet (termed the "solar panels"). This comprises the two-facet model
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
Grabell, Adam S; Li, Yanwei; Barker, Jeff W; Wakschlag, Lauren S; Huppert, Theodore J; Perlman, Susan B
2018-01-01
Burgeoning interest in early childhood irritability has recently turned toward neuroimaging techniques to better understand normal versus abnormal irritability using dimensional methods. Current accounts largely assume a linear relationship between poor frustration management, an expression of irritability, and its underlying neural circuitry. However, the relationship between these constructs may not be linear (i.e., operate differently at varying points across the irritability spectrum), with implications for how early atypical irritability is identified and treated. Our goal was to examine how the association between frustration-related lateral prefrontal cortex (LPFC) activation and irritability differs across the dimensional spectrum of irritability by testing for non-linear associations. Children (N = 92; ages 3-7) ranging from virtually no irritability to the upper end of the clinical range completed a frustration induction task while we recorded LPFC hemoglobin levels using fNIRS. Children self-rated their emotions during the task and parents rated their child's level of irritability. Whereas a linear model showed no relationship between frustration-related LPFC activation and irritability, a quadratic model revealed frustration-related LPFC activation increased as parent-reported irritability scores increased within the normative range of irritability but decreased with increasing irritability in the severe range, with an apex at the 91st percentile. Complementarily, we found children's self-ratings of emotion during frustration related to concurrent LPFC activation as an inverted U function, such that children who reported mild distress had greater activation than peers reporting no or high distress. Results suggest children with relatively higher irritability who are unimpaired may possess well-developed LPFC support, a mechanism that drops out in the severe end of the irritability dimension. Findings suggest novel avenues for understanding the heterogeneity of early irritability and its clinical sequelae.
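The linear-versus-quadratic comparison at the heart of this design is simple to reproduce. The sketch below uses synthetic stand-in data (the study's measurements are not reproduced here) to fit both models with numpy and locate the apex of the quadratic at -b1/(2*b2), analogous to the reported 91st-percentile apex.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data (the study's data are not public here):
irritability = rng.uniform(0.0, 100.0, 92)              # parent-reported percentile scores
true = 0.5 + 0.04 * irritability - 0.00022 * irritability**2
lpfc = true + rng.normal(0.0, 0.2, irritability.size)   # frustration-related LPFC activation

# Linear vs quadratic fit; the apex of the quadratic is at -b1 / (2 * b2).
b_lin = np.polyfit(irritability, lpfc, 1)
b2, b1, b0 = np.polyfit(irritability, lpfc, 2)
apex = -b1 / (2.0 * b2)
print(f"linear slope: {b_lin[0]:.4f}; quadratic apex at irritability = {apex:.1f}")
```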
Continental-Scale View of Bankfull Width Versus Drainage Area Relationship
NASA Astrophysics Data System (ADS)
Wilkerson, G. V.
2012-12-01
While recognizing that there are multiple variables that influence bankfull channel width (Wbf), this study explores the relationship between Wbf and drainage area (Ada) across a range of geologic, terrestrial, climatic, and botanical environments. The study aims to develop a foundational model that will facilitate developing a comprehensive multivariate model for predicting channel width. Data for this study were compiled from independent regional curve studies (i.e., studies in which Wbf vs. Ada relationships are developed). The data represent 1,018 sites that span 12 states in the continental U.S. The channels are alluvial and are such that 1 m ≤ Wbf ≤ 110 m and 0.50 km2 ≤ Ada ≤ 22,000 km2. For developing regional curves, the Wbf vs. Ada relationship is generally assumed to be log-linear. Also, past studies have indicated that the Wbf vs. Ada relationship differs for small basins (i.e., 10 to 100 km2) and large basins due to the effects of vegetation. Linear and nonlinear (i.e., sigmoidal) models were considered for this study. The best model relates ln(Wbf) and ln(Ada) using a three-piece linear model (Figure 1). The value of dWbf/dAda is significantly greater (p < 0.001) for mid-size basins (5 km2 ≤ Ada ≤ 350 km2) than for either small or large basins. The noted change in dWbf/dAda is likely a response to vegetation. Also, the change in dWbf/dAda is so abrupt that the three-piece linear model fits the data better than any of the sigmoidal functions explored in this study. For every model evaluated in this study, the residuals were bi-modal (Figure 2). For the residuals to begin converging on a normal distribution, at least one other factor (probably precipitation) needs to be included in the model.
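A continuous three-piece linear model of the kind described can be fitted directly with scipy; the sketch below generates synthetic log-log data with breakpoints near 5 and 350 km2 (assumed here purely for illustration) and recovers the breakpoints and segment slopes.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_piece(x, x1, x2, a, s1, s2, s3):
    """Continuous three-segment linear model in log-log space (breakpoints x1 < x2)."""
    y1 = a + s1 * x
    y2 = a + s1 * x1 + s2 * (x - x1)
    y3 = a + s1 * x1 + s2 * (x2 - x1) + s3 * (x - x2)
    return np.where(x < x1, y1, np.where(x < x2, y2, y3))

# Synthetic example standing in for the 1,018-site data set:
rng = np.random.default_rng(2)
lnA = rng.uniform(np.log(0.5), np.log(22_000.0), 500)
lnW = three_piece(lnA, np.log(5.0), np.log(350.0), 1.0, 0.2, 0.5, 0.25)
lnW += rng.normal(0.0, 0.15, lnA.size)

p0 = [np.log(5.0), np.log(350.0), 1.0, 0.3, 0.3, 0.3]  # initial guesses
popt, _ = curve_fit(three_piece, lnA, lnW, p0=p0)
print("breakpoints (km2):", np.exp(popt[0]), np.exp(popt[1]))
print("segment slopes:", popt[3:])
```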
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
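The threshold trade-off these two reports analyze can be illustrated with a stripped-down Poisson counting model. The sketch below ignores scintillation (a log-normal fade would be averaged over in the full treatment) and uses assumed mean counts for the "one" and "zero" slots; the bit error probability is the average of the miss and false-alarm probabilities, minimized over integer thresholds.

```python
import numpy as np
from scipy.stats import poisson

# Simplified sketch: on-off keying with Poisson photon counts, no scintillation.
# n_s = mean count for a "one" (signal + background), n_b = background for a "zero".
n_s, n_b = 50.0, 5.0   # assumed mean counts, for illustration only

def bit_error_prob(threshold):
    p_miss = poisson.cdf(threshold - 1, n_s)   # a "one" counted below threshold
    p_false = poisson.sf(threshold - 1, n_b)   # a "zero" counted at/above threshold
    return 0.5 * (p_miss + p_false)            # equiprobable bits

thresholds = np.arange(1, 50)
errs = [bit_error_prob(t) for t in thresholds]
best = thresholds[int(np.argmin(errs))]
print(f"optimum threshold: {best} counts, BER = {min(errs):.2e}")
```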
Torres-Ruiz, José M; Sperry, John S; Fernández, José E
2012-10-01
Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F_0) of zero. While linearity is typically the case, there is often a non-zero F_0 that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F_0 for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K_ref as the slope of a four-point F vs P/L relationship. The K_ref was compared with three more rapid ways of estimating K. When F_0 was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K_ref was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F_0 was taken into account, whether it was measured or estimated, more accurate K values (relative to K_ref) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F_0 for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
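The slope-versus-ratio distinction is the whole point here: K_ref is the slope of the F vs P/L line, whereas dividing a single measured flow by its gradient folds the intercept F_0/(P/L) into the estimate. A minimal sketch with made-up units and an assumed F_0 (chosen negative here so that ignoring it under-estimates K, the direction reported in the study):

```python
import numpy as np

# Four-point flow (F) vs pressure-gradient (P/L) data with a non-zero intercept
# F_0 (passive uptake by the sample). Units and values are made up for the sketch.
grad = np.array([1.0, 2.0, 4.0, 6.0])     # applied pressure gradients
K_true, F0 = 1.5, -1.0                    # assumed "true" slope and intercept
F = F0 + K_true * grad                    # measured flows

K_ref = np.polyfit(grad, F, 1)[0]         # slope of the four-point line
K_naive = F[-1] / grad[-1]                # single-point estimate assuming F_0 = 0
print(f"K_ref = {K_ref:.2f}, naive K = {K_naive:.2f} "
      f"({100 * (K_naive - K_ref) / K_ref:+.0f}% bias)")
```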
Non-linear assessment and deficiency of linear relationship for healthcare industry
NASA Astrophysics Data System (ADS)
Nordin, N.; Abdullah, M. M. A. B.; Razak, R. C.
2017-09-01
This paper presents the development of a non-linear service satisfaction model that assumes patients are not necessarily satisfied or dissatisfied with good or poor service delivery. Accordingly, compliment and complaint assessments are considered simultaneously. Non-linear service satisfaction instruments, called Kano-Q and Kano-SS, are developed based on the Kano model and the Theory of Quality Attributes (TQA) to translate unexpected, hidden and unspoken patient satisfaction and dissatisfaction into service quality attributes. A new Kano-Q and Kano-SS algorithm for quality attribute assessment is developed based on satisfaction impact theories, and the instruments were found to pass reliability and validity tests. The results were also validated using the standard Kano model procedure before the Kano model and Quality Function Deployment (QFD) were integrated for patient attribute and service attribute prioritization. An algorithm for the Kano-QFD matrix operation is developed to compose the prioritized complaint and compliment indexes. Finally, the results of the prioritized service attributes are mapped to service delivery categories to determine the most prioritized service delivery that needs to be improved first by the healthcare service provider.
Moliner, Carolina; Martínez-Tur, Vicente; Peiró, José M; Ramos, José; Cropanzano, Russell
2013-02-01
This article assesses the links between non-professional employees' perceptions of reciprocity in their relationships with their supervisors and the positive and negative sides of employees' well-being at work: burnout and engagement. Two hypotheses were explored. First, the fairness hypothesis assumes a curvilinear relationship where balanced reciprocity (when the person perceives that there is equilibrium between his/her efforts and the benefits he/she receives) presents the highest level of well-being. Second, the self-interest hypothesis proposes a linear pattern where over-benefitted situations for employees (when the person perceives that he/she is receiving more than he/she deserves) increase well-being. One study with two independent samples was conducted. The participants were 349 employees in 59 hotels (sample 1) and 690 employees in 89 centres providing attention to people with mental disabilities (sample 2). Linear and curvilinear regression models supported the self-interest hypothesis for the links from reciprocity to burnout and engagement. We conclude with theoretical implications and opportunities for future research. Copyright © 2012 John Wiley & Sons, Ltd.
Saturation current and collection efficiency for ionization chambers in pulsed beams.
DeBlois, F; Zankowski, C; Podgorsak, E B
2000-05-01
Saturation currents and collection efficiencies in ionization chambers exposed to pulsed megavoltage photon and electron beams are determined assuming a linear relationship between 1/I and 1/V in the extreme near-saturation region, with I and V the chamber current and polarizing voltage, respectively. Careful measurements of chamber current against polarizing voltage in the extreme near-saturation region reveal a current rising faster than that predicted by the linear relationship. This excess current, combined with the conventional "two-voltage" technique for determination of collection efficiency, may result in an up to 0.7% overestimate of the saturation current for standard radiation field sizes of 10 x 10 cm2. The measured excess current is attributed to charge multiplication in the chamber air volume and to radiation-induced conductivity in the stem of the chamber (stem effect). These effects may be accounted for by an exponential term used in conjunction with Boag's equation for collection efficiency in pulsed beams. The semiempirical model follows the experimental data well and accounts for charge recombination as well as for charge multiplication and the chamber stem effect.
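The baseline assumption, 1/I = 1/I_sat + a/V, makes I_sat the reciprocal of the intercept of a straight-line fit of 1/I against 1/V. The sketch below (with invented currents and voltages) shows that extrapolation; the paper's point is that real chambers show excess current beyond this line, which their exponential correction term absorbs.

```python
import numpy as np

# Sketch of the near-saturation extrapolation: assume 1/I = 1/I_sat + a / V,
# fit measured currents at several polarizing voltages, and read I_sat off
# the intercept. Values below are made up for illustration.
V = np.array([100.0, 150.0, 200.0, 300.0, 400.0])   # polarizing voltages (V)
I_sat_true, a = 10.00, 0.5                          # assumed underlying parameters
I = 1.0 / (1.0 / I_sat_true + a / V)                # chamber currents (nA)

slope, intercept = np.polyfit(1.0 / V, 1.0 / I, 1)
I_sat = 1.0 / intercept
print(f"extrapolated saturation current: {I_sat:.3f} nA")
print(f"collection efficiency at 300 V: {I[3] / I_sat:.4f}")
```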
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
Learning quadratic receptive fields from neural responses to natural stimuli.
Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper
2013-07-01
Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
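Spike-triggered covariance, one of the reviewed approaches, can be demonstrated in a toy setting: for a model neuron whose rate depends on the squared projection of a white Gaussian stimulus onto a hidden filter, the filter reappears as the dominant eigenvector of the difference between the spike-triggered and prior covariance matrices. The sketch below is illustrative, not code from the review.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy quadratic (energy) model neuron: spikes when the squared projection of
# the stimulus onto a hidden filter is large. STC recovers the filter as an
# eigenvector of the spike-triggered covariance difference.
D, T = 20, 100_000
w = np.zeros(D); w[5] = 1.0                     # hidden quadratic filter
X = rng.normal(size=(T, D))                     # white Gaussian stimulus
rate = (X @ w) ** 2                             # quadratic dependence, no linear part
spikes = rng.poisson(rate / rate.mean())        # Poisson spiking

C_prior = np.cov(X.T)
Xs = np.repeat(X, spikes, axis=0)               # spike-triggered ensemble
C_spike = np.cov(Xs.T)
evals, evecs = np.linalg.eigh(C_spike - C_prior)
recovered = evecs[:, np.argmax(np.abs(evals))]  # most-changed variance direction
print(f"|correlation with true filter| = {abs(recovered @ w):.3f}")
```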
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, assuming either that (1) the components can fail simultaneously, or that (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concepts of failure sensitive observers with the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
This paper investigates the quantitative relationship of ionizing radiation to the occurrence of posterior lenticular opacities among the survivors of the atomic bombings of Hiroshima and Nagasaki suggested by the DS86 dosimetry system. DS86 doses are available for 1983 (93.4%) of the 2124 atomic bomb survivors analyzed in 1982. The DS86 kerma neutron component for Hiroshima survivors is much smaller than its comparable T65DR component, but still 4.2-fold higher (0.38 Gy at 6 Gy) than that in Nagasaki (0.09 Gy at 6 Gy). Thus, if the eye is especially sensitive to neutrons, there may yet be some useful information on their effects, particularly in Hiroshima. The dose-response relationship has been evaluated as a function of the separately estimated gamma-ray and neutron doses. Among several different dose-response models without and with two thresholds, we have selected as the best model the one with the smallest chi-square or the largest log likelihood value associated with the goodness of fit. The best fit is a linear gamma-linear neutron relationship which assumes different thresholds for the two types of radiation. Both gamma and neutron regression coefficients for the best fitting model are positive and highly significant for the estimated DS86 eye organ dose.
Predicting rates of inbreeding in populations undergoing selection.
Woolliams, J A; Bijma, P
2000-01-01
Tractable forms of predicting rates of inbreeding (DeltaF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. DeltaF was shown to be approximately (1/4)(1 - omega) times the expected sum of squared lifetime contributions, where omega is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express DeltaF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables then the expected long-term contribution could be substituted for the observed, providing 1/4 (since omega = 0) was increased to 1/2. Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting DeltaF with sib indices in discrete generations since previously published solutions had proved complex. PMID:10747074
Advanced Twisted Pair Cables for Distributed Local Area Networks in Intelligent Structure Systems
NASA Astrophysics Data System (ADS)
Semenov, Andrey
2018-03-01
The possibility of a significant increase in the length of cable communication channels of local area networks of automation and engineering support systems of buildings is shown for the case where they are implemented on balanced twisted-pair cables. Assuming a direct connection scheme and an effective speed of 100 Mbit/s, analytical relationships are obtained for calculating the maximum communication distance. The necessity of using twisted-pair cables with a U/UTP structure and crosstalk parameters at the Category 5e level in the linear part of such systems is substantiated.
Changes in Cirrus Cloudiness and their Relationship to Contrails
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Ayers, J. Kirk; Palikonda, Rabindra; Doelling, David R.; Schumann, Ulrich; Gierens, Klaus
2001-01-01
Condensation trails, or contrails, formed in the wake of high-altitude aircraft have long been suspected of causing the formation of additional cirrus cloud cover. More cirrus is possible because 10 - 20% of the atmosphere at typical commercial flight altitudes is clear but ice-saturated. Since they can affect the radiation budget like natural cirrus clouds of equivalent optical depth and microphysical properties, contrail-generated cirrus clouds are another potential source of anthropogenic influence on climate. Initial estimates of contrail radiative forcing (CRF) were based on linear contrail coverage and optical depths derived from a limited number of satellite observations. Assuming that such estimates are accurate, they can be considered as the minimum possible CRF because contrails often develop into cirrus clouds unrecognizable as contrails. These anthropogenic cirrus are not likely to be identified as contrails from satellites and would, therefore, not contribute to estimates of contrail coverage. The mean lifetime and coverage of spreading contrails relative to linear contrails are needed to fully assess the climatic effect of contrails, but are difficult to measure directly. However, the maximum possible impact can be estimated using the relative trends in cirrus coverage over regions with and without air traffic. In this paper, the upper bound of CRF is derived by first computing the change in cirrus coverage over areas with heavy air traffic relative to that over the remainder of the globe, assuming that the difference between the two trends is due solely to contrails. This difference is normalized to the corresponding linear contrail coverage for the same regions to obtain an average spreading factor. The maximum contrail-cirrus coverage, estimated as the product of the spreading factor and the linear contrail coverage, is then used in the radiative model to estimate the maximum potential CRF for current air traffic.
Interpreting spectral unmixing coefficients: From spectral weights to mass fractions
NASA Astrophysics Data System (ADS)
Grumpe, Arne; Mengewein, Natascha; Rommel, Daniela; Mall, Urs; Wöhler, Christian
2018-01-01
It is well known that many common planetary minerals exhibit prominent absorption features. Consequently, the analysis of spectral reflectance measurements has become a major tool of remote sensing. Quantifying the mineral abundances, however, is not a trivial task. The interaction between the incident light rays and particulate surfaces, e.g., the lunar regolith, leads to a non-linear relationship between the reflectance spectra of the pure minerals, the so-called "endmembers", and the surface's reflectance spectrum. It is, however, possible to transform the non-linear reflectance mixture into a linear mixture of single-scattering albedos of the Hapke model. The abundances obtained by inverting the linear single-scattering albedo mixture may be interpreted as volume fractions which are weighted by the endmember's extinction coefficient. Commonly, identical extinction coefficients are assumed throughout all endmembers and the obtained volume fractions are converted to mass fractions using either measured or assumed densities. In theory, the proposed method may cover different grain sizes if each grain size range of a mineral is treated as a distinct endmember. Here, we present a method to transform the mixing coefficients to mass fractions for arbitrary combinations of extinction coefficients and densities. The required parameters are computed from reflectance measurements of well-defined endmember mixtures. Consequently, additional measurements, e.g., the endmember density, are no longer required. We evaluate the method based on laboratory measurements and various results presented in the literature, respectively. It is shown that the procedure transforms the mixing coefficients to mass fractions yielding an accuracy comparable to carefully calibrated laboratory measurements without additional knowledge. For our laboratory measurements, the square root of the mean squared error is less than 4.82 wt%. In addition, the method corrects for systematic effects originating from mixtures of endmembers showing a highly varying albedo, e.g., plagioclase and pyroxene.
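A common textbook form of this conversion (a sketch of the standard Hapke-style relation, not the paper's calibration procedure) treats the albedo mixing weight of endmember i as proportional to its mass fraction times its cross-section per unit mass, f_i ∝ M_i Q_i/(rho_i d_i), and inverts that. All parameter values below are assumed for illustration:

```python
import numpy as np

# Convert single-scattering-albedo mixing weights f_i into mass fractions.
# Under Hapke-style linear albedo mixing, f_i ∝ M_i * Q_i / (rho_i * d_i), so
# M_i ∝ f_i * rho_i * d_i / Q_i. All numbers below are assumed, for illustration.
f = np.array([0.6, 0.4])        # fitted mixing coefficients (plagioclase, pyroxene)
rho = np.array([2.69, 3.30])    # solid densities (g/cm3)
d = np.array([50e-4, 50e-4])    # mean grain diameters (cm)
Q = np.array([1.0, 1.0])        # extinction efficiencies (often ~1 for large grains)

M = f * rho * d / Q
mass_fraction = M / M.sum()
print("mass fractions:", np.round(100 * mass_fraction, 1), "wt%")
```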
Variable-permittivity linear inverse problem for the H_z-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H_z-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H_z-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
A Position Tracking System Using MARG Sensors
2007-12-01
…assumed to be zero. It was further assumed that the drift was linear. Thus, the linear drift was removed from the computed velocity to achieve more… gait cycle was able to be analyzed. One of these concepts is the theory of an American prosthesis by A. A. Mark, in which he divided the gait in…
Nonlinear Cross-Bridge Elasticity and Post-Power-Stroke Events in Fast Skeletal Muscle Actomyosin
Persson, Malin; Bengtsson, Elina; ten Siethoff, Lasse; Månsson, Alf
2013-01-01
Generation of force and movement by actomyosin cross-bridges is the molecular basis of muscle contraction, but generally accepted ideas about cross-bridge properties have recently been questioned. Of the utmost significance, evidence for nonlinear cross-bridge elasticity has been presented. We here investigate how this and other newly discovered or postulated phenomena would modify cross-bridge operation, with focus on post-power-stroke events. First, as an experimental basis, we present evidence for a hyperbolic [MgATP]-velocity relationship of heavy-meromyosin-propelled actin filaments in the in vitro motility assay using fast rabbit skeletal muscle myosin (28–29°C). As the hyperbolic [MgATP]-velocity relationship was not consistent with interhead cooperativity, we developed a cross-bridge model with independent myosin heads and strain-dependent interstate transition rates. The model, implemented with inclusion of MgATP-independent detachment from the rigor state, as suggested by previous single-molecule mechanics experiments, accounts well for the [MgATP]-velocity relationship if nonlinear cross-bridge elasticity is assumed, but not if linear cross-bridge elasticity is assumed. In addition, a better fit is obtained with load-independent than with load-dependent MgATP-induced detachment rate. We discuss our results in relation to previous data showing a nonhyperbolic [MgATP]-velocity relationship when actin filaments are propelled by myosin subfragment 1 or full-length myosin. We also consider the implications of our results for characterization of the cross-bridge elasticity in the filament lattice of muscle. PMID:24138863
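The hyperbolic [MgATP]-velocity relationship reported here is the familiar saturating form v = Vmax[ATP]/(Km + [ATP]), and fitting it is a two-parameter exercise. The sketch below uses invented motility-assay numbers, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(atp, v_max, km):
    """Michaelis-Menten-like hyperbola for sliding velocity vs [MgATP]."""
    return v_max * atp / (km + atp)

# Made-up motility data (um/s vs mM) standing in for the assay measurements:
atp = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])
v = np.array([1.2, 2.1, 3.8, 5.2, 6.6, 7.6, 8.4])

(v_max, km), _ = curve_fit(hyperbolic, atp, v, p0=[9.0, 0.5])
print(f"Vmax = {v_max:.2f} um/s, Km = {km:.3f} mM")
```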
Warped linear mixed models for the genetic analysis of transformed phenotypes
Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D.; Stegle, Oliver
2014-01-01
Linear mixed models (LMMs) are a powerful and established tool for studying genotype–phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction. PMID:25234577
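The warped-LMM idea, learning the output transformation from the data rather than fixing it a priori, can be approximated in its simplest form by profiling the likelihood over a parametric Box-Cox family (the paper learns a more flexible monotonic warp jointly with the LMM; this sketch only illustrates the principle on synthetic skewed data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# A right-skewed phenotype: log-normal noise around a genetic signal.
g = rng.normal(size=500)                          # genetic value
y = np.exp(0.5 * g + rng.normal(0.0, 0.5, 500))   # observed phenotype, non-Gaussian

# Simple stand-in for the warped-LMM idea: pick a Box-Cox warp by maximum
# likelihood instead of learning a free-form monotonic transformation.
y_warped, lam = stats.boxcox(y)
print(f"chosen Box-Cox lambda: {lam:.2f} (0 would be a log transform)")
print(f"skewness before: {stats.skew(y):.2f}, after: {stats.skew(y_warped):.2f}")
```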
Comprehensive drought characteristics analysis based on a nonlinear multivariate drought index
NASA Astrophysics Data System (ADS)
Yang, Jie; Chang, Jianxia; Wang, Yimin; Li, Yunyun; Hu, Hui; Chen, Yutong; Huang, Qiang; Yao, Jun
2018-02-01
It is vital to identify drought events and to evaluate multivariate drought characteristics based on a composite drought index for better drought risk assessment and sustainable development of water resources. However, most composite drought indices are constructed by the linear combination, principal component analysis and entropy weight method assuming a linear relationship among different drought indices. In this study, the multidimensional copulas function was applied to construct a nonlinear multivariate drought index (NMDI) to solve the complicated and nonlinear relationship due to its dependence structure and flexibility. The NMDI was constructed by combining meteorological, hydrological, and agricultural variables (precipitation, runoff, and soil moisture) to better reflect the multivariate variables simultaneously. Based on the constructed NMDI and runs theory, drought events for a particular area regarding three drought characteristics: duration, peak, and severity were identified. Finally, multivariate drought risk was analyzed as a tool for providing reliable support in drought decision-making. The results indicate that: (1) multidimensional copulas can effectively solve the complicated and nonlinear relationship among multivariate variables; (2) compared with single and other composite drought indices, the NMDI is slightly more sensitive in capturing recorded drought events; and (3) drought risk shows a spatial variation; out of the five partitions studied, the Jing River Basin as well as the upstream and midstream of the Wei River Basin are characterized by a higher multivariate drought risk. In general, multidimensional copulas provides a reliable way to solve the nonlinear relationship when constructing a comprehensive drought index and evaluating multivariate drought characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thayer, G.R.; Hardie, R.W.; Barrera-Roldan, A.
1993-12-31
This paper reports on the collection and preparation of data (costs and air quality improvement) for the strategic evaluation portion of the Mexico City Air Quality Research Initiative (MARI). Reports written for the Mexico City government by various international organizations were used to identify proposed options along with estimates of cost and emission reductions. Information from appropriate options identified by SCAQMD for Southern California was also used in the analysis. A linear optimization method was used to select a group of options, or a strategy, to be evaluated by decision analysis. However, the reduction of ozone levels is not a linear function of the reduction of hydrocarbon and NOx emissions. Therefore, a more detailed analysis was required for ozone. An equation for a plane on an isopleth calculated with a trajectory model was obtained using two endpoints that bracket the expected total ozone precursor reductions plus the starting concentrations for hydrocarbons and NOx. The relationship between ozone levels and the hydrocarbon and NOx concentrations was assumed to lie on this plane. This relationship was used in the linear optimization program to select the options comprising a strategy.
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on the Hellinger-Reissner variational principle and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.
Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie
2018-02-13
Prolonged exposures can have complex relationships with health outcomes, as timing, duration, and intensity of exposure are all potentially relevant. Summary measures such as cumulative exposure or average intensity of exposure may not fully capture these relationships. We applied penalized and unpenalized distributed lag non-linear models (DLNMs) with flexible exposure-response and lag-response functions in order to examine the association between crystalline silica exposure and mortality from lung cancer and non-malignant respiratory disease in a cohort study of 2,342 California diatomaceous earth workers, followed 1942-2011. We also assessed associations using simple measures of cumulative exposure assuming linear exposure-response and constant lag-response. Measures of association from DLNMs were generally higher than from simpler models. Rate ratios from penalized DLNMs corresponding to average daily exposures of 0.4 mg/m3 during lag years 31-50 prior to the age of observed cases were 1.47 (95% confidence interval (CI) 0.92, 2.35) for lung cancer and 1.80 (95% CI: 1.14, 2.85) for non-malignant respiratory disease. Rate ratios from the simpler models for the same exposure scenario were 1.15 (95% CI: 0.89-1.48) and 1.23 (95% CI: 1.03-1.46) respectively. Longitudinal cohort studies of prolonged exposures and chronic health outcomes should explore methods allowing for flexibility and non-linearities in the exposure-lag-response. © The Author(s) 2018. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Schmidt, Benedikt R
2003-08-01
The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
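The point of C = Np is easy to see with two numbers: if detection probability changes between surveys, raw counts trend even when the population does not. A minimal sketch with made-up values:

```python
# Counts C relate to true abundance N through detection probability p: E[C] = N * p.
# Comparing raw counts across years implicitly assumes p is constant; if p changes,
# the trend in C misstates the trend in N. Numbers here are illustrative only.
N = [100, 100]          # true population in two years (no decline)
p = [0.7, 0.4]          # detection probability drops (e.g., different conditions)
C = [n * q for n, q in zip(N, p)]
print(f"raw counts: {C[0]:.0f} -> {C[1]:.0f} (apparent 43% 'decline')")
print(f"p-adjusted estimates: {C[0]/p[0]:.0f} -> {C[1]/p[1]:.0f} (no decline)")
```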
An assessment of an F2 or N2O4 atmospheric injection from an aborted space shuttle mission
NASA Technical Reports Server (NTRS)
Watson, R. T.; Smokler, P. E.; Demore, W. B.
1978-01-01
Assuming a linear relationship between the stratosphere loading of NOx and the magnitude of the ozone perturbation, the change in ozone expected to result from space shuttle ejection of N2O4 was calculated based on the ozone change that is predicted for the (much greater) NOx input that would accompany large-scale operations of SSTs. Stratospheric fluorine reactions were critically reviewed to evaluate the magnitude of fluorine induced ozone destruction relative to the reduction that would be caused by addition of an equal amount of chlorine. The predicted effect on stratospheric ozone is vanishingly small.
Size and density avalanche scaling near jamming.
Arévalo, Roberto; Ciamarra, Massimo Pica
2014-04-28
The current microscopic picture of plasticity in amorphous materials assumes local failure events to produce displacement fields complying with linear elasticity. Indeed, the flow properties of nonaffine systems, such as foams, emulsions and granular materials close to jamming, that produce a fluctuating displacement field when failing, are still controversial. Here we show, via a thorough numerical investigation of jammed materials, that nonaffinity induces a critical scaling of the flow properties dictated by the distance to the jamming point. We rationalize this critical behavior by introducing a new universal jamming exponent and hyperscaling relationships, and we use these results to describe the volume fraction dependence of the friction coefficient.
Viscoelastic/damage modeling of filament-wound spherical pressure vessels
NASA Technical Reports Server (NTRS)
Hackett, Robert M.; Dozier, Jan D.
1987-01-01
A model of the viscoelastic/damage response of a filament-wound spherical vessel used for long-term pressure containment is developed. The matrix material of the composite system is assumed to be linearly viscoelastic. Internal accumulated damage based upon a quadratic relationship between transverse modulus and maximum circumferential strain is postulated. The resulting nonlinear problem is solved by an iterative routine. The elastic-viscoelastic correspondence is employed to produce, in the Laplace domain, the associated elastic solution for the maximum circumferential strain which is inverted by the method of collocation to yield the time-dependent solution. Results obtained with the model are compared to experimental observations.
Bissell, Paul; Peacock, Marian; Holdsworth, Michelle; Powell, Katie; Wilcox, John; Clonan, Angie
2018-06-19
This study explores the ways in which social networks might shape accounts about food practices. Drawing on insights from the work of Christakis and Fowler, whose claims about the linkages between obesity and social networks have been the subject of vigorous debate in the sociological literature, we present qualitative data from a study of women's accounts of social networks and food practices, conducted in Nottingham, England. We tentatively suggest that whilst social networks in their broadest sense might shape what was perceived to be normal and acceptable in relation to food practices (and provide everyday discursive resources which normalise practice), the relationship between the two is more complex than the linear relationship proposed by Christakis and Fowler. Here, we introduce the idea of assumed shared food narratives (ASFNs), which, we propose, sheds light on motive talk about food practices, and which also provide practical and discursive resources to actors seeking to protect and defend against 'untoward' behaviour, in the context of public health messages around food and eating. We suggest that understanding ASFNs and the ways in which they are embedded in social networks represents a novel way of understanding food and eating practices from a sociological perspective. © 2018 Foundation for the Sociology of Health & Illness.
Iwanaga, Akiko; Sasaki, Akira
2004-04-01
A striking linear dominance relationship for uniparental mitochondrial transmission is known between many mating types of plasmodial slime mold Physarum polycephalum. We herein examine how such hierarchical cytoplasmic inheritance evolves in isogamous organisms with many self-incompatible mating types. We assume that a nuclear locus determines the mating type of gametes and that another nuclear locus controls the digestion of mitochondria DNAs (mtDNAs) of the recipient gamete after fusion. We then examine the coupled genetic dynamics for the evolution of self-incompatible mating types and biased mitochondrial transmission between them. In Physarum, a multiallelic nuclear locus matA controls both the mating type of the gametes and the selective elimination of the mtDNA in the zygotes. We theoretically examine two potential mechanisms that might be responsible for the preferential digestion of mitochondria in the zygote. In the first model, the preferential digestion of mitochondria is assumed to be the outcome of differential expression levels of a suppressor gene carried by each gamete (suppression-power model). In the second model (site-specific nuclease model), the digestion of mtDNAs is assumed to be due to their cleavage by a site-specific nuclease that cuts the mtDNA at unmethylated recognition sites. Also assumed is that the mtDNAs are methylated at the same recognition site prior to the fusion, thereby being protected against the nuclease of the same gamete, and that the suppressor alleles convey information for the recognition sequences of nuclease and methylase. In both models, we found that a linear dominance hierarchy evolves as a consequence of the buildup of a strong linkage disequilibrium between the mating-type locus and the suppressor locus, though it fails to evolve if the recombination rate between the two loci is larger than a threshold. This threshold recombination rate depends on the number of mating types and the degree of fitness reduction in the heteroplasmic zygotes. If the recombination rate is above the threshold, suppressor alleles are equally distributed in each mating type at evolutionary equilibrium. Based on the theoretical results of the site-specific nuclease model, we propose that a nested subsequence structure in the recognition sequence should underlie the linear dominance hierarchy of mitochondrial transmission.
Chen, Chen; Xie, Yuanchang
2016-06-01
Annual Average Daily Traffic (AADT) is often considered as a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor- and major-approach AADTs are considered. Three different dependent variables are modeled, which are total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor- to major-approach AADT has a varying impact on intersection safety and deserves further investigations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Yang; Zhou, Ping; Chu, You-Hua
2013-05-01
We find a linear relationship between the size of a massive star's main-sequence bubble in a molecular environment and the star's initial mass: R_b ≈ 1.22 (M/M_⊙) - 9.16 pc, assuming a constant interclump pressure. Since stars in the mass range of 8 to 25-30 M_⊙ will end their evolution in the red supergiant phase without launching a Wolf-Rayet wind, the main-sequence wind-blown bubbles are mainly responsible for the extent of molecular gas cavities, while the effect of the photoionization is comparatively small. This linear relation can thus be used to infer the masses of the massive star progenitors of supernova remnants (SNRs) that are discovered to evolve in molecular cavities, while few other means are available for inferring the properties of SNR progenitors. We have used this method to estimate the initial masses of the progenitors of eight SNRs: Kes 69, Kes 75, Kes 78, 3C 396, 3C 397, HC 40, Vela, and RX J1713-3946.
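Applied in reverse, the relation gives a progenitor-mass estimate from a measured cavity radius. A one-line sketch (the example radii are arbitrary, not the radii of the eight SNRs listed):

```python
# Invert the linear bubble-size relation R_b ≈ 1.22 (M / M_sun) - 9.16 pc to
# estimate the progenitor mass of an SNR expanding in a molecular cavity.
def progenitor_mass(r_bubble_pc):
    """Initial stellar mass (in M_sun) implied by a cavity radius in pc."""
    return (r_bubble_pc + 9.16) / 1.22

for r in [5.0, 10.0, 20.0]:   # example cavity radii, not from the paper's SNR sample
    print(f"R_b = {r:5.1f} pc -> M ≈ {progenitor_mass(r):.1f} M_sun")
```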
Cancer mortality among coke oven workers.
Redmond, C K
1983-01-01
The OSHA standard for coke oven emissions, which went into effect in January 1977, sets a permissible exposure limit to coke oven emissions of 150 micrograms/m3 benzene-soluble fraction of total particulate matter (BSFTPM). Review of the epidemiologic evidence for the standard indicates an excess relative risk for lung cancer as high as 16-fold in topside coke oven workers with 15 years of exposure or more. There is also evidence for a consistent dose-response relationship in lung cancer mortality when duration and location of employment at the coke ovens are considered. Dose-response models fitted to these same data indicate that, while excess risks may still occur under the OSHA standard, the predicted levels of increased relative risk would be about 30-50% if a linear dose-response model is assumed and 3-7% if a quadratic model is assumed. Lung cancer mortality data for other steelworkers suggest the predicted excess risk has probably been somewhat overestimated, but lack of information on important confounding factors limits further dose-response analysis. PMID:6653539
Neutral winds and electric fields from model studies using reduced ionograms
NASA Technical Reports Server (NTRS)
Baran, D. E.
1974-01-01
A relationship between the vertical component of the ion velocity and electron density profiles derived from reduced ionograms is developed. Methods for determining the horizontal components of the neutral winds and electric fields by using this relationship and making use of the variations of the inclinations and declinations of the earth's magnetic field are presented. The effects that electric fields have on the neutral wind calculations are estimated to be small but not second order. Seasonal and latitudinal variations of the calculated neutral winds are presented. From the calculated neutral winds a new set of neutral pressure gradients is determined. The new pressure gradients are compared with those generated from several static neutral atmospheric models. Sensitivity factors relating the pressure gradients and neutral winds are calculated and these indicate that mode coupling and harmonic generation are important to studies which assume linearized theories.
Microwave inversion of leaf area and inclination angle distributions from backscattered data
NASA Technical Reports Server (NTRS)
Lang, R. H.; Saleh, H. A.
1985-01-01
The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontal-like polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
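The inversion step named here, a first-kind Fredholm equation regularized by the Phillips-Twomey method with a second-difference smoothing condition, amounts to solving (K^T K + lambda L^T L) f = K^T g. The sketch below builds a toy smooth kernel and recovers a synthetic density; the kernel, noise level and lambda are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Phillips-Twomey regularization for a discretized Fredholm equation K f = g:
# minimize ||K f - g||^2 + lam * ||L f||^2 with L the second-difference operator.
n = 60
x = np.linspace(0.0, 1.0, n)
K = np.exp(-20.0 * (x[:, None] - x[None, :]) ** 2)    # smooth (ill-posed) kernel
f_true = np.exp(-((x - 0.4) / 0.1) ** 2)              # e.g., an inclination-angle density
g = K @ f_true + rng.normal(0.0, 1e-3, n)             # noisy "measurements"

L = np.diff(np.eye(n), 2, axis=0)                     # second-difference smoothing matrix
lam = 1e-4                                            # regularization weight (assumed)
f_hat = np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ g)
print(f"max reconstruction error: {np.abs(f_hat - f_true).max():.3f}")
```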
Meta-regression analysis of the effect of trans fatty acids on low-density lipoprotein cholesterol.
Allen, Bruce C; Vincent, Melissa J; Liska, DeAnn; Haber, Lynne T
2016-12-01
We conducted a meta-regression of controlled clinical trial data to investigate quantitatively the relationship between dietary intake of industrial trans fatty acids (iTFA) and increased low-density lipoprotein cholesterol (LDL-C). Previous regression analyses included insufficient data to determine the nature of the dose response in the low-dose region and have nonetheless assumed a linear relationship between iTFA intake and LDL-C levels. This work contributes to the previous work by 1) including additional studies examining low-dose intake (identified using an evidence mapping procedure); 2) investigating a range of curve shapes, including both linear and nonlinear models; and 3) using Bayesian meta-regression to combine results across trials. We found that, contrary to previous assumptions, the linear model does not acceptably fit the data, while the nonlinear, S-shaped Hill model fits the data well. Based on a conservative estimate of the degree of intra-individual variability in LDL-C (0.1 mmoL/L), as an estimate of a change in LDL-C that is not adverse, a change in iTFA intake of 2.2% of energy intake (%en) (corresponding to a total iTFA intake of 2.2-2.9%en) does not cause adverse effects on LDL-C. The iTFA intake associated with this change in LDL-C is substantially higher than the average iTFA intake (0.5%en). Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
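A Hill (S-shaped) dose-response of the kind selected by the meta-regression can be fitted with scipy. The points below are invented stand-ins, not the trial data, and the 0.1 mmol/L cut-off mirrors the paper's non-adverse LDL-C change:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, ec50, h):
    """S-shaped Hill model for the change in LDL-C vs iTFA intake (%en)."""
    return top * dose**h / (ec50**h + dose**h)

# Illustrative dose-response points (not the meta-regression data set):
itfa = np.array([0.5, 1.0, 2.0, 3.0, 4.5, 6.0, 8.0])          # %en
d_ldl = np.array([0.01, 0.02, 0.08, 0.20, 0.35, 0.42, 0.45])  # mmol/L change

(top, ec50, h), _ = curve_fit(hill, itfa, d_ldl, p0=[0.5, 3.0, 3.0])
# Intake change at which the fitted LDL-C change first reaches 0.1 mmol/L:
grid = np.linspace(0.0, 8.0, 801)
thresh = grid[np.searchsorted(hill(grid, top, ec50, h), 0.1)]
print(f"Hill fit: top={top:.2f}, EC50={ec50:.2f}%en, h={h:.1f}; "
      f"0.1 mmol/L reached near {thresh:.1f}%en")
```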
Non-linear stochastic growth rates and redshift space distortions
Jennings, Elise; Jennings, David
2015-04-09
The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇ ∙ v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean ⟨θ|δ⟩, together with the fluctuations of θ around this mean. We also measure these stochastic components using N-body simulations and find they are non-negative and increase with decreasing scale, from ~10 per cent at k < 0.2 h Mpc^-1 to 25 per cent at k ~ 0.45 h Mpc^-1 at z = 0. Both the stochastic relation and non-linearity are more pronounced for haloes, M ≤ 5 × 10^12 M_⊙ h^-1, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean away from the linear theory prediction -f_LT δ, where f_LT is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc^-1. The stochasticity in the θ–δ relation, however, is not so simply described by 2LPT, and we discuss its impact on measurements of f_LT from two-point statistics in redshift space. Given that the relationship between δ and θ is stochastic and non-linear, this has implications for the interpretation and precision of f_LT extracted using models which assume a linear, deterministic expression.
Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner
2009-06-01
Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
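The contrast between uniform and hot-spot deposition can be illustrated with a simple Poisson hit model, where the mean number of alpha traversals per cell nucleus scales with the local dose. The traversals-per-mGy rate and the enhancement factor below are hypothetical; the paper's deposition enhancement factors come from realistic airway computations.

```python
import numpy as np

def hit_probabilities(dose_mgy, hits_per_mgy, enhancement=1.0):
    """Poisson model: the mean number of alpha traversals per cell nucleus is
    proportional to dose, scaled by a local deposition enhancement factor."""
    lam = hits_per_mgy * dose_mgy * enhancement
    p_hit = 1.0 - np.exp(-lam)                   # at least one hit
    p_multi = 1.0 - np.exp(-lam) * (1.0 + lam)   # two or more hits
    return p_hit, p_multi

doses = np.array([10.0, 30.0, 100.0])   # mGy, the low-dose range quoted above
# Hypothetical 0.003 traversals/mGy; enhancement ~100 mimics a carinal hot spot.
uniform = hit_probabilities(doses, 0.003)
hotspot = hit_probabilities(doses, 0.003, enhancement=100.0)
# Uniform case: p_hit grows nearly linearly with dose and p_multi is negligible;
# hot-spot case: almost all nuclei receive multiple hits, a non-linear response.
```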
Tanker Structural Analysis for Minor Collisions
1975-12-01
Transverse deflections of the stiffened hull may be assumed to vary linearly from the elevation of the forefoot of the striking bow down to zero at the bilge. [Figure residue: transverse section at a web frame, showing the centerline of the striking ship, the forefoot of its bow, the lower panel, and the limit of shearing plastic energy.]
Wright, Paul J; Steffen, Nicola J; Sun, Chyng
2017-07-28
Several studies using different methods have found that pornography consumption is associated with lower sexual satisfaction. The language used by media-effects scholars in discussions of this association implies an expectation that lowered satisfaction is primarily due to frequent, but not infrequent, consumption. Actual analyses, however, have assumed linearity. Linear analyses presuppose that for each increase in the frequency of pornography consumption there is a correspondingly equivalent decrease in sexual satisfaction. The present brief report explored the possibility that the association is curvilinear. Survey data from two studies of heterosexual adults, one conducted in England and the other in Germany, were employed. Results were parallel in each country and were not moderated by gender. Quadratic analysis indicated a curvilinear relationship, in the form of a predominantly negative, concave downward curve. Simple slope analyses suggested that when the frequency of consumption reaches once a month, sexual satisfaction begins to decrease, and that the magnitude of the decrease becomes larger with each increase in the frequency of consumption. The observational nature of the data employed precludes any causal inferences. However, if an effects perspective were adopted, these results would suggest that low rates of pornography consumption have no impact on sexual satisfaction and that adverse effects initiate only after consumption reaches a certain frequency.
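The quadratic-plus-simple-slopes analysis described can be sketched as follows; the data are simulated for illustration, not the England/Germany survey data.

```python
import numpy as np

# Simulated data: consumption frequency (times/month) and satisfaction (1-7 scale).
rng = np.random.default_rng(1)
freq = rng.uniform(0.0, 30.0, 400)
sat = 5.2 + 0.01 * freq - 0.004 * freq**2 + rng.normal(0.0, 0.5, 400)

# Quadratic (curvilinear) model: sat = b0 + b1*freq + b2*freq^2.
X = np.column_stack([np.ones_like(freq), freq, freq**2])
b, *_ = np.linalg.lstsq(X, sat, rcond=None)

# Simple slopes d(sat)/d(freq) = b1 + 2*b2*freq at chosen frequencies; the
# frequency where the slope turns negative marks the onset of declining
# satisfaction, analogous to the once-a-month threshold reported above.
for f in (0.25, 1.0, 4.0, 12.0, 30.0):
    print(f, b[1] + 2.0 * b[2] * f)
```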
Zhao, Rui; Catalano, Paul; DeGruttola, Victor G.; Michor, Franziska
2017-01-01
The dynamics of tumor burden, secreted proteins, or other biomarkers over time are often used to evaluate the effectiveness of therapy and to predict outcomes for patients. Many methods have been proposed to investigate longitudinal trends to better characterize patients and to understand disease progression. However, most approaches assume a homogeneous patient population and a uniform response trajectory over time and across patients. Here, we present a mixture piecewise linear Bayesian hierarchical model, which takes into account both population heterogeneity and nonlinear relationships between biomarkers and time. Simulation results show that our method was able to classify subjects according to their patterns of treatment response with greater than 80% accuracy in the three scenarios tested. We then applied our model to a large randomized controlled phase III clinical trial of multiple myeloma patients. Analysis results suggest that the longitudinal tumor burden trajectories in multiple myeloma patients are heterogeneous and nonlinear, even among patients assigned to the same treatment cohort. In addition, between cohorts, there are distinct differences in terms of the regression parameters and the distributions among categories in the mixture. These results imply that longitudinal data from clinical trials may harbor unobserved subgroups and nonlinear relationships; accounting for both may be important for analyzing longitudinal data. PMID:28723910
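A minimal, non-Bayesian sketch of the piecewise-linear ingredient is below: a single-changepoint fit by grid search over candidate knots. The paper's full model additionally mixes several such trajectory classes and places hierarchical priors over their parameters; the biomarker series here is simulated.

```python
import numpy as np

def fit_piecewise_linear(t, y):
    """Fit y = b0 + b1*t + b2*max(t - knot, 0) by least squares, searching the
    knot over interior time points; returns the best (knot, coefficients, sse)."""
    best = None
    for knot in t[2:-2]:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[2]:
            best = (knot, beta, sse)
    return best

# Hypothetical biomarker trajectory: decline on therapy, regrowth after ~10 months.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 24.0, 25)
y = 10.0 - 0.8 * t + 1.1 * np.maximum(t - 10.0, 0.0) + rng.normal(0.0, 0.3, t.size)
knot, beta, sse = fit_piecewise_linear(t, y)
```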
Campos, José N B; Lima, Iran E; Studart, Ticiana M C; Nascimento, Luiz S V
2016-05-31
This study investigates the relationships between yield and evaporation as a function of lake morphology in semi-arid Brazil. First, a new methodology was proposed to classify the morphology of 40 reservoirs in Ceará State, with storage capacities ranging from approximately 5 to 4500 hm³. Then, Monte Carlo simulations were conducted to study the effect of reservoir morphology (including real and simplified conical forms) on the water storage process at different reliability levels. The reservoirs were categorized as convex (60.0%), slightly convex (27.5%) or linear (12.5%). When the conical approximation was used instead of the real lake form, a trade-off occurred between reservoir yield and evaporation losses, with different trends for the convex, slightly convex and linear reservoirs. Using the conical approximation, the water yield prediction errors reached approximately 5% of the mean annual inflow, which is negligible for large reservoirs. However, for smaller reservoirs, this error became important. Therefore, this paper presents a new procedure for correcting the yield-evaporation relationships that were obtained by assuming a conical approximation rather than the real reservoir morphology. The combination of this correction with the Regulation Triangle Diagram is useful for rapidly and objectively predicting reservoir yield and evaporation losses in semi-arid environments.
Towards bridging the gap between climate change projections and maize producers in South Africa
NASA Astrophysics Data System (ADS)
Landman, Willem A.; Engelbrecht, Francois; Hewitson, Bruce; Malherbe, Johan; van der Merwe, Jacobus
2018-05-01
Multi-decadal regional projections of future climate change are introduced into a linear statistical model in order to produce an ensemble of austral mid-summer maximum temperature simulations for southern Africa. The statistical model uses atmospheric thickness fields from a high-resolution (0.5° × 0.5°) reanalysis-forced simulation as predictors in order to develop a linear recalibration model which represents the relationship between atmospheric thickness fields and gridded maximum temperatures across the region. The regional climate model, the conformal-cubic atmospheric model (CCAM), projects maximum temperature increases over southern Africa on the order of 4 °C, or even higher, towards the end of the century under low mitigation. The statistical recalibration model is able to replicate these increasing temperatures, and the atmospheric thickness-maximum temperature relationship is shown to be stable under future climate conditions. Since dry land crop yields are not explicitly simulated by climate models but are sensitive to maximum temperature extremes, the effect of projected maximum temperature change on dry land crops of the Witbank maize production district of South Africa, assuming other factors remain unchanged, is then assessed by employing a statistical approach similar to the one used for maximum temperature projections.
Xu, Xiaole; Chen, Shengyong
2014-01-01
This paper investigates the finite-time consensus problem of leader-following multiagent systems. The dynamical models for all following agents and the leader are assumed to take the same general linear form, and the interconnection topology among the agents is assumed to be switching and undirected. We mostly consider the continuous-time case. By assuming that the states of neighbouring agents are known to each agent, a sufficient condition is established for finite-time consensus via a neighbor-based state feedback protocol. When the states of neighbouring agents are not available and only their outputs can be accessed, a distributed observer-based consensus protocol is proposed for each following agent. A sufficient condition is provided in terms of linear matrix inequalities to design the observer-based consensus protocol, which makes the multiagent systems achieve finite-time consensus under switching topologies. Then, we discuss the counterparts for the discrete-time case. Finally, we provide an illustrative example to show the effectiveness of the design approach. PMID:24883367
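For intuition, here is a hedged sketch of the simpler leaderless, single-integrator case under a periodically switching undirected topology, using the signed-fractional-power state feedback common in the finite-time consensus literature. The graphs, gains, and switching schedule are illustrative assumptions, not the paper's observer-based design.

```python
import numpy as np

def sig(z, alpha):
    """Signed fractional power sign(z)*|z|**alpha used in finite-time protocols."""
    return np.sign(z) * np.abs(z) ** alpha

# Two undirected communication topologies (adjacency matrices) to switch between.
A1 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
A2 = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)

x = np.array([3.0, -1.0, 0.5, 2.0])   # scalar agent states (single integrators)
dt, alpha = 0.01, 0.5
for step in range(2000):
    A = A1 if (step // 200) % 2 == 0 else A2   # periodic topology switching
    u = np.array([sum(A[i, j] * sig(x[j] - x[i], alpha) for j in range(4))
                  for i in range(4)])
    x = x + dt * u
# x approaches the initial average; the fractional power (0 < alpha < 1) is what
# gives such protocols finite-time, rather than asymptotic, convergence.
```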
Imaging of voids by means of a physical-optics-based shape-reconstruction algorithm.
Liseno, Angelo; Pierri, Rocco
2004-06-01
We analyze the performance of a shape-reconstruction algorithm for the retrieval of voids starting from the electromagnetic scattered field. The algorithm exploits the physical optics (PO) approximation to obtain a linear unknown-data relationship and performs inversions by means of the singular-value-decomposition approach. In the case of voids, in addition to a geometrical optics reflection, the presence of the lateral wave phenomenon must be considered. We analyze the effect of the presence of lateral waves on the reconstructions. For the sake of shape reconstruction, we can regard the PO algorithm as one that assumes the electric and magnetic fields on the illuminated side to be constant in amplitude and linear in phase, as far as the dependence on frequency is concerned. Therefore we analyze how much the lateral wave phenomenon impairs this assumption, and we show inversions for both a single circular void and two circular voids, for different values of the background permittivity.
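The singular-value-decomposition inversion can be sketched as a truncated SVD applied to the linear unknown-data relationship. The forward operator below is a generic ill-conditioned matrix standing in for the PO scattering operator, and the profile to recover is hypothetical.

```python
import numpy as np

def tsvd_inverse(K, y, r):
    """Invert the linear unknown-data relationship y = K f by keeping only the
    r largest singular values, discarding directions dominated by noise."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ y) / s[:r])

# Generic ill-conditioned forward operator (rapidly decaying singular values).
rng = np.random.default_rng(3)
K = rng.standard_normal((50, 50)) @ np.diag(0.8 ** np.arange(50))
f_true = np.sin(np.linspace(0.0, 3.0, 50))
y = K @ f_true + 1e-3 * rng.standard_normal(50)
f_est = tsvd_inverse(K, y, r=25)
```

The truncation level r plays the role of the regularization parameter: small singular values amplify measurement noise, so they are discarded at the cost of some resolution.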
Thakore, Nimish J; Lapin, Brittany R; Pioro, Erik P
2018-06-01
Rate of decline of the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) score is a common outcome measure and a powerful predictor of mortality in ALS. Observed rate of decline (postslope) of ALSFRS-R, its linearity, and its relationship to decline at first visit (preslope) were examined in the Pooled Resource Open-Access ALS Clinical Trials cohort by using longitudinal mixed effects models. Mean ALSFRS-R postslope in 3,367 patients was -0.99 points/month. Preslope and postslope were correlated and had powerful effects on survival. ALSFRS-R trajectories were slightly accelerated overall, but slope and direction/degree of curvature varied. Subscore decline was sequential by site of onset. Respiratory subscore decline was the least steep. Variable curvilinearity of ALSFRS-R trajectories confounds interpretation in clinical studies that assume linear decline. Subscore trajectories recapitulate phenotypic diversity and topographical progression of ALS. ALSFRS-R is better used as a multidimensional measure. Muscle Nerve 57: 937-945, 2018. © 2017 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Kuhlman, J. M.; Shu, J. Y.
1981-01-01
A subsonic, linearized aerodynamic theory, wing design program for one or two planforms was developed which uses a vortex lattice near field model and a higher order panel method in the far field. The theoretical development of the wake model and its implementation in the vortex lattice design code are summarized and sample results are given. Detailed program usage instructions, sample input and output data, and a program listing are presented in the Appendixes. The far field wake model assumes a wake vortex sheet whose strength varies piecewise linearly in the spanwise direction. From this model analytical expressions for lift coefficient, induced drag coefficient, pitching moment coefficient, and bending moment coefficient were developed. From these relationships a direct optimization scheme is used to determine the optimum wake vorticity distribution for minimum induced drag, subject to constraints on lift, and pitching or bending moment. Integration spanwise yields the bound circulation, which is interpolated in the near field vortex lattice to obtain the design camber surface(s).
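The constrained minimization described above reduces, for a quadratic induced-drag form and a linear lift constraint, to a linear KKT system. The influence matrix and lift weights below are hypothetical placeholders for the coefficients the program derives from the piecewise-linear wake vorticity model; only the lift constraint is shown, though pitching- or bending-moment constraints would enter the same way as extra rows.

```python
import numpy as np

def optimum_circulation(A, b, lift_target):
    """Minimize induced drag D = 0.5 * G^T A G subject to the lift constraint
    b^T G = lift_target; stationarity gives the linear KKT system below."""
    n = A.shape[0]
    KKT = np.block([[A, -b[:, None]],
                    [b[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(n), [lift_target]])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n]      # circulation distribution, Lagrange multiplier

# Hypothetical symmetric positive-definite drag influence matrix and lift
# weights over 20 spanwise stations (placeholders for the wake-model values).
n = 20
eta = np.linspace(0.0, 1.0, n)
A = np.exp(-np.subtract.outer(eta, eta) ** 2 / 0.05) + 1e-3 * np.eye(n)
b = np.full(n, 1.0 / n)
G, lam = optimum_circulation(A, b, lift_target=1.0)
```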
Crump, Kenny; Van Landingham, Cynthia
2012-01-01
NIOSH/NCI (National Institute for Occupational Safety and Health and National Cancer Institute) developed exposure estimates for respirable elemental carbon (REC) as a surrogate for exposure to diesel exhaust (DE) for different jobs in eight underground mines by year, beginning in the 1940s to 1960s when diesel equipment was first introduced into these mines. These estimates played a key role in subsequent epidemiological analyses of the potential relationship between exposure to DE and lung cancer conducted in these mines. We report here on a reanalysis of some of the data from this exposure assessment. Because samples of REC were limited primarily to 1998-2001, NIOSH/NCI used carbon monoxide (CO) as a surrogate for REC. In addition, because CO samples were limited, particularly in the earlier years, they used the ratio of diesel horsepower (HP) to the mine air exhaust rate as a surrogate for CO. There are considerable uncertainties connected with each of these surrogate-based steps. The estimates of HP appear to involve considerable uncertainty, although we had no data upon which to evaluate the magnitude of this uncertainty. A sizable percentage (45%) of the CO samples used in the HP to CO model was below the detection limit, which required NIOSH/NCI to assign CO values to these samples. In their preferred REC estimates, NIOSH/NCI assumed a linear relation between CO and REC, although they provided no credible support for that assumption. Their assumption of a stable relationship between HP and CO also is questionable, and our reanalysis found a statistically significant relationship in only one-half of the mines. We re-estimated yearly REC exposures mainly using NIOSH/NCI methods but with some important differences: (i) rather than simply assuming a linear relationship, we used data from the mines to estimate the CO-REC relationship; (ii) we used a different method for assigning values to nondetect CO measurements; and (iii) we took account of statistical uncertainty to estimate bounds for REC exposures. This exercise yielded significantly different exposure estimates than those of NIOSH/NCI. However, this analysis did not incorporate the full range of uncertainty in REC exposures because of additional uncertainties in the assumptions underlying the modeling and in the underlying data (e.g. HP and mine exhaust rates). Estimating historical exposures in a cohort is generally a very difficult undertaking. However, this should not prevent one from recognizing the uncertainty in the resulting estimates in any use made of them. PMID:22594934
2011-01-01
Background A growing body of research emphasizes the importance of contextual factors on health outcomes. Using postcode sector data for Scotland (UK), this study tests the hypothesis of spatial heterogeneity in the relationship between area-level deprivation and mortality to determine if contextual differences in the West vs. the rest of Scotland influence this relationship. Research into health inequalities frequently fails to recognise spatial heterogeneity in the deprivation-health relationship, assuming that global relationships apply uniformly across geographical areas. In this study, exploratory spatial data analysis methods are used to assess local patterns in deprivation and mortality. Spatial regression models are then implemented to examine the relationship between deprivation and mortality more formally. Results The initial exploratory spatial data analysis reveals concentrations of high standardized mortality ratios (SMR) and deprivation (hotspots) in the West of Scotland and concentrations of low values (coldspots) for both variables in the rest of the country. The main spatial regression result is that deprivation is the only variable that is highly significantly correlated with all-cause mortality in all models. However, in contrast to the expected spatial heterogeneity in the deprivation-mortality relationship, this relation does not vary between regions in any of the models. This result is robust to a number of specifications, including weighting for population size, controlling for spatial autocorrelation and heteroskedasticity, assuming a non-linear relationship between mortality and socio-economic deprivation, separating the dependent variable into male and female SMRs, and distinguishing between West, North and Southeast regions. The rejection of the hypothesis of spatial heterogeneity in the relationship between socio-economic deprivation and mortality complements prior research on the stability of the deprivation-mortality relationship over time. Conclusions The homogeneity we found in the deprivation-mortality relationship across the regions of Scotland and the absence of a contextualized effect of region highlights the importance of taking a broader strategic policy that can combat the toxic impacts of socio-economic deprivation on health. Focusing on a few specific places (e.g. 15% of the poorest areas) to concentrate resources might be a good start but the impact of socio-economic deprivation on mortality is not restricted to a few places. A comprehensive strategy that can be sustained over time might be needed to interrupt the linkages between poverty and mortality. PMID:21569408
Low-frequency electrical properties of peat
NASA Astrophysics Data System (ADS)
Comas, Xavier; Slater, Lee
2004-12-01
Electrical resistivity/induced polarization (0.1-1000 Hz) and vertical hydraulic conductivity (Kv) measurements of peat samples extracted from different depths (0-11 m) in a peatland in Maine were obtained as a function of pore fluid conductivity (σw) between 0.001 and 2 S/m. Hydraulic conductivity increased with σw (Kv ∝ σw^0.3 between 0.001 and 2 S/m), indicating that pore dilation occurs due to the reaction of NaCl with organic functional groups as postulated by previous workers. Electrical measurements were modeled by assuming that "bulk" electrolytic conduction through the interconnected pore space and surface conduction in the electrical double layer (EDL) at the organic sediment-fluid interface act in parallel. This analysis suggests that pore space dilation causes a nonlinear relationship between the "bulk" electrolytic conductivity (σel) and σw (σel ∝ σw^1.3). The Archie equation predicts a linear dependence of σel on σw and thus appears inappropriate for organic sediments. Induced polarization (IP) measurements of the imaginary part (σ″surf) of the surface conductivity (σ*surf) show that σ″surf is greater and more strongly σw-dependent (σ″surf ∝ σw^0.5 between 0.001 and 2 S/m) than observed for inorganic sediments. By assuming a linear relationship between the real (σ′surf) and the imaginary part (σ″surf) of the surface conductivity, we develop an empirical model relating the resistivity and induced polarization measurements to σw in peat. We demonstrate the use of this model to predict (a) σw and (b) the change in Kv due to an incremental change in σw from resistivity and induced polarization measurements on organic sediments. Our study has implications for noninvasive geophysical characterization of σw and Kv with potential to benefit studies of carbon cycling and greenhouse gas fluxes as well as nutrient supply dynamics in peatlands.
Experimental Investigation of Hysteretic Dynamic Capillarity Effect in Unsaturated Flow
NASA Astrophysics Data System (ADS)
Zhuang, Luwen; Hassanizadeh, S. Majid; Qin, Chao-Zhong; de Waal, Arjen
2017-11-01
The difference between average pressures of two immiscible fluids is commonly assumed to be the same as the macroscopic capillary pressure, which is considered to be a function of saturation only. However, under transient conditions, a dependence of this pressure difference on the time rate of saturation change has been observed by many researchers. This is commonly referred to as the dynamic capillarity effect. As a first-order approximation, the dynamic term is assumed to be linearly dependent on the time rate of change of saturation, through a material coefficient denoted by τ. In this study, a series of laboratory experiments were carried out to quantify the dynamic capillarity effect in an unsaturated sandy soil. Primary, main, and scanning drainage experiments, under both static and dynamic conditions, were performed on a sandy soil in a small cell. The value of the dynamic capillarity coefficient τ was calculated from the air-water pressure differences and average saturation values during static and dynamic drainage experiments. We found a dependence of τ on saturation, which showed a similar trend for all drainage conditions. However, at any given saturation, the value of τ for primary drainage was larger than the value for main drainage, and that was in turn larger than the value for scanning drainage. Each data set was fit with a simple log-linear equation, with different values of the fitting parameters. This nonuniqueness of the relationship between τ and saturation, and its possible causes, are discussed.
Visual effects in great bowerbird sexual displays and their implications for signal design.
Endler, John A; Gaburro, Julie; Kelley, Laura A
2014-05-22
It is often assumed that the primary purpose of a male's sexual display is to provide information about quality, or to strongly stimulate prospective mates, but other functions of courtship displays have been relatively neglected. Male great bowerbirds (Ptilonorhynchus nuchalis) construct bowers that exploit the female's predictable field of view (FOV) during courtship displays by creating forced perspective illusions, and the quality of illusion is a good predictor of mating success. Here, we present and discuss two additional components of male courtship displays that use the female's predetermined viewpoint: (i) the rapid and diverse flashing of coloured objects within her FOV and (ii) chromatic adaptation of the female's eyes that alters her perception of the colour of the displayed objects. Neither is directly related to mating success, but both are likely to increase signal efficacy, and may also be associated with attracting and holding the female's attention. Signal efficacy is constrained by trade-offs between the signal components; there are both positive and negative interactions within multicomponent signals. Important signal components may have a threshold effect on fitness rather than the often assumed linear relationship.
NASA Astrophysics Data System (ADS)
Roncoroni, Alan; Medo, Matus
2016-12-01
Models of spatial firm competition assume that customers are distributed in space and that transportation costs are associated with their purchases of products from a small number of firms that are also placed at definite locations. It has long been known that the competition equilibrium is not guaranteed to exist if the most straightforward linear transportation costs are assumed. We show by simulations, and also analytically, that if periodic boundary conditions in a plane are assumed, the equilibrium exists for a pair of firms at any distance. When a larger number of firms is considered, we find that their total equilibrium profit is inversely proportional to the square root of the number of firms. We end with a numerical investigation of the system's behavior for a general transportation cost exponent.
Multiple imputation in the presence of non-normal data.
Lee, Katherine J; Carlin, John B
2017-02-20
Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation, and using PMM with both type 1 and type 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
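A minimal sketch of predictive mean matching for one incomplete variable follows: regress on complete cases, then borrow an observed value from a donor with a nearby predicted mean. Production MI software adds proper posterior draws (the type 1/type 2 distinction) and repeats the process over multiple imputations; the skewed outcome below is simulated.

```python
import numpy as np

def pmm_impute(x, y, k=5, rng=None):
    """Single-variable predictive mean matching: regress y on x over complete
    cases, then fill each missing y with the observed value of a donor whose
    predicted mean is among the k closest to the missing case's prediction."""
    if rng is None:
        rng = np.random.default_rng()
    obs = ~np.isnan(y)
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    yhat = X @ beta
    y_filled = y.copy()
    for i in np.flatnonzero(~obs):
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]
        y_filled[i] = y[obs][rng.choice(donors)]
    return y_filled

# Hypothetical skewed (log-normal) outcome with 50% missing completely at random.
rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = np.exp(0.5 * x + rng.normal(scale=0.4, size=500))
y[rng.random(500) < 0.5] = np.nan
y_imputed = pmm_impute(x, y, rng=rng)
```

Because imputed values are always real observed values, PMM preserves the skewness of the variable without requiring a transformation.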
Teachers' Evaluations and Students' Achievement: A "Deviation from the Reference" Analysis
ERIC Educational Resources Information Center
Iacus, Stefano M.; Porro, Giuseppe
2011-01-01
Several studies show that teachers make use of grading practices to affect students' effort and achievement. Generally linearity is assumed in the grading equation, while it is everyone's experience that grading practices are frequently non-linear. Representing grading practices as linear can be misleading both from a descriptive and a…
Claessens, T E; Georgakopoulos, D; Afanasyeva, M; Vermeersch, S J; Millar, H D; Stergiopulos, N; Westerhof, N; Verdonck, P R; Segers, P
2006-04-01
The linear time-varying elastance theory is frequently used to describe the change in ventricular stiffness during the cardiac cycle. The concept assumes that all isochrones (i.e., curves that connect pressure-volume data occurring at the same time) are linear and have a common volume intercept. Of specific interest is the steepest isochrone, the end-systolic pressure-volume relationship (ESPVR), the slope of which serves as an index of cardiac contractile function. Pressure-volume measurements, obtained with a combined pressure-conductance catheter in the left ventricle of 13 open-chest anesthetized mice, showed a marked curvilinearity of the isochrones. We therefore analyzed the shape of the isochrones by using six regression algorithms (two linear, two quadratic, and two logarithmic, each with a fixed or time-varying intercept) and discussed the consequences for the elastance concept. Our main observations were 1) the volume intercept varies considerably with time; 2) isochrones are equally well described by using quadratic or logarithmic regression; 3) linear regression with a fixed intercept shows poor correlation (R² < 0.75) during isovolumic relaxation and early filling; and 4) logarithmic regression is superior in estimating the fixed volume intercept of the ESPVR. In conclusion, the linear time-varying elastance fails to provide a sufficiently robust model to account for changes in pressure and volume during the cardiac cycle in the mouse ventricle. A new framework accounting for the nonlinear shape of the isochrones needs to be developed.
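For illustration, two candidate isochrone shapes can be compared by nonlinear least squares. The pressure-volume points below are simulated with a curved underlying relationship so that the logarithmic form outperforms the linear one; the function forms are plausible simplifications, not the exact regression algorithms of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_iso(V, a, V0):
    """Linear isochrone P = a*(V - V0) with volume intercept V0."""
    return a * (V - V0)

def log_iso(V, a, V0):
    """Logarithmic isochrone P = a*ln(V/V0)."""
    return a * np.log(V / V0)

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Hypothetical murine end-systolic pressure-volume points (uL, mmHg).
rng = np.random.default_rng(7)
V = np.linspace(15.0, 40.0, 12)
P = 60.0 * np.log(V / 12.0) + rng.normal(0.0, 2.0, V.size)

lin_p, _ = curve_fit(linear_iso, V, P, p0=[4.0, 10.0])
log_p, _ = curve_fit(log_iso, V, P, p0=[50.0, 10.0],
                     bounds=([1.0, 1.0], [200.0, 14.0]))
print("linear R^2:", r_squared(P, linear_iso(V, *lin_p)))
print("log    R^2:", r_squared(P, log_iso(V, *log_p)))
```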
Mean electromotive force generated by asymmetric fluid flow near the surface of earth's outer core
NASA Astrophysics Data System (ADS)
Bhattacharyya, Archana
1992-10-01
The phi component of the mean electromotive force (ETF), generated by asymmetric flow of fluid just beneath the core-mantle boundary (CMB), is obtained using a geomagnetic field model. This analysis is based on the supposition that the axisymmetric part of the fluid flow beneath the CMB is tangentially geostrophic and toroidal. For all the epochs studied, the computed phi component is stronger in the Southern Hemisphere than in the Northern Hemisphere. Assuming a linear relationship between the ETF and the azimuthally averaged magnetic field (AAMF), the only nonzero off-diagonal components of the pseudotensor relating the ETF to the AAMF are estimated as functions of colatitude, and the physical implications of the results are discussed.
Transformation of apparent ocean wave spectra observed from an aircraft sensor platform
NASA Technical Reports Server (NTRS)
Poole, L. R.
1976-01-01
The problem considered was transformation of a unidirectional apparent ocean wave spectrum observed from an aircraft sensor platform into the true spectrum that would be observed from a stationary platform. Spectral transformation equations were developed in terms of the linear wave dispersion relationship and the wave group speed. An iterative solution to the equations was outlined and used to transform reference theoretical apparent spectra for several assumed values of average water depth. Results show that changing the average water depth leads to a redistribution of energy density among the various frequency bands of the transformed spectrum. This redistribution is most severe when much of the energy density is expected, a priori, to reside at relatively low true frequencies.
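The iterative step rests on the linear dispersion relationship ω² = gk·tanh(kh) and the associated group speed. A Newton-iteration solver can be sketched as follows; the formulas are the standard linear-wave results, and the solver structure is an assumption about how the outlined iteration might be organized.

```python
import math

def wavenumber(omega, depth, g=9.81, tol=1e-12):
    """Solve the linear dispersion relationship omega**2 = g*k*tanh(k*depth)
    for k by Newton iteration, starting from the deep-water value omega**2/g."""
    k = omega**2 / g
    for _ in range(100):
        f = g * k * math.tanh(k * depth) - omega**2
        df = g * math.tanh(k * depth) + g * k * depth / math.cosh(k * depth) ** 2
        k_next = k - f / df
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k

def group_speed(omega, depth, g=9.81):
    """Group speed c_g = (c/2)*(1 + 2kh/sinh(2kh)) for linear waves."""
    k = wavenumber(omega, depth, g)
    return 0.5 * (omega / k) * (1.0 + 2.0 * k * depth / math.sinh(2.0 * k * depth))
```

Because tanh(kh) saturates in deep water, the assumed average depth mainly shifts the low-frequency wavenumbers, which is consistent with the energy redistribution described above.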
Micromechanical analysis on anisotropy of structured magneto-rheological elastomer
NASA Astrophysics Data System (ADS)
Li, R.; Zhang, Z.; Chen, S. W.; Wang, X. J.
2015-07-01
This paper investigates the equivalent elastic modulus of a structured magneto-rheological elastomer (MRE) in the absence of a magnetic field. We assume that both the matrix and the ferromagnetic particles are linear elastic materials, and that the ferromagnetic particles are embedded in the matrix in a layer-like structure. The structured composite can be divided into a matrix layer and a reinforced layer, in which the reinforced layer is composed of the matrix and the ferromagnetic particles distributed homogeneously within it. The equivalent elastic modulus of the reinforced layer is analysed by the Mori-Tanaka method. A finite element method (FEM) analysis is also carried out to illustrate the relationship between the elastic modulus and the volume fraction of ferromagnetic particles. The results show that the anisotropy of the elastic modulus becomes noticeable as the volume fraction of particles increases.
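For the reinforced layer, a Mori-Tanaka estimate can be sketched with the standard closed form for spherical elastic inclusions in an isotropic matrix; this is a simplified stand-in for the layered analysis in the paper, and the moduli below are hypothetical.

```python
def mori_tanaka_spheres(K_m, G_m, K_i, G_i, f):
    """Mori-Tanaka effective bulk (K) and shear (G) moduli for a volume
    fraction f of spherical elastic inclusions in an isotropic matrix."""
    zeta = G_m * (9.0 * K_m + 8.0 * G_m) / (6.0 * (K_m + 2.0 * G_m))
    K_eff = K_m + f * (K_i - K_m) / (
        1.0 + (1.0 - f) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0))
    G_eff = G_m + f * (G_i - G_m) / (
        1.0 + (1.0 - f) * (G_i - G_m) / (G_m + zeta))
    return K_eff, G_eff

# Hypothetical values in MPa: soft elastomer matrix, stiff iron particles.
K_eff, G_eff = mori_tanaka_spheres(K_m=10.0, G_m=0.3, K_i=1.7e5, G_i=8.2e4, f=0.2)
```

The layered arrangement then gives direction-dependent stiffness, since the matrix layer and the (stiffer) reinforced layer combine in series across the layers and in parallel along them.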
A framework for comparing structural and functional measures of glaucomatous damage
Hood, Donald C.; Kardon, Randy H.
2007-01-01
While it is often said that structural damage due to glaucoma precedes functional damage, it is not always clear what this statement means. This review has two purposes: first, to show that a simple linear relationship describes the data relating a particular functional test (standard automated perimetry (SAP)) to a particular structural test (optical coherence tomography (OCT)); and, second, to propose a general framework for relating structural and functional damage, and for evaluating if one precedes the other. The specific functional and structural tests employed are described in Section 2. To compare SAP sensitivity loss to loss of the retinal nerve fiber layer (RNFL) requires a map that relates local field regions to local regions of the optic disc as described in Section 3. When RNFL thickness in the superior and inferior arcuate sectors of the disc are plotted against SAP sensitivity loss (dB units) in the corresponding arcuate regions of the visual field, RNFL thickness becomes asymptotic for sensitivity losses greater than about 10 dB. These data are well described by a simple linear model presented in Section 4. The model assumes that the RNFL thickness measured with OCT has two components. One component is the axons of the retinal ganglion cells and the other, the residual, is everything else (e.g. glial cells, blood vessels). The axon portion is assumed to decrease in a linear fashion with losses in SAP sensitivity (in linear units); the residual portion is assumed to remain constant. Based upon severe SAP losses in anterior ischemic optic neuropathy (AION), the residual RNFL thickness in the arcuate regions is, on average, about one-third of the premorbid (normal) thickness of that region. The model also predicts that, to a first approximation, SAP sensitivity in control subjects does not depend upon RNFL thickness. The data (Section 6) are, in general, consistent with this prediction showing a very weak correlation between RNFL thickness and SAP sensitivity. In Section 7, the model is used to estimate the proportion of patients showing statistical abnormalities (worse than the 5th percentile) on the OCT RNFL test before they show abnormalities on the 24-2 SAP field test. Ignoring measurement error, the patients with a relatively thick RNFL, when healthy, will be more likely to show significant SAP sensitivity loss before statistically significant OCT RNFL loss, while the reverse will be true for those who start with an average or a relatively thin RNFL when healthy. Thus, it is important to understand the implications of the wide variation in RNFL thickness among control subjects. Section 8 describes two of the factors contributing to this variation, variations in the position of blood vessels and variations in the mapping of field regions to disc sectors. Finally, in Sections 7 and 9, the findings are related to the general debate in the literature about the relationship between structural and functional glaucomatous damage and a framework is proposed for understanding what is meant by the question, ‘Does structural damage precede functional damage in glaucoma?’ An emphasis is placed upon the need to distinguish between “statistical” and “relational” meanings of this question. PMID:17889587
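One plausible reading of the two-component linear model can be sketched as follows: the axonal portion of RNFL thickness scales with SAP sensitivity converted from dB to linear units, while the residual portion stays constant. The premorbid thickness is illustrative, and the one-third residual fraction is taken from the AION-based estimate quoted above.

```python
def predicted_rnfl(loss_db, t_normal, residual_fraction=1.0 / 3.0):
    """Two-component linear model: the axonal portion of RNFL thickness scales
    with SAP sensitivity in linear (not dB) units; the residual portion
    (glia, vessels) stays constant at a fraction of premorbid thickness."""
    residual = residual_fraction * t_normal
    axonal = (t_normal - residual) * 10.0 ** (-loss_db / 10.0)
    return residual + axonal

# With ~100 um premorbid thickness, losses beyond ~10 dB approach the asymptote.
for loss_db in (0, 3, 6, 10, 20):
    print(loss_db, round(predicted_rnfl(loss_db, 100.0), 1))
```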
Herberholz, Jens; Swierzbinski, Matthew E; Birke, Juliane M
2016-04-01
Like most social animals, crayfish readily form dominance relationships and linear social hierarchies when competing for limited resources. Competition often entails dyadic aggressive interactions, from which one animal emerges as the dominant and one as the subordinate. Once dominance relationships are formed, they typically remain stable for extended periods of time; thus, access to future resources is divided unequally among conspecifics. We previously showed that firmly established dominance relationships in juvenile crayfish can be disrupted by briefly adding a larger conspecific to the original pair. This finding suggested that the stability of social relationships in crayfish was highly context-dependent and more transient than previously assumed. We now report results that further identify the mechanisms underlying the destabilization of crayfish dominance relationships. We found that rank orders remained stable when conspecifics of smaller or equal size were added to the original pair, suggesting that both dominant and subordinate must be defeated by a larger crayfish in order to destabilize dominance relationships. We also found that dominance relationships remained stable when both members of the original pair were defeated by larger conspecifics in the absence of their original opponent. This showed that dominance relationships are not destabilized unless both animals experience defeat together. Lastly, we found that dominance relationships of pairs were successfully disrupted by larger intruders, although with reduced magnitude, after all chemical cues associated with earlier agonistic experiences were eliminated. These findings provide important new insights into the contextual features that regulate the stability of social dominance relationships in crayfish and probably in other species as well. © 2016 Marine Biological Laboratory.
Kastberger, Gerald; Weihmann, Frank; Hoetzl, Thomas; Weiss, Sara E.; Maurer, Michael; Kranner, Ilse
2012-01-01
Shimmering is a collective defence behaviour in Giant honeybees (Apis dorsata) whereby individual bees flip their abdomen upwards, producing Mexican wave-like patterns on the nest surface. Bucket bridging has been used to explain the spread of information in a chain of members including three testable concepts: first, linearity assumes that individual “agent bees” that participate in the wave will be affected preferentially from the side of wave origin. The directed-trigger hypothesis addresses the coincidence of the individual property of trigger direction with the collective property of wave direction. Second, continuity describes the transfer of information without being stopped, delayed or re-routed. The active-neighbours hypothesis assumes coincidence between the direction of the majority of shimmering-active neighbours and the trigger direction of the agents. Third, the graduality hypothesis refers to the interaction between an agent and her active neighbours, assuming a proportional relationship in the strength of abdomen flipping of the agent and her previously active neighbours. Shimmering waves provoked by dummy wasps were recorded with high-resolution video cameras. Individual bees were identified by 3D-image analysis, and their strength of abdominal flipping was assessed by pixel-based luminance changes in sequential frames. For each agent, the directedness of wave propagation was based on wave direction, trigger direction, and the direction of the majority of shimmering-active neighbours. The data supported the bucket bridging hypothesis, but only for a small proportion of agents: linearity was confirmed for 2.5%, continuity for 11.3% and graduality for 0.4% of surface bees (but in 2.6% of those agents with high wave-strength levels). The complementary 90% of surface bees did not conform to bucket bridging. This fuzziness is discussed in terms of self-organisation and evolutionary adaptedness in Giant honeybee colonies to respond to rapidly changing threats such as predatory wasps scanning in front of the nest. PMID:22662123
The effect of leaf size on the microwave backscattering by corn
NASA Technical Reports Server (NTRS)
Paris, J. F.
1986-01-01
Attema and Ulaby (1978) proposed the cloud model to predict the microwave backscattering properties of vegetation. This paper describes a modification in which the biophysical properties and microwave properties of vegetation are related at the level of the individual scatterer (e.g., the leaf or the stalk) rather than at the level of the aggregated canopy (e.g., the green leaf area index). Assuming that the extinction cross section of an average leaf was proportional to its water content, that a power law relationship existed between the backscattering cross section of an average green corn leaf and its area, and that the backscattering coefficient of the surface was a linear function of its volumetric soil moisture content, it is found that the explicit inclusion of the effects of corn leaf size in the model led to an excellent fit between the observed and predicted backscattering coefficients. Also, an excellent power law relationship existed between the backscattering cross section of a corn leaf and its area.
Polar Bear Conservation Status in Relation to Projected Sea-ice Conditions
NASA Astrophysics Data System (ADS)
Regehr, E. V.
2015-12-01
The status of the world's 19 subpopulations of polar bears (Ursus maritimus) varies as a function of sea-ice conditions, ecology, management, and other factors. Previous methods to project the response of polar bears to loss of Arctic sea ice (the primary threat to the species) include expert opinion surveys, Bayesian networks providing qualitative stressor assessments, and subpopulation-specific demographic analyses. Here, we evaluated the global conservation status of polar bears using a data-based sensitivity analysis. First, we estimated generation length for subpopulations with available data (n=11). Second, we developed standardized sea-ice metrics representing habitat availability. Third, we projected global population size under alternative assumptions for relationships between sea ice and subpopulation abundance. Estimated generation length (median = 11.4 years; 95%CI = 9.8 to 13.6) and sea-ice change (median = loss of 1.26 ice-covered days per year; 95%CI = 0.70 to 3.37) varied across subpopulations. Assuming a one-to-one proportional relationship between sea ice and abundance, the median percent change in global population size over three polar bear generations was -30% (95%CI = -35% to -25%). Assuming a linear relationship between sea ice and normalized estimates of subpopulation abundance, the median percent change was -4% (95% CI = -62% to +50%) or -43% (95% CI = -76% to -20%), depending on how subpopulations were grouped and how inference was extended from relatively well-studied subpopulations (n=7) to those with little or no data. Our findings suggest the potential for large reductions in polar bear numbers over the next three polar bear generations if sea-ice loss due to climate change continues as forecasted.
ERIC Educational Resources Information Center
Montiel, Mariana; Bhatti, Uzma
2010-01-01
This article presents an overview of some issues that were confronted when delivering an online second Linear Algebra course (assuming a previous Introductory Linear Algebra course) to graduate students enrolled in a Secondary Mathematics Education program. The focus is on performance in one particular aspect of the course: "change of basis" and…
NASA Technical Reports Server (NTRS)
Merenyi, E.; Miller, J. S.; Singer, R. B.
1992-01-01
The linear mixing model approach has been successfully applied to data sets of various kinds. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
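Under the linear mixing model, each pixel spectrum is modeled as a non-negative combination of endmember spectra, which can be inverted per pixel with non-negative least squares. The endmembers below are synthetic Gaussians standing in for laboratory spectra, not actual Mars data.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic endmember spectra (100 bands, 3 materials) as matrix columns.
rng = np.random.default_rng(5)
wl = np.linspace(0.4, 2.5, 100)
endmembers = np.column_stack([np.exp(-((wl - c) ** 2) / 0.3)
                              for c in (0.9, 1.5, 2.1)])

# Linear mixing model: a pixel's radiance is a non-negative combination of
# endmember contributions; NNLS recovers the abundance fractions per pixel.
true_abund = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_abund + 0.005 * rng.standard_normal(100)
abund, residual = nnls(endmembers, pixel)
abund = abund / abund.sum()   # optional sum-to-one renormalization
```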
ERIC Educational Resources Information Center
Camporesi, Roberto
2011-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…
On computation of Gröbner bases for linear difference systems
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.
2006-04-01
In this paper, we present an algorithm for computing Gröbner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Parameter identifiability of linear dynamical systems
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1974-01-01
It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system, and (3) when the system is assumed to be driven by white noise and only output observations are made. A sufficient condition for global identifiability is also derived.
Kinetics of the B1-B2 phase transition in KCl under rapid compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.
2016-01-28
Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03-13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics at the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on the effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of the compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and the logarithm of the compression rate. The decrease of Q_eff with increasing compression rate results in a decrease of the nucleation rate, which is qualitatively in agreement with the observed change of grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.
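A sketch of the JMAK-style fit: the transformed volume fraction follows an Avrami form, written here in time (pressure maps onto time through the constant ramp rate). The rate constant, exponent, and data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    """JMAK transformed-volume fraction X(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Hypothetical fraction-versus-time data across the transition at one constant
# compression rate; pressure maps onto time through the known ramp rate.
rng = np.random.default_rng(6)
t = np.linspace(0.1, 10.0, 30)                      # seconds
X = avrami(t, 0.4, 2.5) + rng.normal(0.0, 0.02, t.size)

(k_fit, n_fit), _ = curve_fit(avrami, t, np.clip(X, 0.0, 1.0),
                              p0=[0.3, 2.0], bounds=([1e-3, 0.5], [10.0, 6.0]))
```

Repeating the fit at each compression rate yields the rate-dependent kinetic parameters from which quantities such as Q_eff are then extracted.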
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
NASA Astrophysics Data System (ADS)
Barrios, M. I.
2013-12-01
Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at the point scale. Results have shown numerical stability issues under particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
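At the point scale, the Green-Ampt model gives cumulative infiltration only implicitly; a root-finding sketch is below, with hypothetical sandy-loam parameter values.

```python
import math
from scipy.optimize import brentq

def green_ampt_cumulative(t, K, psi, dtheta):
    """Cumulative infiltration F(t) under ponded Green-Ampt conditions, from the
    implicit relation F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta))."""
    c = psi * dtheta
    g = lambda F: F - c * math.log(1.0 + F / c) - K * t
    return brentq(g, 1e-9, K * t + 10.0 * c)

# Hypothetical sandy-loam parameters: K in cm/h, wetting-front suction head psi
# in cm, moisture deficit dtheta dimensionless; infiltration over two hours.
for hours in (0.5, 1.0, 2.0):
    F = green_ampt_cumulative(hours, K=1.1, psi=11.0, dtheta=0.25)
    print(hours, round(F, 2))
```

Sampling K, psi, and dtheta from the assumed lognormal and beta distributions and averaging the resulting fluxes is one way to build the kind of point-to-grid-cell comparison the virtual experiment describes.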
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Cucinotta, F. A.; Badhwar, G. D.; ONeill, P. M.; Badavi, F. F.
1995-01-01
Recent improvements in the radiation transport code HZETRN/BRYNTRN and the galactic cosmic ray environmental model have provided an opportunity to investigate the effects of target fragmentation on estimates of single event upset (SEU) rates for spacecraft memory devices. Since target fragments are mostly of very low energy, an SEU prediction model has been derived in terms of particle energy rather than linear energy transfer (LET) to account for the nonlinear relationship between range and energy. Predictions are made for SEU rates observed on two Shuttle flights, one in a low- and one in a high-inclination orbit. Corrections due to track structure effects are made both for high energy ions with track structure larger than the device sensitive volume and for low energy ions with dense tracks where charge recombination is important. Results indicate contributions from target fragments are relatively important at large shield depths (or behind any thick structural material) and in low inclination orbit. Consequently, a more consistent set of predictions for the upset rates observed in these two flights is reached when compared to an earlier analysis with the CREME model. It is also observed that the errors produced by assuming a linear relationship between range and energy in the earlier analysis fortuitously canceled out the errors from not considering target fragmentation and track structure effects.
A kinetic theory description of the viscosity of dense fluids consisting of chain molecules.
de Wijn, Astrid S; Vesovic, Velisa; Jackson, George; Trusler, J P Martin
2008-05-28
An expression for the viscosity of a dense fluid is presented that includes the effect of molecular shape. The molecules of the fluid are approximated by chains of equal-sized, tangentially jointed, rigid spheres. It is assumed that the collision dynamics in such a fluid can be approximated by instantaneous collisions between two rigid spheres belonging to different chains. The approach is thus analogous to that of Enskog for a fluid consisting of rigid spheres. The description is developed in terms of two molecular parameters, the diameter sigma of the spherical segment and the chain length (number of segments) m. It is demonstrated that an analysis of viscosity data of a particular pure fluid alone cannot be used to obtain independently effective values of both sigma and m. Nevertheless, the chain lengths of n-alkanes are determined by assuming that the diameter of each rigid sphere making up the chain can be represented by the diameter of a methane molecule. The effective chain lengths of n-alkanes are found to increase linearly with the number C of carbon atoms present. The dependence can be approximated by a simple relationship m = 1 + (C - 1)/3. The same relationship was reported within the context of a statistical associating fluid theory equation of state treatment of the fluid, indicating that both the equilibrium thermodynamic properties and viscosity yield the same value for the chain lengths of n-alkanes.
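The reported chain-length relationship is easy to tabulate. A minimal check of m = 1 + (C - 1)/3, which correctly gives m = 1 (a single sphere) for methane:

```python
def chain_length(C):
    """Effective chain length m of an n-alkane with C carbon atoms,
    per the reported linear relationship m = 1 + (C - 1)/3."""
    return 1.0 + (C - 1) / 3.0

for C in (1, 4, 8, 16):   # methane, n-butane, n-octane, n-hexadecane
    print(f"C{C}: m = {chain_length(C):.2f}")
```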
The infection rate of Daphnia magna by Pasteuria ramosa conforms with the mass-action principle.
Regoes, R R; Hottinger, J W; Sygnarski, L; Ebert, D
2003-10-01
In simple epidemiological models that describe the interaction between hosts and their parasites, the infection process is commonly assumed to be governed by the law of mass action, i.e. it is assumed that the infection rate depends linearly on the densities of the host and the parasite. The mass-action assumption, however, can be problematic if certain aspects of the host-parasite interaction are very pronounced, such as spatial compartmentalization, host immunity which may protect from infection with low doses, or host heterogeneity with regard to susceptibility to infection. As deviations from a mass-action infection rate have consequences for the dynamics of the host-parasite system, it is important to test for the appropriateness of the mass-action assumption in a given host-parasite system. In this paper, we examine the relationship between the infection rate and the parasite inoculum for the water flea Daphnia magna and its bacterial parasite Pasteuria ramosa. We measured the fraction of infected hosts after exposure to 14 different doses of the parasite. We find that the observed relationship between the fraction of infected hosts and the parasite dose is largely consistent with an infection process governed by the mass-action principle. However, we have evidence for a subtle but significant deviation from a simple mass-action infection model, which can be explained either by some antagonistic effects of the parasite spores during the infection process, or by heterogeneity in the hosts' susceptibility with regard to infection.
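Under mass action with independently acting spores, the expected fraction of infected hosts after exposure to a given dose follows 1 - exp(-beta*dose). The sketch below fits beta to hypothetical dose-response data by a grid search; it illustrates the model form only, not the authors' analysis, and the data values are invented.

```python
import numpy as np

def frac_infected(dose, beta):
    # Mass action with independently acting spores: the fraction of
    # infected hosts saturates as 1 - exp(-beta * dose).
    return 1.0 - np.exp(-beta * dose)

# Hypothetical data: spores per host vs. observed fraction infected
doses = np.array([1e2, 1e3, 1e4, 1e5])
obs = np.array([0.05, 0.35, 0.95, 1.00])

# Crude one-parameter least-squares fit over a grid of beta values
betas = np.logspace(-6, -2, 400)
sse = [np.sum((frac_infected(doses, b) - obs) ** 2) for b in betas]
print(f"best-fit beta ≈ {betas[int(np.argmin(sse))]:.2e} per spore")
```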
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
Testing the Linearity of the Cosmic Origins Spectrograph FUV Channel Thermal Correction
NASA Astrophysics Data System (ADS)
Fix, Mees B.; De Rosa, Gisella; Sahnow, David
2018-05-01
The Far Ultraviolet Cross Delay Line (FUV XDL) detector on the Cosmic Origins Spectrograph (COS) is subject to temperature-dependent distortions. The correction performed by the COS calibration pipeline (CalCOS) assumes that these changes are linear across the detector. In this report we evaluate the accuracy of the linear approximations using data obtained on orbit. Our results show that the thermal distortions are consistent with our current linear model.
Conical Lens for 5-Inch/54 Gun Launched Missile
1981-06-01
Propagation, Interference and Diffraction of Light, 2nd ed. (revised), p. 121-124, Pergamon Press, 1964. 10. Anton, Howard, Elementary Linear Algebra, p. 1-21...equations is nonlinear in x, but is linear in the coefficients. Therefore, the techniques of linear algebra can be used on equation (F-13). The method...This thesis assumes the air to be homogeneous, isotropic, linear, time independent (HILT) and free of shock waves in order to investigate the
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
ERIC Educational Resources Information Center
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…
Correlation and agreement: overview and clarification of competing concepts and measures.
Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M
2016-04-25
Agreement and correlation are widely-used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
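A toy example of the distinction drawn above: two measurements can be perfectly correlated yet agree poorly. The sketch below contrasts the Pearson correlation with Lin's concordance correlation coefficient, used here as a convenient agreement measure in place of the intraclass correlation; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, 200)
y = 2.0 * x + 1.0          # perfectly correlated, but not in agreement

r = np.corrcoef(x, y)[0, 1]

# Lin's concordance correlation coefficient penalizes both scale and
# location shifts between the two measurements.
sxy = np.cov(x, y, bias=True)[0, 1]
ccc = 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

print(f"Pearson r = {r:.3f}, concordance (agreement) = {ccc:.3f}")
```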
Kougiali, Zetta G; Fasulo, Alessandra; Needs, Adrian; Van Laar, Darren
2017-06-01
The dominant theoretical perspective that guides treatment evaluations in addiction assumes linearity in the relationship between treatment and outcomes, viewing behaviour change as a 'before and after' event. In this study we aim to examine how the direction of the trajectory of the process from addiction to recovery is constructed in personal narratives of active and recovering users. Twenty-one life stories from individuals at different stages of recovery and active use were collected and analysed following the principles of narrative analysis. Personal trajectories were constructed as discontinuous, non-linear and long-lasting patterns of repeated, and interchangeable, episodes of relapse and abstinence. Relapse appeared to be described as an integral part of a learning process through which knowledge leading to recovery was gradually obtained. The findings show that long-term recovery is represented as being preceded by periods of discontinuity before change is stabilised. Such periods are presented as lasting longer than most short-term pre-post intervention designs can capture and suggest the need to rethink how change is defined and measured.
Physical Activity and Health: "What is Old is New Again".
Hills, Andrew P; Street, Steven J; Byrne, Nuala M
2015-01-01
Much recent interest has focused on the relationship between physical activity and health, supported by an abundance of scientific evidence. However, the concept of Exercise is Medicine™, co-promoted by the American College of Sports Medicine, the American Medical Association and similar august bodies worldwide, is far from new--the importance of exercise for health has been reported for centuries. Participation in regular physical activity and exercise provides numerous benefits for health, with such benefits typically varying according to the volume completed as reflected by intensity, duration, and frequency. Evidence suggests a dose-response relationship such that being active, even to a modest level, is preferable to being inactive or sedentary. Greatest benefits are commonly associated with the previously sedentary individual assuming a more active lifestyle. There is an apparent linear relationship between physical activity and health status and, as a general rule, increases in physical activity and fitness result in additional improvements in health status. This narrative review provides a selective appraisal of the evidence for the importance of physical activity for health, commencing with a baseline historical perspective followed by a summary of key health benefits associated with an active lifestyle. © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahid, Ali, E-mail: ali.wahid@live.com; Salim, Ahmed Mohamed Ahmed, E-mail: mohamed.salim@petronas.com.my; Yusoff, Wan Ismail Wan, E-mail: wanismail-wanyusoff@petronas.com.my
2016-02-01
Geostatistics, or the statistical approach, is based on studies of temporal and spatial trends, which depend upon spatial relationships to model known information of variable(s) at unsampled locations. The statistical technique known as kriging was used for petrophysical and facies analysis; it assumes a spatial relationship to model the geological continuity between the known data and the unknown in order to produce a single best guess of the unknown. Kriging is also known as an optimal interpolation technique, which facilitates generating the best linear unbiased estimate of each horizon. The idea is to construct a numerical model of the lithofacies and rock properties that honors the available data, and further to integrate it with interpreted seismic sections, a tectonostratigraphic chart with a (short-term) sea level curve and the regional tectonics of the study area to find the structural and stratigraphic growth history of the NW Bonaparte Basin. Using the kriging technique, models were built that help to estimate different parameters such as horizons, facies, and porosities in the study area. Variograms were used to identify the spatial relationships between data, which help to find the depositional history of the North West (NW) Bonaparte Basin.
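For readers unfamiliar with kriging, the sketch below implements ordinary kriging at a single location with an exponential variogram. It is a generic textbook illustration, not the workflow used in the study; the locations, porosity values and variogram parameters are invented.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, a=500.0):
    """Ordinary kriging estimate at location xy0 from samples (xy, z),
    using an exponential variogram gamma(h) = sill * (1 - exp(-h/a))."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / a))
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing unbiasedness
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z

# Hypothetical well locations (m) and porosity values
xy = np.array([[0.0, 0.0], [400.0, 100.0], [150.0, 350.0]])
z = np.array([0.12, 0.18, 0.15])
print(f"kriged porosity: {ordinary_kriging(xy, z, np.array([200.0, 200.0])):.3f}")
```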
Predictive relationship between polyphenol and nonfat cocoa solids content of chocolate.
Cooper, Karen A; Campos-Giménez, Esther; Jiménez Alvarez, Diego; Rytz, Andreas; Nagy, Kornél; Williamson, Gary
2008-01-09
Chocolate is often labeled with percent cocoa solids content. It is assumed that higher cocoa solids contents are indicative of higher polyphenol concentrations, which have potential health benefits. However, cocoa solids include polyphenol-free cocoa butter and polyphenol-rich nonfat cocoa solids (NFCS). In this study the strength of the relationship between NFCS content (estimated by theobromine as a proxy) and polyphenol content was tested in chocolate samples with labeled cocoa solids contents in the range of 20-100%, grouped as dark (n = 46), milk (n = 8), and those chocolates containing inclusions such as wafers or nuts (n = 15). The relationship was calculated with regard to both total polyphenol content and individual polyphenols. In dark chocolates, NFCS is linearly related to total polyphenols (r2 = 0.73). Total polyphenol content appears to be systematically slightly higher for milk chocolates than estimated by the dark chocolate model, whereas for chocolates containing other ingredients, the estimates fall close to or slightly below the model results. This shows that extra components such as milk, wafers, or nuts might influence the measurements of both theobromine and polyphenol contents. For each of the six main polyphenols (as well as their sum), the relationship with the estimated NFCS was much lower than for total polyphenols (r2 < 0.40), but these relationships were independent of the nature of the chocolate type, indicating that they might still have some predictive capabilities.
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Guckenberger, Matthias; Klement, Rainer Johannes; Allgäuer, Michael; Appold, Steffen; Dieckmann, Karin; Ernst, Iris; Ganswindt, Ute; Holy, Richard; Nestle, Ursula; Nevinny-Stickel, Meinhard; Semrau, Sabine; Sterzing, Florian; Wittig, Andrea; Andratschke, Nicolaus; Flentje, Michael
2013-10-01
To compare the linear-quadratic (LQ) and the LQ-L formalism (linear cell survival curve beyond a threshold dose dT) for modeling local tumor control probability (TCP) in stereotactic body radiotherapy (SBRT) for stage I non-small cell lung cancer (NSCLC). This study is based on 395 patients from 13 German and Austrian centers treated with SBRT for stage I NSCLC. The median number of SBRT fractions was 3 (range 1-8) and median single fraction dose was 12.5 Gy (2.9-33 Gy); dose was prescribed to the median 65% PTV encompassing isodose (60-100%). Assuming an α/β-value of 10 Gy, we modeled TCP as a sigmoid-shaped function of the biologically effective dose (BED). Models were compared using maximum likelihood ratio tests as well as Bayes factors (BFs). There was strong evidence for a dose-response relationship in the total patient cohort (BFs>20), which was lacking in single-fraction SBRT (BFs<3). Using the PTV encompassing dose or maximum (isocentric) dose, our data indicated a LQ-L transition dose (dT) at 11 Gy (68% CI 8-14 Gy) or 22 Gy (14-42 Gy), respectively. However, the fit of the LQ-L models was not significantly better than a fit without the dT parameter (p=0.07, BF=2.1 and p=0.86, BF=0.8, respectively). Generally, isocentric doses resulted in much better dose-response relationships than PTV encompassing doses (BFs>20). Our data suggest accurate modeling of local tumor control in fractionated SBRT for stage I NSCLC with the traditional LQ formalism. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
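The two dose-response formalisms compared in the study can be summarized in a few lines. The sketch below computes the biologically effective dose (BED, alpha/beta = 10 Gy) under the LQ model and under one common LQ-L convention in which the log survival curve is continued linearly beyond dT with its slope at dT; the schedule shown is the study's median (3 x 12.5 Gy) and dT = 11 Gy is the fitted transition dose quoted above.

```python
def bed_lq(n, d, ab=10.0):
    """Biologically effective dose under the LQ model (alpha/beta = ab)."""
    return n * d * (1.0 + d / ab)

def bed_lql(n, d, ab=10.0, dT=11.0):
    """LQ-L variant: log survival continued linearly beyond the threshold
    dose dT with the LQ slope at dT (one common convention)."""
    if d <= dT:
        return bed_lq(n, d, ab)
    per_fraction = dT * (1.0 + dT / ab) + (d - dT) * (1.0 + 2.0 * dT / ab)
    return n * per_fraction

print(f"LQ   BED10: {bed_lq(3, 12.5):.1f} Gy")
print(f"LQ-L BED10 (dT = 11 Gy): {bed_lql(3, 12.5):.1f} Gy")
```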
Yu, Dahai; Armstrong, Ben G.; Pattenden, Sam; Wilkinson, Paul; Doherty, Ruth M.; Heal, Mathew R.; Anderson, H. Ross
2012-01-01
Background: Short-term exposure to ozone has been associated with increased daily mortality. The shape of the concentration–response relationship—and, in particular, if there is a threshold—is critical for estimating public health impacts. Objective: We investigated the concentration–response relationship between daily ozone and mortality in five urban and five rural areas in the United Kingdom from 1993 to 2006. Methods: We used Poisson regression, controlling for seasonality, temperature, and influenza, to investigate associations between daily maximum 8-hr ozone and daily all-cause mortality, assuming linear, linear-threshold, and spline models for all-year and season-specific periods. We examined sensitivity to adjustment for particles (urban areas only) and alternative temperature metrics. Results: In all-year analyses, we found clear evidence for a threshold in the concentration–response relationship between ozone and all-cause mortality in London at 65 µg/m3 [95% confidence interval (CI): 58, 83] but little evidence of a threshold in other urban or rural areas. Combined linear effect estimates for all-cause mortality were comparable for urban and rural areas: 0.48% (95% CI: 0.35, 0.60) and 0.58% (95% CI: 0.36, 0.81) per 10-µg/m3 increase in ozone concentrations, respectively. Seasonal analyses suggested thresholds in both urban and rural areas for effects of ozone during summer months. Conclusions: Our results suggest that health impacts should be estimated across the whole ambient range of ozone using both threshold and nonthreshold models, and models stratified by season. Evidence of a threshold effect in London but not in other study areas requires further investigation. The public health impacts of exposure to ozone in rural areas should not be overlooked. PMID:22814173
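A linear-threshold ("hockey-stick") concentration-response can be profiled by refitting a Poisson regression over a grid of candidate thresholds and keeping the likelihood-maximizing one. The sketch below does this on simulated data; it omits the seasonality, temperature and influenza controls used in the actual study, and all numbers are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = 2000
ozone = rng.uniform(20, 160, days)              # daily max 8-hr ozone, µg/m3
true_thr = 65.0                                 # assumed threshold
lam = np.exp(np.log(30) + 0.0005 * np.maximum(ozone - true_thr, 0))
deaths = rng.poisson(lam)                       # simulated daily mortality

def fit_threshold(t):
    # Poisson GLM with a hinge term max(0, ozone - t) above the threshold
    X = sm.add_constant(np.maximum(ozone - t, 0.0))
    return sm.GLM(deaths, X, family=sm.families.Poisson()).fit()

# Profile the threshold by maximizing the Poisson log-likelihood
grid = np.arange(30, 120, 5.0)
best = max(grid, key=lambda t: fit_threshold(t).llf)
print(f"best-fit threshold ≈ {best:.0f} µg/m3")
```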
On an apparent discrepancy between pulsation and evolution masses for Cepheids.
NASA Technical Reports Server (NTRS)
Iben, I., Jr.; Tuggle, R. S.
1972-01-01
Results of new theoretical pulsation calculations in the linear nonadiabatic approximation are presented. Emphasis is placed on the location of blue edges (the borderline between stability and instability against pulsation) for pulsation in the fundamental mode. The results of evolutionary calculations for the helium-burning phase are introduced, and a theoretical period-luminosity relationship is obtained for Cepheids that lie on the blue edge of the instability strip. The theoretical results are then compared with current estimates of the intrinsic bulk properties of 13 Cepheids, and it is shown how theoretical and observational properties may be reconciled without assuming significant mass loss or the necessity of major adjustments in the theory. Finally, it is argued that the required revision in Cepheid luminosities lies within the observational uncertainties.
Gong, Rui; Xu, Haisong; Tong, Qingfen
2012-10-20
The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on a colorimetric evaluation of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely the polynomial compensation model (PC model), which considers channel interactions, was proposed for AMOLED panels. In this model, polynomial expressions are employed to calculate the relationship between the prediction errors of XYZ tristimulus values and the digital inputs, to compensate for the XYZ prediction errors of the conventional piecewise linear interpolation assuming variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays, as well as for a professional liquid crystal display monitor.
Instrumental Variable Analysis with a Nonlinear Exposure–Outcome Relationship
Davies, Neil M.; Thompson, Simon G.
2014-01-01
Background: Instrumental variable methods can estimate the causal effect of an exposure on an outcome using observational data. Many instrumental variable methods assume that the exposure–outcome relation is linear, but in practice this assumption is often in doubt, or perhaps the shape of the relation is a target for investigation. We investigate this issue in the context of Mendelian randomization, the use of genetic variants as instrumental variables. Methods: Using simulations, we demonstrate the performance of a simple linear instrumental variable method when the true shape of the exposure–outcome relation is not linear. We also present a novel method for estimating the effect of the exposure on the outcome within strata of the exposure distribution. This enables the estimation of localized average causal effects within quantile groups of the exposure or as a continuous function of the exposure using a sliding window approach. Results: Our simulations suggest that linear instrumental variable estimates approximate a population-averaged causal effect. This is the average difference in the outcome if the exposure for every individual in the population is increased by a fixed amount. Estimates of localized average causal effects reveal the shape of the exposure–outcome relation for a variety of models. These methods are used to investigate the relations between body mass index and a range of cardiovascular risk factors. Conclusions: Nonlinear exposure–outcome relations should not be a barrier to instrumental variable analyses. When the exposure–outcome relation is not linear, either a population-averaged causal effect or the shape of the exposure–outcome relation can be estimated. PMID:25166881
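A minimal sketch of the two ideas above: a standard two-stage least-squares estimate, and localized IV estimates within quantile groups of the instrument-free part of the exposure (a simplified stand-in for the stratification approach described in the abstract). The data-generating model, with a quadratic exposure-outcome relation and an allele-count instrument, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
g = rng.binomial(2, 0.3, n).astype(float)    # genetic instrument (allele count)
u = rng.normal(size=n)                       # unobserved confounder
bmi = 25 + 0.8 * g + u + rng.normal(size=n)
sbp = 120 + 0.02 * (bmi - 25) ** 2 + 2 * u + rng.normal(size=n)  # non-linear effect

def tsls(y, x, z):
    """Two-stage least squares with a single instrument z."""
    xhat = np.polyval(np.polyfit(z, x, 1), z)   # stage 1: exposure on instrument
    return np.polyfit(xhat, y, 1)[0]            # stage 2: outcome on fitted exposure

print(f"population-averaged IV slope: {tsls(sbp, bmi, g):.3f}")

# Localized effects: stratify on the instrument-free part of the exposure,
# so that the instrument still varies within each stratum.
x0 = bmi - np.polyval(np.polyfit(g, bmi, 1), g)
edges = np.quantile(x0, [0, 0.25, 0.5, 0.75, 1.0])
for q in range(4):
    m = (x0 >= edges[q]) & (x0 < edges[q + 1]) if q < 3 else (x0 >= edges[3])
    print(f"quartile {q + 1}: localized IV slope ≈ {tsls(sbp[m], bmi[m], g[m]):.3f}")
```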
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, B.
1994-12-31
This paper describes an elastic-plastic fracture mechanics (EPFM) study of shallow weld-toe cracks. Two limiting crack configurations, a plane strain edge crack and a semi-circular surface crack in a fillet welded T-butt plate joint, were analyzed using the finite element method. Crack depths ranging from 2 to 40% of plate thickness were considered. The elastic-plastic analysis, assuming a power-law hardening relationship and the Mises yield criterion, was based on incremental plasticity theory. Tension and bending loads applied were monotonically increased to a level causing relatively large scale yielding at the crack tip. Effects of weld-notch geometry and ductile material modeling on prediction of the fracture mechanics characterizing parameter were assessed. It was found that the weld-notch effect diminishes and the effect of material modeling increases as crack depth increases. Material modeling is less important than geometric modeling in the analysis of very shallow cracks but is more important for relatively deeper cracks, e.g. crack depths of more than 20% of thickness. The effect of material modeling can be assessed using a simplified structural model. Weld magnification factors derived assuming linear elastic conditions can be applied to EPFM characterization.
Visual effects in great bowerbird sexual displays and their implications for signal design
Endler, John A.; Gaburro, Julie; Kelley, Laura A.
2014-01-01
It is often assumed that the primary purpose of a male's sexual display is to provide information about quality, or to strongly stimulate prospective mates, but other functions of courtship displays have been relatively neglected. Male great bowerbirds (Ptilonorhynchus nuchalis) construct bowers that exploit the female's predictable field of view (FOV) during courtship displays by creating forced perspective illusions, and the quality of illusion is a good predictor of mating success. Here, we present and discuss two additional components of male courtship displays that use the female's predetermined viewpoint: (i) the rapid and diverse flashing of coloured objects within her FOV and (ii) chromatic adaptation of the female's eyes that alters her perception of the colour of the displayed objects. Neither is directly related to mating success, but both are likely to increase signal efficacy, and may also be associated with attracting and holding the female's attention. Signal efficacy is constrained by trade-offs between the signal components; there are both positive and negative interactions within multicomponent signals. Important signal components may have a threshold effect on fitness rather than the often assumed linear relationship. PMID:24695430
Embracing complexity: theory, cases and the future of bioethics.
Wilson, James
2014-01-01
This paper reflects on the relationship between theory and practice in bioethics, by using various concepts drawn from debates on innovation in healthcare research--in particular debates around how best to connect up blue skies 'basic' research with practical innovations that can improve human lives. It argues that it is a mistake to assume that the most difficult and important questions in bioethics are the most abstract ones, and also a mistake to assume that getting clear about abstract cases will automatically be of much help in getting clear about more complex cases. It replaces this implicitly linear model with a more complex one that draws on the idea of translational research in healthcare. On the translational model, there is a continuum of cases from the most simple and abstract (thought experiments) to the most concrete and complex (real world cases). Insights need to travel in both directions along this continuum--from the more abstract to the more concrete and from the more concrete to the more abstract. The paper maps out some difficulties in moving from simpler to more complex cases, and in doing so makes recommendations about the future of bioethics.
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Mukherjee, Snigdha; Canterberry, Melanie; Yore, Jennifer B; Ledford, Edward Cannon; Carton, Thomas W
2017-08-24
The relationship between mental health status and smoking is complicated and often confounded by bi-directionality, yet most research on this relationship assumes exogeneity. The goal of this article is to implement an instrumental variable approach to (1) test the exogeneity assumption and (2) report on the association between mental health status and smoking post-disaster. This analysis utilizes the 2006 and 2007 Louisiana Behavioral Risk Factor Surveillance Survey to examine the link between mental distress and smoking in areas affected by Hurricanes Katrina and Rita. Residence in a hurricane-affected parish (county) was used as an instrumental variable for mental distress. Just over 22% of the sample resided in a hurricane-affected parish. Residents of hurricane-affected parishes were significantly more likely to report occasional and frequent mental distress. Residence in a hurricane-affected parish was not significantly associated with smoking status. With residence established as a salient instrumental variable for mental distress, the exogeneity assumption was tested and confirmed in this sample. A dose-response relationship existed between mental distress and smoking, with smoking prevalence increasing directly (and non-linearly) with mental distress. In this sample, the relationship between mental distress and smoking status was exogenous and followed a dose-response relationship, suggesting that the disasters did not result in an uptake of smoking initiation, but that the higher amounts of mental distress may lead to increased use among smokers. The findings suggest that tobacco control programs should devise unique strategies to address mentally distressed populations.
Changes of Linearity in MF2 Index with R12 and Solar Activity Maximum
NASA Astrophysics Data System (ADS)
Villanueva, L.
2013-05-01
The critical frequency of the F2 layer is related to solar activity, and the sunspot number has been the standard index for ionospheric prediction programs. This layer, considered the most important for HF radio communications due to its highest electron density, determines the maximum frequency returned from ground-based transmitter signals, and shows irregular variation in time and space. Nowadays the spatial variation, better understood due to the availability of TEC measurements, lets Space Weather Centers obtain observations almost in real time. However, it is still the most difficult layer to predict in time. Short-term variations are improved in the IRI model, but long-term predictions rely only on the well-known CCIR and URSI coefficients and solar activity R12 predictions (or ionospheric indexes in regional models). The concept of the "saturation" of the ionosphere is based on data observations over about 3 solar cycles before 1970 (NBS, 1968). There is a linear relationship between MUF(0 km) and R12 for smoothed sunspot numbers R12 less than 100, but it becomes constant for higher R12, so no rise of MUF is expected for R12 higher than 100. This recommendation has been used in most of the known ionospheric prediction programs for HF radio communication. In this work, observations of the smoothed ionospheric index MF2 related to R12 are presented to find common features of the linear relationship, which is found to persist over different ranges of R12 depending on the specific maximum level of each solar cycle. In the analysis of individual solar cycles, the range of linearity extends to less than 100 for a low solar cycle and to more than 100 for a high solar cycle. To improve ionospheric predictions we can establish levels for solar cycle maximum sunspot numbers R12 of around 100 (low), 150 (medium) and 200 (high) and specify the corresponding ranges of linearity of MUF(0 km) with respect to R12, rather than the single value of 100 assumed for all solar cycles. For lower levels of the solar cycle, the present observations are discussed.
The Differentiation of Adaptive Behaviours: Evidence from High and Low Performers
ERIC Educational Resources Information Center
Kane, Harrison; Oakland, Thomas David
2015-01-01
Background: Professionals who use measures of adaptive behaviour when working with special populations may assume that adaptive behaviour is a consistent and linear construct at various ability levels and thus believe the construct of adaptive behaviour is the same for high and low performers. That is, highly adaptive people simply are assumed to…
ERIC Educational Resources Information Center
Kameda, Tatsuya; Tsukasaki, Takafumi; Hastie, Reid; Berg, Nathan
2011-01-01
We introduce a game theory model of individual decisions to cooperate by contributing personal resources to group decisions versus by free riding on the contributions of other members. In contrast to most public-goods games that assume group returns are linear in individual contributions, the present model assumes decreasing marginal group…
A Geomorphologic Synthesis of Nonlinearity in Surface Runoff
NASA Astrophysics Data System (ADS)
Wang, C. T.; Gupta, Vijay K.; Waymire, Ed
1981-06-01
The geomorphic approach leading to a representation of an instantaneous unit hydrograph (iuh) which we developed earlier is generalized to incorporate nonlinear effects in the rainfall-runoff transformation. It is demonstrated that the nonlinearity in the transformation enters in part through the dependence of the mean holding time on the rainfall intensity. Under an assumed first approximation that this dependence is the sole source of nonlinearity, an explicit quasi-linear representation results for the rainfall-runoff transformation. The kernel function of this transformation can be termed the instantaneous response function (irf) in contradistinction to the notion of an iuh for the case of a linear rainfall-runoff transformation. The predictions from the quasi-linear theory agree very well with predictions from the kinematic wave approach for the one small basin that is analyzed. Also, for two large basins in Illinois having areas of about 1100 mi2, the predictions from the quasi-linear approach compare very well with the observed flows. A measure of nonlinearity, α, naturally arises through the dependence of the mean holding time K_B(i_0) on the rainfall intensity i_0 via K_B(i_0) ~ i_0^(-α). Computations of α for four basins show that α approaches ⅔ as basin size decreases and approaches zero as basin size increases. A semilog plot of α versus the square root of the basin area gives a straight line. Confirmation of this relationship for other basins would be of basic importance in predicting flows from ungaged basins.
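The nonlinearity measure α can be recovered from data as the negative slope of a log-log regression of the mean holding time on rainfall intensity, since K_B(i_0) ~ i_0^(-α). A minimal sketch with invented values:

```python
import numpy as np

# Hypothetical (rainfall intensity i0, mean holding time KB) pairs
i0 = np.array([5.0, 10.0, 20.0, 40.0])   # mm/h
kb = np.array([3.2, 2.1, 1.35, 0.88])    # h

# KB(i0) ~ i0**(-alpha): alpha is the negative slope on log-log axes
alpha = -np.polyfit(np.log(i0), np.log(kb), 1)[0]
print(f"nonlinearity measure alpha ≈ {alpha:.2f}")  # ≈ 2/3 for a small basin
```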
Relaxation and self-organization in two-dimensional plasma and neutral fluid flow systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Amita
Extensive numerical studies in the framework of a simplified two-dimensional model for neutral and plasma fluid for a variety of initial configurations and for both decaying and driven cases are carried out to illustrate relaxation toward a self-organized state. The dynamical model equation constitutes a simple choice for this purpose, e.g., the vorticity equation of the Navier-Stokes dynamics for the incompressible neutral fluids and the Hasegawa-Mima equation for plasma fluid flow system. Scatter plots are employed to observe a development of functional relationship, if any, amidst the generalized vorticity and its Laplacian. It is seen that they do not satisfy a linear relationship as the well-known variational approach of enstrophy minimization subject to constancy of the energy integral for the two-dimensional (2D) system suggests. The observed nonlinear functional relationship is understood by separating the contribution to the scatter plot from spatial regions with intense vorticity patches and those of the background flow region where the background vorticity is weak or absent altogether. It is shown that such a separation has close connection with the known exact analytical solutions of the system. The analytical solutions are typically obtained by assuming a finite source of vorticity for the inner core of the localized structure, which is then matched with the solution in the outer region where vorticity is chosen to be zero. The work also demonstrates that the seemingly ad hoc choice of the linear vorticity source function for the inner region is in fact consistent with the self-organization paradigm of the 2D systems.
Body Condition Peaks at Intermediate Parasite Loads in the Common Bully Gobiomorphus cotidianus
Maceda-Veiga, Alberto; Green, Andy J.; Poulin, Robert; Lagrue, Clément
2016-01-01
Most ecologists and conservationists perceive parasitic infections as deleterious for the hosts. Their effects, however, depend on many factors including host body condition, parasite load and the life cycle of the parasite. More research into how multiple parasite taxa affect host body condition is required and will help us to better understand host-parasite coevolution. We used body condition indices, based on mass-length relationships, to test the effects that abundances and biomasses of six parasite taxa (five trematodes, Apatemon sp., Tylodelphys sp., Stegodexamene anguillae, Telogaster opisthorchis, Coitocaecum parvum, and the nematode Eustrongylides sp.) with different modes of transmission have on the body condition of their intermediate or final fish host, the common bully Gobiomorphus cotidianus in New Zealand. We used two alternative body condition methods, the Scaled Mass Index (SMI) and Fulton’s condition factor. General linear and hierarchical partitioning models consistently showed that fish body condition varied strongly across three lakes and seasons, and that most parasites did not have an effect on the two body condition indices. However, fish body condition showed a highly significant humpbacked relationship with the total abundance of all six parasite taxa, mostly driven by Apatemon sp. and S. anguillae, indicating that the effects of these parasites can range from positive to negative as abundance increases. Such a response was also evident in models including total parasite biomass. Our methodological comparison supports the SMI as the most robust mass-length method to examine the effects of parasitic infections on fish body condition, and suggests that linear, negative relationships between host condition and parasite load should not be assumed. PMID:28030606
Structural Aspects of System Identification
NASA Technical Reports Server (NTRS)
Glover, Keith
1973-01-01
The problem of identifying linear dynamical systems is studied by considering structural and deterministic properties of linear systems that have an impact on stochastic identification algorithms. In particular, the parametrization of linear systems is considered such that there is a unique solution and all systems in an appropriate class can be represented. It is assumed that a parametrization of the system matrices has been established from a priori knowledge of the system, and the question is considered of when the unknown parameters of this system can be identified from input/output observations. It is assumed that the transfer function can be asymptotically identified, and conditions are derived for the local, global and partial identifiability of the parametrization. It is then shown that, with the right formulation, identifiability in the presence of feedback can be treated in the same way. Similarly, the identifiability of parametrizations of systems driven by unobserved white noise is considered using results from the theory of spectral factorization.
NASA Astrophysics Data System (ADS)
Rozylo, Patryk; Teter, Andrzej; Debski, Hubert; Wysmulski, Pawel; Falkowicz, Katarzyna
2017-10-01
The objects of the research are short, thin-walled columns with an open top-hat cross-section made of multilayer laminate. The walls of the investigated profiles are made of plate elements. The entire columns are subjected to uniform compression. A detailed analysis allowed us to determine critical forces and post-critical equilibrium paths. It is assumed that the columns are articulately supported on the edges forming their ends. The numerical investigation is performed by the finite element method. The study involves solving the eigenvalue problem and the non-linear stability problem of the structure. The numerical analysis is performed with the commercial simulation software ABAQUS®. The numerical results are then validated experimentally. In the discussed cases, it is assumed that the material operates within a linearly-elastic range, and the non-linearity of the FEM model is due to large displacements.
Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H
2017-12-27
In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy --internal stress-- transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
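A common way to write an Eyring stress dependence like the one mentioned above is as a hyperbolic-sine acceleration of the local relaxation time, τ(σ) = τ0·x/sinh(x) with x = σv/(2kT). This is a generic parameterization, not necessarily the exact form used by the authors, and the parameter values below are invented.

```python
import numpy as np

def eyring_tau(tau0, sigma, v_act, kT):
    """Eyring-type stress acceleration of a relaxation time:
    tau(sigma) = tau0 * x / sinh(x), x = sigma*v_act/(2*kT);
    reduces to tau0 as sigma -> 0."""
    x = np.asarray(sigma, dtype=float) * v_act / (2.0 * kT)
    x = np.where(x < 1e-12, 1e-12, x)   # avoid 0/0 at zero stress
    return tau0 * x / np.sinh(x)

sigma = np.array([0.0, 1e6, 1e7, 5e7])   # Pa (illustrative)
v_act, kT = 5e-27, 1.38e-23 * 400        # m3, J (illustrative)
print(eyring_tau(1.0, sigma, v_act, kT)) # relaxation times shrink with stress
```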
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2009-08-01
In this paper, the state least-squares linear estimation problem from correlated uncertain observations coming from multiple sensors is addressed. It is assumed that, at each sensor, the state is measured in the presence of additive white noise and that the uncertainty in the observations is characterized by a set of Bernoulli random variables which are only correlated at consecutive time instants. Assuming that the statistical properties of such variables are not necessarily the same for all the sensors, a recursive filtering algorithm is proposed, and the performance of the estimators is illustrated by a numerical simulation example wherein a signal is estimated from correlated uncertain observations coming from two sensors with different uncertainty characteristics.
The genetic and economic effect of preliminary culling in the seedling orchard
Don E. Riemenschneider
1977-01-01
The genetic and economic effects of two stages of truncation selection in a white spruce seedling orchard were investigated by computer simulation. Genetic effects were computed by assuming a bivariate distribution of juvenile and mature traits and volume was used as the selection criterion. Seed production was assumed to rise in a linear fashion to maturity and then...
Prehn, Richmond T
2010-01-01
All nascent neoplasms probably elicit at least a weak immune reaction. However, the initial effect of the weak immune reaction on a nascent tumor is always stimulatory rather than inhibitory to tumor growth, assuming only that exposure to the tumor antigens did not antedate the initiation of the neoplasm (as may occur in some virally induced tumors). This conclusion derives from the observation that the relationship between the magnitude of an adaptive immune reaction and tumor growth is not linear but varies such that while large quantities of antitumor immune reactants tend to inhibit tumor growth, smaller quantities of the same reactants are, for unknown reasons, stimulatory. Any immune reaction must presumably be small before it can become large; hence the initial reaction to the first presentation of a tumor antigen must always be small and in the stimulatory portion of this nonlinear relationship. In mouse-skin carcinogenesis experiments it was found that premalignant papillomas were variously immunogenic, but that the carcinomas that arose in them were, presumably because of induced immune tolerance, nonimmunogenic in the animal of origin.
The relationship between carbohydrate and the mealtime insulin dose in type 1 diabetes.
Bell, Kirstine J; King, Bruce R; Shafat, Amir; Smart, Carmel E
2015-01-01
A primary focus of the nutritional management of type 1 diabetes has been on matching prandial insulin therapy with carbohydrate amount consumed. Different methods exist to quantify carbohydrate including counting in one gram increments, 10g portions or 15g exchanges. Clinicians have assumed that counting in one gram increments is necessary to precisely dose insulin and optimize postprandial control. Carbohydrate estimations in portions or exchanges have been thought of as inadequate because they may result in less precise matching of insulin dose to carbohydrate amount. However, studies examining the impact of errors in carbohydrate quantification on postprandial glycemia challenge this commonly held view. In addition it has been found that a single mealtime bolus of insulin can cover a range of carbohydrate intake without deterioration in postprandial control. Furthermore, limitations exist in the accuracy of the nutrition information panel on a food label. This article reviews the relationship between carbohydrate quantity and insulin dose, highlighting limitations in the evidence for a linear association. These insights have significant implications for patient education and mealtime insulin dose calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
Li, Lei; Quinlivan, Patricia A; Knappe, Detlef R U
2005-05-01
A method based on the Polanyi-Dubinin-Manes (PDM) model is presented to predict adsorption isotherms of aqueous organic contaminants on activated carbons. It was assumed that trace organic compound adsorption from aqueous solution is primarily controlled by nonspecific dispersive interactions while water adsorption is controlled by specific interactions with oxygen-containing functional groups on the activated carbon surface. Coefficients describing the affinity of water for the activated carbon surface were derived from aqueous-phase methyl tertiary-butyl ether (MTBE) and trichloroethene (TCE) adsorption isotherm data that were collected with 12 well-characterized activated carbons. Over the range of oxygen contents covered by the adsorbents (approximately 0.8-10 mmol O/g dry, ash-free activated carbon), a linear relationship between water affinity coefficients and adsorbent oxygen content was obtained. When water affinity coefficients calculated from the developed relationship were incorporated into the PDM model, the resulting isotherm predictions agreed well with experimental data for three adsorbents and two adsorbates [tetrachloroethene (PCE), cis-1,2-dichloroethene (DCE)] that were not used to calibrate the model.
Heat flow and heat generation in greenstone belts
NASA Technical Reports Server (NTRS)
Drury, M. J.
1986-01-01
Heat flow has been measured in Precambrian shields in both greenstone belts and crystalline terrains. Values are generally low, reflecting the great age and tectonic stability of the shields; they range typically between 30 and 50 mW/sq m, although extreme values of 18 and 79 mW/sq m have been reported. For large areas of the Earth's surface that are assumed to have been subjected to a common thermotectonic event, plots of heat flow against heat generation appear to be linear, although there may be considerable scatter in the data. The relationship is expressed as Q = Q_0 + D*A_0, in which Q is the observed heat flow, A_0 is the measured heat generation at the surface, Q_0 is the reduced heat flow from the lower crust and mantle, and D, which has the dimension of length, represents a scale depth for the distribution of radiogenic elements. Most authors have not used data from greenstone belts in attempting to define the relationship within shields, considering them unrepresentative and preferring to use data from relatively homogeneous crystalline rocks. A discussion follows.
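Given paired heat-flow and heat-generation measurements from one province, Q_0 and D are simply the intercept and slope of an ordinary least-squares line. A minimal sketch with invented values (the units work out so that D comes out in km):

```python
import numpy as np

# Hypothetical paired measurements from one heat-flow province
A0 = np.array([0.4, 1.1, 2.3, 3.0])       # surface heat generation, µW/m3
Q = np.array([34.0, 40.0, 49.0, 55.0])    # observed heat flow, mW/m2

D, Q0 = np.polyfit(A0, Q, 1)              # fit Q = Q0 + D*A0
# (mW/m2) / (µW/m3) = 1000 m, so D is directly in km
print(f"reduced heat flow Q0 ≈ {Q0:.1f} mW/m2, scale depth D ≈ {D:.1f} km")
```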
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2011-06-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
The Effects of Impurities on Protein Crystal Growth and Nucleation: A Preliminary Study
NASA Technical Reports Server (NTRS)
Schall, Constance A.
1998-01-01
Kubota and Mullin (1995) devised a simple model to account for the effects of impurities on crystal growth of small inorganic and organic molecules in aqueous solutions. Experimentally, the relative step velocity and crystal growth of these molecules asymptotically approach zero or non-zero values with increasing concentrations of impurities. Alternatively, the step velocity and crystal growth can linearly approach zero as the impurity concentration increases. The Kubota-Mullin model assumes that the impurity exhibits Langmuirian adsorption onto the crystal surface. Decreases in step velocities and subsequent growth rates are related to the fractional coverage (theta) of the crystal surface by adsorbed impurities: theta = Kx/(1 + Kx), where x is the mole fraction of impurity in solution. In the presence of impurities, the relative step velocity, V/Vo, and the relative growth rate of a crystal face, G/Go, are proposed to conform to the following equation: V/Vo ≈ G/Go = 1 - alpha*theta. The adsorption of impurity is assumed to be rapid and in quasi-equilibrium with the crystal surface sites available. When the value of alpha, an effectiveness factor, is one, growth will asymptotically approach zero with increasing concentrations of impurity. At values less than one, growth approaches a non-zero value asymptotically. When alpha is much greater than one, there will be a linear relationship between impurity concentration and growth rates. Kubota and Mullin expect alpha to decrease with increasing supersaturation and shrinking size of a two-dimensional nucleus. It is expected that impurity effects on protein crystal growth will exhibit behavior similar to that of impurities in small molecule growth. A number of proteins were added to purified chicken egg white lysozyme, and the effect on crystal nucleation and growth assessed.
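The model equations above are straightforward to evaluate. The sketch below computes the relative growth rate for three effectiveness factors, reproducing the three regimes described: asymptotic approach to a non-zero value (alpha < 1), asymptotic approach to zero (alpha = 1), and a sharp, nearly linear kill-off (alpha >> 1). The value of K and the impurity mole fractions are invented.

```python
import numpy as np

def relative_growth(x, K, alpha):
    """Kubota-Mullin relative growth G/G0 = 1 - alpha*theta, with
    Langmuir coverage theta = K*x/(1 + K*x); clipped at zero."""
    theta = K * x / (1.0 + K * x)
    return np.maximum(1.0 - alpha * theta, 0.0)

x = np.logspace(-7, -3, 5)        # impurity mole fraction (illustrative)
for alpha in (0.5, 1.0, 5.0):
    g = relative_growth(x, K=1e5, alpha=alpha)
    print(f"alpha = {alpha}: G/G0 = {np.round(g, 2)}")
```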
Relationships between brightness of nighttime lights and population density
NASA Astrophysics Data System (ADS)
Naizhuo, Z.
2012-12-01
Brightness of nighttime lights has been proven to be a good proxy for socioeconomic and demographic statistics. Moreover, satellite nighttime lights data have been used to spatially disaggregate amounts of gross domestic product (GDP), fossil fuel carbon dioxide emissions, and electric power consumption (Ghosh et al., 2010; Oda and Maksyutov, 2011; Zhao et al., 2012). Spatial disaggregations were performed in these previous studies based on assumed linear relationships between the digital number (DN) value of pixels in the nighttime light images and socioeconomic data. However, the reliability of the linear relationships was never tested due to a lack of relatively high-spatial-resolution (equal to or finer than 1 km × 1 km) statistical data. With the similar assumption that brightness linearly correlates with population, Bharti et al. (2011) used nighttime light data as a proxy for population density and then developed a model of seasonal fluctuations of measles in West Africa. The Oak Ridge National Laboratory used sub-national census population data and high-spatial-resolution remotely sensed images to produce the LandScan population raster datasets. The LandScan population datasets have 1 km × 1 km spatial resolution, which is consistent with the spatial resolution of the nighttime light images. Therefore, in this study I selected the 2008 LandScan population data as baseline reference data and the contiguous United States as the study area. Relationships between the DN value of pixels in the 2008 Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) stable light image and population density were established. Results showed that an exponential function can more accurately reflect the relationship between luminosity and population density than a linear function. Additionally, a certain number of saturated pixels with a DN value of 63 exist in urban core areas. If the exponential function is used directly to estimate population density for the whole brightly lit area, relatively large under-estimations emerge in the urban core regions. Previous studies have shown that GDP, carbon dioxide emissions, and electric power consumption strongly correlate with urban population (Ghosh et al., 2010; Sutton et al., 2007; Zhao et al., 2012). Thus, although this study only examined the relationships between brightness of nighttime lights and population density, the results can provide insight for the spatial disaggregation of socioeconomic data (e.g. GDP, carbon dioxide emissions, and electric power consumption) using satellite nighttime light image data. Simply distributing the socioeconomic data to each pixel in proportion to the DN value of the nighttime light images may generate relatively large errors. References: Bharti N, Tatem AJ, Ferrari MJ, Grais RF, Djibo A, Grenfell BT, 2011. Science, 334:1424-1427. Ghosh T, Elvidge CD, Sutton PC, Baugh KE, Ziskin D, Tuttle BT, 2010. Energies, 3:1895-1913. Oda T, Maksyutov S, 2011. Atmospheric Chemistry and Physics, 11:543-556. Sutton PC, Elvidge CD, Ghosh T, 2007. International Journal of Ecological Economics and Statistics, 8:5-21. Zhao N, Ghosh T, Samson EL, 2012. International Journal of Remote Sensing, 33:6304-6320.
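The exponential luminosity-density relationship argued for above can be fitted as a log-linear regression on unsaturated pixels. A minimal sketch with invented (DN, density) pairs; saturated pixels (DN = 63) would need separate handling, as the abstract notes.

```python
import numpy as np

# Hypothetical (DN value, population density) pairs for unsaturated pixels
dn = np.array([5, 10, 20, 30, 40, 50, 60])
pop = np.array([12, 25, 90, 260, 700, 2100, 5800])   # persons/km2

# Exponential model pop = a*exp(b*DN), fitted as a log-linear regression
b, ln_a = np.polyfit(dn, np.log(pop), 1)
print(f"pop ≈ {np.exp(ln_a):.1f} * exp({b:.3f} * DN)")
```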
Relationships between coordination, active drag and propelling efficiency in crawl.
Seifert, Ludovic; Schnitzler, Christophe; Bideault, Gautier; Alberty, Morgan; Chollet, Didier; Toussaint, Huub Martin
2015-02-01
This study examines the relationships between the index of coordination (IdC) and active drag (D), assuming that at constant average speed, average drag equals average propulsion. The relationship between IdC and propulsive efficiency (ep) was also investigated at maximal speed. Twenty national swimmers completed two incremental speed tests swimming front crawl with arms only, in a free condition and using a measurement of active drag system. Each test was composed of eight 25-m bouts from 60% to 100% of maximal intensity, whereby each lap was swum at constant speed. Different regression models were tested to analyse the IdC-D relationship. The correlation between IdC and ep was calculated. IdC was linked to D by linear regression (IdC = 0.246·D - 27.06; R2 = 0.88, P < .05); swimmers switched from catch-up to superposition coordination mode at a speed of ~1.55 m/s, where average D is ~110 N. No correlation between IdC and ep at maximal speed was found. The intra-individual analysis revealed that coordination plays an important role in scaling propulsive forces with higher speed levels such that these are adapted to aquatic resistance. Inter-individual analysis showed that a high IdC did not relate to a high ep, suggesting that an individual optimization of force and power generation is at play to reach high speeds. Copyright © 2014 Elsevier B.V. All rights reserved.
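A small worked check of the reported regression: the coordination switch occurs at IdC = 0, so the transition drag follows directly from IdC = 0.246·D - 27.06.

```python
# IdC = 0 marks the switch from catch-up (IdC < 0) to superposition
# (IdC > 0); solving the reported regression for IdC = 0 recovers the
# drag at the coordination transition.
D_transition = 27.06 / 0.246
print(f"D at coordination switch ≈ {D_transition:.0f} N")   # ~110 N
```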
A dual-loop model of the human controller in single-axis tracking tasks
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A dual loop model of the human controller in single axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure which involves feeding back that portion of the controlled element output rate which is due to control activity. The sensory inputs to the human controller are assumed to be system error and control force. The former is assumed to be sensed via visual, aural, or tactile displays while the latter is assumed to be sensed in kinesthetic fashion. A nonlinear form of the model is briefly discussed. This model is then linearized and parameterized. A set of general adaptive characteristics for the parameterized model is hypothesized. These characteristics describe the manner in which the parameters in the linearized model will vary with such things as display quality. It is demonstrated that the parameterized model can produce controller describing functions which closely approximate those measured in laboratory tracking tasks for a wide variety of controlled elements.
Longwave emission trends over Africa and implications for Atlantic hurricanes
NASA Astrophysics Data System (ADS)
Zhang, Lei; Rechtman, Thomas; Karnauskas, Kristopher B.; Li, Laifang; Donnelly, Jeffrey P.; Kossin, James P.
2017-09-01
The latitudinal gradient of outgoing longwave radiation (OLR) over Africa is a skillful and physically based predictor of seasonal Atlantic hurricane activity. The African OLR gradient is observed to have strengthened during the satellite era, as predicted by state-of-the-art global climate models (GCMs) in response to greenhouse gas forcing. Prior to the satellite era and the U.S. and European clean air acts, the African OLR gradient weakened due to aerosol forcing of the opposite sign. GCMs predict a continuation of the increasing OLR gradient in response to greenhouse gas forcing. Assuming a steady linear relationship between African easterly waves and tropical cyclogenesis, this result suggests a future increase in Atlantic tropical cyclone frequency by 10% (20%) at the end of the 21st century under the RCP 4.5 (8.5) forcing scenario.
NASA Technical Reports Server (NTRS)
Dillard, D. A.; Morris, D. H.; Brinson, H. F.
1981-01-01
An incremental numerical procedure based on lamination theory is developed to predict creep and creep rupture of general laminates. Existing unidirectional creep compliance and delayed failure data are used to develop analytical models for lamina response. The compliance model is based on a procedure proposed by Findley which incorporates the power law for creep into a nonlinear constitutive relationship. The matrix octahedral shear stress is assumed to control the stress interaction effect. A modified superposition principle is used to account for the effect of varying stress level on the creep strain. The lamina failure model is based on a modification of the Tsai-Hill theory which includes the time-dependent creep rupture strength. A linear cumulative damage law is used to monitor the remaining lifetime in each ply.
Toward a dynamical theory of body movement in musical performance
Demos, Alexander P.; Chaffin, Roger; Kant, Vivek
2014-01-01
Musicians sway expressively as they play in ways that seem clearly related to the music, but quantifying the relationship has been difficult. We suggest that a complex systems framework and its accompanying tools for analyzing non-linear dynamical systems can help identify the motor synergies involved. Synergies are temporary assemblies of parts that come together to accomplish specific goals. We assume that the goal of the performer is to convey musical structure and expression to the audience and to other performers. We provide examples of how dynamical systems tools, such as recurrence quantification analysis (RQA), can be used to examine performers' movements and relate them to the musical structure and to the musician's expressive intentions. We show how detrended fluctuation analysis (DFA) can be used to identify synergies and discover how they are affected by the performer's expressive intentions. PMID:24904490
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has recently attracted a great deal of attention in the computational biology community. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution to the haplotype inference problem (a particular solution of any linear system is an assignment of numerical values to the variables that satisfies the equations of the system), where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). (A general solution of any linear system is denoted by the span of a basis of the solution space of its associated homogeneous system, offset from the origin by any particular solution. A general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and to perform tasks such as random sampling.) The key ingredients of our construction are (i) a fast consistency-checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking Gaussian elimination. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
Is There a Relationship between Fish Cannibalism and Latitude or Species Richness?
Pereira, Larissa Strictar; Keppeler, Friedrich Wolfgang; Agostinho, Angelo Antonio; Winemiller, Kirk O
2017-01-01
Cannibalism has been commonly observed in fish from northern and alpine regions and less frequently reported for subtropical and tropical fish in more diverse communities. All else being equal, cannibalism should be more common in communities with lower species richness because the probability of encountering conspecific versus heterospecific prey would be higher. A global dataset was compiled to determine if cannibalism occurrence is associated with species richness and latitude. Cannibalism occurrence, local species richness and latitude were recorded for 4,100 populations of 2,314 teleost fish species. Relationships between cannibalism, species richness and latitude were evaluated using generalized linear mixed models. Species richness was an important predictor of cannibalism, with occurrences more frequently reported for assemblages containing fewer species. Cannibalism was positively related with latitude for both marine and freshwater ecosystems in the Northern Hemisphere, but not in the Southern Hemisphere. The regression slope for the relationship was steeper for freshwater than marine fishes. In general, cannibalism is more frequent in communities with lower species richness, and the relationship between cannibalism and latitude is stronger in the Northern Hemisphere. In the Southern Hemisphere, weaker latitudinal gradients of fish species richness may account for the weak relationship between cannibalism and latitude. Cannibalism may be more common in freshwater than marine systems because freshwater habitats tend to be smaller and more closed to dispersal. Cannibalism should have greatest potential to influence fish population dynamics in freshwater systems at high northern latitudes.
A comparison of lidar inversion methods for cirrus applications
NASA Technical Reports Server (NTRS)
Elouragini, Salem; Flamant, Pierre H.
1992-01-01
Several methods for inverting the lidar equation are suggested as means to derive the cirrus optical properties (beta, the backscatter coefficient; alpha, the extinction coefficient; and delta, the optical depth) at one wavelength. The lidar equation can be inverted in a linear or logarithmic form; either solution assumes a linear relationship beta = kappa·alpha, where kappa is the lidar ratio. A number of problems prevent us from calculating alpha (or beta) with good accuracy. Some of these are as follows: (1) the multiple scattering effect (most authors neglect it); (2) an absolute calibration of the lidar system (difficult and sometimes not possible); (3) lack of accuracy in the lidar ratio kappa (taken as constant, but in fact it varies with range and cloud species); and (4) the determination of the boundary condition for the logarithmic solution, which depends on the signal-to-noise ratio (SNR) at cloud top. An inversion in linear form needs an absolute calibration of the system. In practice one uses molecular backscattering below the cloud to calibrate the system. This calibration is not always valid because the turbidity of the lower atmosphere is variable. For the logarithmic solution, a reference extinction coefficient (alpha_f) at cloud top is required. Several methods to determine alpha_f have been suggested. We tested these methods at low SNR. This led us to propose two new methods, referenced as S1 and S2.
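For concreteness, here is a minimal numerical sketch of a logarithmic (Klett-type) backward solution under the assumption beta = kappa·alpha with constant kappa; the extinction profile, range grid, and far-end boundary value alpha_f are all invented, and a real retrieval would still face the noise and calibration issues listed above.

    # Hedged sketch of a Klett-type backward inversion; synthetic profile.
    import numpy as np

    r = np.linspace(6000.0, 9000.0, 300)                 # range gates within the cirrus layer, m
    alpha_true = 1e-4 * (1 + np.sin((r - 6000) / 500))   # hypothetical extinction profile, 1/m

    # Forward-model the range-corrected log signal S(r) = ln(r^2 P(r)) up to a constant.
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (alpha_true[1:] + alpha_true[:-1]) * np.diff(r))))
    S = np.log(alpha_true) - 2.0 * tau                   # kappa absorbed into the additive constant

    alpha_f = alpha_true[-1]                             # reference extinction at cloud top
    expS = np.exp(S - S[-1])

    # Backward integral I(r) = integral from r to r_f of expS dr' (trapezoid, far end inward).
    seg = 0.5 * (expS[1:] + expS[:-1]) * np.diff(r)
    I = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))

    alpha_ret = expS / (1.0 / alpha_f + 2.0 * I)         # Klett backward solution
    print("max relative error:", np.max(np.abs(alpha_ret - alpha_true) / alpha_true))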
Helbich, Marco; Klein, Nadja; Roberts, Hannah; Hagedoorn, Paulien; Groenewegen, Peter P
2018-06-20
Exposure to green space seems to be beneficial for self-reported mental health. In this study we used an objective health indicator, namely antidepressant prescription rates. Current studies rely exclusively upon mean regression models assuming linear associations. It is, however, plausible that the presence of green space is non-linearly related to different quantiles of antidepressant prescription rates. These restrictions may contribute to inconsistent findings. Our aims were: a) to assess antidepressant prescription rates in relation to green space, and b) to analyze how the relationship varies non-linearly across different quantiles of antidepressant prescription rates. We used cross-sectional data for the year 2014 at the municipality level in the Netherlands. Ecological Bayesian geoadditive quantile regressions were fitted for the 15%, 50%, and 85% quantiles to estimate green space-prescription rate correlations, controlling for physical activity levels, socio-demographics, urbanicity, and other covariates. The results suggested that green space was overall inversely and non-linearly associated with antidepressant prescription rates. More importantly, the associations differed across the quantiles, although the variation was modest. Significant non-linearities were apparent: the associations were slightly positive in the lower quantile and strongly negative in the upper one. Our findings imply that an increased availability of green space within a municipality may contribute to a reduction in the number of antidepressant prescriptions dispensed. Green space is thus a central health and community asset, although a minimum green space share of 28% of the municipal surface appears necessary for health gains. The highest effectiveness occurred at a municipal surface percentage higher than 79%. This inverse dose-dependent relation has important implications for setting future community-level health and planning policies. Copyright © 2018 Elsevier Inc. All rights reserved.
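The study's Bayesian geoadditive quantile regressions are more elaborate than anything shown here, but the basic idea of slopes that differ by quantile can be sketched with an off-the-shelf linear quantile regression; the municipality-level data below are synthetic.

    # Hedged sketch of quantile-specific slopes; synthetic data, linear form.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    green = rng.uniform(5, 90, 400)                    # % green space per municipality (hypothetical)
    rates = 60 - 0.3 * green + rng.normal(0, 8, 400)   # prescriptions per 1000 (hypothetical)

    X = sm.add_constant(green)
    for q in (0.15, 0.50, 0.85):
        res = sm.QuantReg(rates, X).fit(q=q)
        print(f"quantile {q:.2f}: slope = {res.params[1]:.3f}")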
Diaz, Francisco J.; McDonald, Peter R.; Pinter, Abraham; Chaguturu, Rathnam
2018-01-01
Biomolecular screening research frequently searches for the chemical compounds that are most likely to make a biochemical or cell-based assay system produce a strong continuous response. Several doses are tested with each compound and it is assumed that, if there is a dose-response relationship, the relationship follows a monotonic curve, usually a version of the median-effect equation. However, the null hypothesis of no relationship cannot be statistically tested using this equation. We used a linearized version of this equation to define a measure of pharmacological effect size, and use this measure to rank the investigated compounds in order of their overall capability to produce strong responses. The null hypothesis that none of the examined doses of a particular compound produced a strong response can be tested with this approach. The proposed approach is based on a new statistical model of the important concept of response detection limit, a concept that is usually neglected in the analysis of dose-response data with continuous responses. The methodology is illustrated with data from a study searching for compounds that neutralize the infection by a human immunodeficiency virus of brain glioblastoma cells. PMID:24905187
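The abstract's linearization can be made concrete with the standard median-effect form fa/(1 - fa) = (D/Dm)^m, which becomes linear in log-log coordinates; the doses and parameters below are invented for illustration.

    # Hedged sketch of the median-effect linearization, fitted by least squares.
    import numpy as np

    doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # hypothetical doses
    m_true, dm_true = 1.2, 1.5                     # hypothetical slope and median-effect dose
    fa = (doses / dm_true) ** m_true / (1 + (doses / dm_true) ** m_true)

    y = np.log(fa / (1 - fa))                      # log(fa/fu) = m*log(D) - m*log(Dm)
    x = np.log(doses)
    m_hat, b_hat = np.polyfit(x, y, 1)
    dm_hat = np.exp(-b_hat / m_hat)
    print(f"m = {m_hat:.2f}, Dm = {dm_hat:.2f}")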
2014-01-01
Background Protein sites evolve at different rates due to functional and biophysical constraints. It is usually considered that the main structural determinant of a site’s rate of evolution is its Relative Solvent Accessibility (RSA). However, a recent comparative study has shown that the main structural determinant is the site’s Local Packing Density (LPD). LPD is related with dynamical flexibility, which has also been shown to correlate with sequence variability. Our purpose is to investigate the mechanism that connects a site’s LPD with its rate of evolution. Results We consider two models: an empirical Flexibility Model and a mechanistic Stress Model. The Flexibility Model postulates a linear increase of site-specific rate of evolution with dynamical flexibility. The Stress Model, introduced here, models mutations as random perturbations of the protein’s potential energy landscape, for which we use simple Elastic Network Models (ENMs). To account for natural selection we assume a single active conformation and use basic statistical physics to derive a linear relationship between site-specific evolutionary rates and the local stress of the mutant’s active conformation. We compare both models on a large and diverse dataset of enzymes. In a protein-by-protein study we found that the Stress Model outperforms the Flexibility Model for most proteins. Pooling all proteins together we show that the Stress Model is strongly supported by the total weight of evidence. Moreover, it accounts for the observed nonlinear dependence of sequence variability on flexibility. Finally, when mutational stress is controlled for, there is very little remaining correlation between sequence variability and dynamical flexibility. Conclusions We developed a mechanistic Stress Model of evolution according to which the rate of evolution of a site is predicted to depend linearly on the local mutational stress of the active conformation. Such local stress is proportional to LPD, so that this model explains the relationship between LPD and evolutionary rate. Moreover, the model also accounts for the nonlinear dependence between evolutionary rate and dynamical flexibility. PMID:24716445
Lee, Juwon; Sohn, Bo Kyung; Lee, Hyunjoo; Seong, Sujeong; Park, Soowon; Lee, Jun-Young
2017-01-01
One caregiver relationship that has been neglected in caregiver depression research is the daughter-in-law. Compared with Western countries, in which those who are closer in familial relationships such as the spouse or child usually take care of the patient, in many Asian countries, the daughter-in-law often assumes the caretaker role. However, not much research has been done on how this relationship may result in different caregiver outcomes. We sought to identify whether the association between patient characteristics and caregiver depressive symptoms differs according to the familial relationship between caregiver and patient. Ninety-five daughter (n = 47) and daughter-in-law (n = 48) caregivers of dementia patients were asked to report their own depressive symptoms and patient behavioral symptoms. Patients' cognitive abilities, daily activities, and global dementia ratings were obtained. Hierarchical linear regression was employed to determine predictors of depressive symptoms. Daughters-in-law had marginally higher depressive scores. After adjusting for caregiver and patient characteristics, in both groups, greater dependency in activities of daily living and more severe and frequent behavioral symptoms predicted higher caregiver depressive scores. However, greater severity and frequency of behavioral symptoms predicted depression to a greater degree in daughters compared with daughters-in-law. Although behavioral symptoms predicted depression in both caregiver groups, the association was much stronger for daughters. This suggests that the emotional relationship between the daughter and patient exacerbates the negative effect of behavioral symptoms on caregiver depression. The familial relationship between the caregiver and dementia patient should be considered in managing caregiver stress.
Matsuda, Hideaki; Hirata, Noriko; Kawaguchi, Yoshiko; Yamazaki, Miho; Naruto, Shunsuke; Shibano, Makio; Taniguchi, Masahiko; Baba, Kimiye; Kubo, Michinori
2005-07-01
Melanogenesis stimulation activities of seven ethanolic extracts obtained from Umbelliferae plants used as Chinese crude drugs, namely the roots of Angelica dahurica BENTH. et HOOK., A. biserrata SHEN et YUAN, Notopterygium incisum TING, Heracleum lanatum MICHX., and H. candicans WALL., and the fruits of Cnidium monnieri (L.) CUSSON and C. formosanum YABE, were examined using cultured murine B16 melanoma cells. Among them, the extract (5, 25 microg/ml) of H. lanatum showed a potent stimulatory effect on melanogenesis, with significant enhancement of cell proliferation in a dose-dependent manner. The melanogenesis stimulatory effects of sixteen coumarins (1-16) isolated from the seven Umbelliferae crude drugs were also examined. Among them, the linear furocoumarins [psoralen (1), xanthotoxin (2), bergapten (3), and isopimpinellin (4)] and the angular furocoumarin [sphondin (13)] exhibited potent melanogenesis stimulation activity. From the viewpoint of structure-activity relationships, it may be assumed that a linear furocoumarin ring having a hydrogen and/or methoxyl group at the 5- and 8-positions, such as in 1, 2, 3 and 4, is preferable for melanogenesis stimulation activity. The introduction of a prenyl group into the furocoumarin ring was disadvantageous. Coumarin derivatives having a simple coumarin ring were inactive.
Anisotropy of susceptibility in rocks which are magnetically nonlinear even in low fields
NASA Astrophysics Data System (ADS)
Hrouda, František; Chadima, Martin; Ježek, Josef
2018-06-01
Theory of the low-field anisotropy of magnetic susceptibility (AMS) assumes a linear relationship between magnetization and magnetizing field, resulting in field-independent susceptibility. This is valid for diamagnetic and paramagnetic minerals by definition, and also for pure magnetite, while in titanomagnetite, pyrrhotite and hematite the susceptibility may be clearly field-dependent even in the low fields used in common AMS meters. Consequently, the use of the linear AMS theory is fully legitimate for the former minerals, but in principle incorrect for the latter ones. Automated measurement of susceptibility in 320 directions in variable low fields ranging from 5 to 700 A m⁻¹ was applied to more than 100 specimens of various pyrrhotite-bearing and titanomagnetite-bearing rocks. Data analysis showed that the anisotropic susceptibility remains well represented by an ellipsoid over the entire low-field span, even though the ellipsoid increases its volume and eccentricity. The principal directions do not change their orientations with the low field in most specimens. Expressions for susceptibility as a function of field were found in the form of a diagonal tensor whose elements are polynomials of low order. In a large proportion of samples, the susceptibility expressions can be further simplified to have one common skeleton polynomial.
1/f-Noise of open bacterial porin channels.
Wohnsland, F; Benz, R
1997-07-01
General diffusion pores and specific porin channels from outer membranes of gram-negative bacteria were reconstituted into lipid bilayer membranes. The current noise of the channels was investigated for the different porins in the open state and in the ligand-induced closed state using fast Fourier transformation. The open channel noise exhibited 1/f-noise for frequencies up to 200 Hz. The 1/f-noise was investigated using the Hooge formula (Hooge, Phys. Lett. 29A:139-140 (1969)), and the Hooge parameter alpha was calculated for all bacterial porins used in this study. The 1/f-noise was in part caused by slow inactivation and activation of porin channels. However, when care was taken that no opening or closing of porin channels occurred during the noise measurement, the Hooge parameter alpha was a meaningful number for a given channel. A linear relationship was observed between alpha and the single-channel conductance, g, of the different porins. This linear relation between single-channel conductance and the Hooge parameter alpha could be qualitatively explained by assuming that the passage of an ion through a bacterial porin channel is, to a certain extent, influenced by nonlinear effects between the channel wall and the passing ion.
NASA Astrophysics Data System (ADS)
Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.
2017-11-01
We consider the methodology of numerical scheme development for the two-dimensional vortex method. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. We simulate the velocity on the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise-constant or piecewise-linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have different computational cost. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of vortex sheet intensity computation and improve the quality of the velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written down in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
The Role of Linear Order in the Acquisition of Quantifier Scope in Chinese.
ERIC Educational Resources Information Center
Lee, Thomas Hun-tak
1989-01-01
Assuming the relevance of the linear precedence to the scope interpretation of adult Mandarin, this study investigated the development of this principle in Mandarin-speaking children, with a view to providing a basis for further study of parametric variation. Three kinds of sentences were examined, all of which contained mutually commanding…
Cancer risk assessments for inorganic arsenic have been based on human epidemiological data, assuming a linear dose-response below the range of observation of tumors. Part of the reason for the continued use of the linear approach in arsenic risk assessments is the lack of an ad...
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
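A minimal sketch of such a model, with a random intercept for each stand, using a generic mixed-model routine; the data, grouping structure, and height-diameter form below are hypothetical and not taken from the paper.

    # Hedged sketch: height-diameter model with a random stand intercept.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    stands = np.repeat(np.arange(30), 20)               # 30 stands, 20 trees each (hypothetical)
    dbh = rng.uniform(10, 80, stands.size)              # diameter at breast height, cm
    stand_effect = rng.normal(0, 2.0, 30)[stands]       # stand-level random intercepts
    height = 5 + 8 * np.log(dbh) + stand_effect + rng.normal(0, 1.5, stands.size)

    data = pd.DataFrame({"height": height, "dbh": dbh, "stand": stands})
    model = smf.mixedlm("height ~ np.log(dbh)", data, groups=data["stand"]).fit()
    print(model.summary())

Ignoring the grouping (an ordinary regression) would understate the uncertainty of the fixed-effect coefficients, which is the point the abstract makes.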
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
The effect of a realistic thermal diffusivity on numerical model of a subducting slab
NASA Astrophysics Data System (ADS)
Maierova, P.; Steinle-Neumann, G.; Cadek, O.
2010-12-01
A number of numerical studies of subducting slabs assume simplified (constant or only depth-dependent) models of thermal conductivity. The available mineral physics data indicate, however, that thermal diffusivity is strongly temperature- and pressure-dependent and may also vary among different mantle materials. In the present study, we examine the influence of realistic thermal properties of mantle materials on the thermal state of the upper mantle and the dynamics of subducting slabs. On the basis of data published in the mineral physics literature we compile analytical relationships that approximate the pressure and temperature dependence of thermal diffusivity for major mineral phases of the mantle (olivine, wadsleyite, ringwoodite, garnet, clinopyroxenes, stishovite and perovskite). We propose a simplified composition of the mineral assemblages predominating in the subducting slab and the surrounding mantle (pyrolite, mid-ocean ridge basalt, harzburgite) and estimate their thermal diffusivity using the Hashin-Shtrikman bounds. The resulting complex formula for the diffusivity of each aggregate is then approximated by a simpler analytical relationship that is used as an input parameter in our numerical model. For the numerical modeling we use the Elmer software (open-source finite element software for multiphysical problems, see http://www.csc.fi/english/pages/elmer). We set up a 2D Cartesian thermo-mechanical steady-state model of a subducting slab. The model is partly kinematic, as the flow is driven by a velocity boundary condition prescribed on the top of the subducting lithospheric plate. The rheology of the material is non-linear and is coupled with the thermal equation. Using the realistic relationship for the thermal diffusivity of mantle materials, we compute the thermal and flow fields for different input velocities and ages of the subducting plate, and we compare the results against models assuming a constant thermal diffusivity. The importance of a realistic description of thermal properties in models of subducted slabs is discussed.
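The Hashin-Shtrikman step can be sketched with the textbook two-phase bounds for an isotropic aggregate; the function below uses the standard conductivity/diffusivity form, and the end-member values are illustrative, not the study's mineral data.

    # Hedged sketch of the two-phase Hashin-Shtrikman bounds (isotropic case).
    def hashin_shtrikman(k1, k2, f1):
        """Return (lower, upper) HS bounds; requires k1 <= k2 and f1 + f2 = 1."""
        f2 = 1.0 - f1
        lower = k1 + f2 / (1.0 / (k2 - k1) + f1 / (3.0 * k1))
        upper = k2 + f1 / (1.0 / (k1 - k2) + f2 / (3.0 * k2))
        return lower, upper

    # e.g. 60% of a phase with diffusivity 1.0 mm^2/s and 40% with 2.5 mm^2/s
    low, high = hashin_shtrikman(1.0, 2.5, 0.6)
    print(f"HS bounds: {low:.3f} - {high:.3f} mm^2/s")

A common practical choice, which may be what is meant here, is to take the average of the two bounds as the aggregate estimate.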
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a prolific method for studying the relative importance of predictor variables, and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
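For a second-order equation the method reduces to a short computation. The notation below is chosen here for illustration and follows the standard presentation rather than quoting the paper:

    % Sketch: impulsive response via factorization (illustrative notation)
    \[
    y'' + a\,y' + b\,y = f(t), \qquad
    \lambda^2 + a\lambda + b = (\lambda - \lambda_1)(\lambda - \lambda_2),
    \]
    \[
    g(t) = \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2}
    \quad (\lambda_1 \neq \lambda_2), \qquad
    y_p(t) = \int_0^t g(t - s)\, f(s)\, ds,
    \]

where g is the impulsive response, i.e. the homogeneous solution with g(0) = 0 and g'(0) = 1, obtained by factoring the operator; the general solution adds the usual homogeneous terms.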
NASA Astrophysics Data System (ADS)
Różyło, Patryk; Debski, Hubert; Kral, Jan
2018-01-01
The subject of the research was a short thin-walled composite profile with a top-hat cross-section. The tested structure was subjected to axial compression. As part of the critical-state research, the critical load and the corresponding buckling mode were determined. Later in the study, laminate damage areas were determined through numerical analysis. It was assumed that the profile is simply supported at the ends of the cross-section. Experimental tests were carried out on a universal Zwick Z100 testing machine, and the results were compared with the results of numerical calculations. The eigenvalue problem and the non-linear stability problem of thin-walled structures were solved using the commercial software ABAQUS®. In the presented cases, it was assumed that the material is linear-elastic and that the non-linearity of the model results from large displacements. The geometrically non-linear problem was solved using the incremental-iterative Newton-Raphson method.
Alcohol outlet density and assault: a spatial analysis.
Livingston, Michael
2008-04-01
A large number of studies have found links between alcohol outlet densities and assault rates in local areas. This study tests a variety of specifications of this link, focusing in particular on the possibility of a non-linear relationship. Cross-sectional data on police-recorded assaults during high alcohol hours, liquor outlets and socio-demographic characteristics were obtained for 223 postcodes in Melbourne, Australia. These data were used to construct a series of models testing the nature of the relationship between alcohol outlet density and assault, while controlling for socio-demographic factors and spatial auto-correlation. Four types of relationship were examined: a normal linear relationship between outlet density and assault, a non-linear relationship with potential threshold or saturation densities, a relationship mediated by the socio-economic status of the neighbourhood and a relationship which takes into account the effect of outlets in surrounding neighbourhoods. The model positing non-linear relationships between outlet density and assaults was found to fit the data most effectively. An increasing accelerating effect for the density of hotel (pub) licences was found, suggesting a plausible upper limit for these licences in Melbourne postcodes. The study finds positive relationships between outlet density and assault rates and provides evidence that this relationship is non-linear and thus has critical values at which licensing policy-makers can impose density limits.
Backhaus, Ramona; Verbeek, Hilde; van Rossum, Erik; Capezuti, Elizabeth; Hamers, Jan P H
2014-06-01
The relationship between nurse staffing and quality of care (QoC) in nursing homes continues to receive major attention. The evidence supporting this relationship, however, is weak because most studies employ a cross-sectional design. This review summarizes the findings from recent longitudinal studies. In April 2013, the databases PubMed, CINAHL, EMBASE, and PsycINFO were systematically searched. Studies were eligible if they (1) examined the relationship between nurse staffing and QoC outcomes, (2) included only nursing home data, (3) were original research articles describing quantitative, longitudinal studies, and (4) were written in English, Dutch, or German. The methodological quality of 20 studies was assessed using the Newcastle-Ottawa scale, excluding 2 low-quality articles from the analysis. No consistent relationship was found between nurse staffing and QoC. Higher staffing levels were associated with better as well as worse QoC indicators. For example, for restraint use both positive (ie, less restraint use) and negative outcomes (ie, more restraint use) were found. With regard to pressure ulcers, we found that more staff led to fewer pressure ulcers and, therefore, better results, no matter who (registered nurse, licensed practical nurse/licensed vocational nurse, or nurse assistant) delivered care. No consistent evidence was found for a positive relationship between staffing and QoC. Although some positive indications were suggested, major methodological and theoretical weaknesses (eg, timing of data collection, assumed linear relationship between staffing and QoC) limit interpretation of results. Our findings demonstrate the necessity for well-designed longitudinal studies to gain a better insight into the relationship between nurse staffing and QoC in nursing homes. Copyright © 2014 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gholizadeh, H.; Gamon, J. A.; Zygielbaum, A. I.; Schweiger, A. K.; Cavender-Bares, J.; Yang, Y.; Knops, J. M. H.
2017-12-01
Grasslands cover as much as 25% of the Earth's surface, account for approximately 20% of overall terrestrial productivity, and contribute to global biodiversity. To optimize the status of grasslands and to counteract their degradation, different management practices have been adopted. Fire has been shown to be an important management practice in the maintenance of grasslands. Our main goals were 1) to evaluate the productivity-biodiversity relationship in grasslands under fire treatment, and 2) to evaluate the capability of hyperspectral remote sensing to estimate biodiversity from spectral data (i.e. spectral diversity). We used above-ground biomass (as a surrogate for productivity), species richness (SR; as a surrogate for biodiversity), and airborne hyperspectral data from a natural grassland with fire treatment (20 plots) and a natural grassland without fire treatment (21 plots), all located at the Cedar Creek Ecosystem Science Reserve in central Minnesota, USA. The productivity-biodiversity relationship for the fire treatment plots followed a hump-shaped model with adjusted R² = 0.37, whereas the relationship for the non-burned plots was non-significant. The relationship between SR and spectral diversity (SD) was positive and linear for both treatments; however, the relationship for plots with fire treatment was stronger (adjusted R² = 0.34 vs. 0.19). It is assumed that post-fire foliar nutrients increase soil nitrogen and phosphorus, which facilitate post-fire growth and induce higher above-ground biomass and chlorophyll content in plants. Overall, the results of this study showed that management practices affect the productivity-biodiversity relationship and illustrated the effect of fire treatment on remote sensing of biodiversity.
Search for Linear Polarization of the Cosmic Background Radiation
DOE R&D Accomplishments Database
Lubin, P. M.; Smoot, G. F.
1978-10-01
We present preliminary measurements of the linear polarization of the cosmic microwave background (3 deg K blackbody) radiation. These ground-based measurements are made at 9 mm wavelength. We find no evidence for linear polarization, and set an upper limit for a polarized component of 0.8 mdeg K with a 95% confidence level. This implies that the present rate of expansion of the Universe is isotropic to one part in 10⁶, assuming no re-ionization of the primordial plasma after recombination.
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.
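A minimal sketch of the idea described above: a one-compartment linear model whose state jumps at prescribed dosing times, with the rate constant identified by a least-squares fit; the dosing schedule, parameter values, and data are invented and do not reproduce the program itself.

    # Hedged sketch: least-squares identification of a linear compartmental
    # model with jump conditions at dosing times; synthetic data.
    import numpy as np
    from scipy.optimize import least_squares

    dose_times = np.array([0.0, 8.0, 16.0])   # hypothetical dosing schedule, h
    dose_amount = 100.0                        # concentration jump per dose (amount/volume)

    def concentration(t, k):
        """Superpose the exponentially decaying responses of all past doses."""
        t = np.atleast_1d(t)
        c = np.zeros_like(t)
        for td in dose_times:
            c += np.where(t >= td, dose_amount * np.exp(-k * (t - td)), 0.0)
        return c

    rng = np.random.default_rng(4)
    t_obs = np.linspace(0.5, 24.0, 20)
    y_obs = concentration(t_obs, 0.25) * (1 + rng.normal(0, 0.05, t_obs.size))

    fit = least_squares(lambda k: concentration(t_obs, k[0]) - y_obs, x0=[0.1])
    print(f"estimated elimination rate k = {fit.x[0]:.3f} per hour")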
A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems
1980-03-01
decision maker selects to have on hand. The newsboy cost equation may be formulated as a two-piece continuous linear function in the following manner. C(S...number of observations, some approximations may be possible. Three points which are near each other can be assumed to be linear and some estimator using...respectively. Define the value r as: r = [nq + 0.5] , (6) where [X] denotes the largest integer of X. Let us consider an estimate of X as the linear
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckwith, Andrew Walcott, E-mail: Rwill9955b@gmail.com
We review a relationship between cosmological vacuum energy and massive gravitons as given by Garattini, and also the nonlinear electrodynamics (NLED) of Camara et al. (2004) for a non-singular universe. In evaluating the Garattini result, we find that having the scale factor close to zero, due to a given magnetic field value in an early universe, affects how we would interpret Garattini's linkage of the 'cosmological constant' value and a non-zero graviton mass. We close with how these initial conditions affect the issue of an early-universe initial pressure and its experimental similarities to and differences from results by Corda and Cuesta on negative pressure at the surface of a star. Note that in the Dupays et al. article the star in question is rapidly spinning, which is not assumed in the Camara et al. article for an early universe. Also, Corda and Cuesta do not assume a spinning star. We conclude with a comparison between the Lagrangian that Dupays and other authors bring up for nonlinear electrodynamics, which is for rapidly spinning neutron stars, and a linkage between the Goldstone theorem and NLED. Our conclusion aims at generalizing results seen in the Dupays neutron star Lagrangian with conditions which may confirm C. A. Escobar and L. F. Urrutia's work on the Goldstone theorem and nonlinear electrodynamics, for some future projects we have in mind. If the universe does not spin, then we stick with the density analogy given by taking density as proportional to one over the fourth power of the minimum value of the scale factor, as computed by adaptation of the Camara et al. (2004) theory for non-spinning universes. What may happen is that the Camara (2004) density and the quintessential density are both simultaneously satisfied, which would put additional restrictions on the magnetic field; this is one of our considerations regardless of whether a universe spins, akin to spinning neutron stars. A spinning universe, though, may allow for easier reconciliation of the 'Goldstone' behavior of gravity and NLED.
Funane, Tsukasa; Atsumori, Hirokazu; Katura, Takusige; Obata, Akiko N; Sato, Hiroki; Tanikawa, Yukari; Okada, Eiji; Kiguchi, Masashi
2014-01-15
To quantify the effect of absorption changes in the deep tissue (cerebral) and shallow tissue (scalp, skin) layers on functional near-infrared spectroscopy (fNIRS) signals, a method using multi-distance (MD) optodes and independent component analysis (ICA), referred to as the MD-ICA method, is proposed. In previous studies, when the signal from the shallow tissue layer (shallow signal) needs to be eliminated, it was often assumed that the shallow signal had no correlation with the signal from the deep tissue layer (deep signal). In this study, no relationship between the waveforms of deep and shallow signals is assumed, and instead, it is assumed that both signals are linear combinations of multiple signal sources, which allows the inclusion of a "shared component" (such as systemic signals) that is contained in both layers. The method also assumes that the partial optical path length of the shallow layer does not change, whereas that of the deep layer linearly increases along with the increase of the source-detector (S-D) distance. Deep- and shallow-layer contribution ratios of each independent component (IC) are calculated using the dependence of the weight of each IC on the S-D distance. Reconstruction of deep- and shallow-layer signals are performed by the sum of ICs weighted by the deep and shallow contribution ratio. Experimental validation of the principle of this technique was conducted using a dynamic phantom with two absorbing layers. Results showed that our method is effective for evaluating deep-layer contributions even if there are high correlations between deep and shallow signals. Next, we applied the method to fNIRS signals obtained on a human head with 5-, 15-, and 30-mm S-D distances during a verbal fluency task, a verbal working memory task (prefrontal area), a finger tapping task (motor area), and a tetrametric visual checker-board task (occipital area) and then estimated the deep-layer contribution ratio. To evaluate the signal separation performance of our method, we used the correlation coefficients of a laser-Doppler flowmetry (LDF) signal and a nearest 5-mm S-D distance channel signal with the shallow signal. We demonstrated that the shallow signals have a higher temporal correlation with the LDF signals and with the 5-mm S-D distance channel than the deep signals. These results show the MD-ICA method can discriminate between deep and shallow signals. Copyright © 2013 Elsevier Inc. All rights reserved.
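A schematic of the MD-ICA idea, not the authors' exact algorithm: decompose multi-distance channels with ICA, then label each component deep or shallow according to how its mixing weight grows with source-detector distance. The forward model, signals, and classification threshold below are assumptions made for illustration.

    # Hedged sketch of the MD-ICA separation principle; synthetic channels.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)
    t = np.linspace(0, 60, 600)
    deep = np.sin(2 * np.pi * t / 20)            # task-like deep signal (hypothetical)
    shallow = np.sin(2 * np.pi * t / 7 + 1.0)    # systemic shallow signal (hypothetical)

    distances = np.array([5.0, 15.0, 30.0])      # S-D distances, mm
    # Assumed forward model: deep path length grows with distance, shallow is constant.
    X = np.outer(distances / 30.0, deep).T + np.outer(np.ones(3), shallow).T
    X += rng.normal(0, 0.02, X.shape)

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(X)               # shape (time, components)
    for i, col in enumerate(ica.mixing_.T):      # channel weights of component i
        slope = np.polyfit(distances, np.abs(col), 1)[0]
        label = "deep" if slope > 1e-3 else "shallow"   # arbitrary illustrative threshold
        print(f"component {i}: weight-vs-distance slope = {slope:.4f} -> {label}")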
Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals
ERIC Educational Resources Information Center
Kara, Yusuf; Kamata, Akihito
2017-01-01
A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…
How much would five trillion tonnes of carbon warm the climate?
NASA Astrophysics Data System (ADS)
Tokarska, Katarzyna Kasia; Gillett, Nathan P.; Weaver, Andrew J.; Arora, Vivek K.
2016-04-01
While estimates of fossil fuel reserves and resources are very uncertain, and the amount which could ultimately be burnt under a business-as-usual scenario would depend on prevailing economic and technological conditions, an amount of five trillion tonnes of carbon (5 EgC), corresponding to the lower end of the range of estimates of the total fossil fuel resource, is often cited as an estimate of total cumulative emissions in the absence of mitigation actions. The IPCC Fifth Assessment Report indicates that an approximately linear relationship between warming and cumulative carbon emissions holds only up to around 2 EgC of emissions. It is typically assumed that at higher cumulative emissions the warming would tend to be less than predicted by such a linear relationship, with the radiative saturation effect dominating the effects of positive carbon-climate feedbacks at high emissions, as predicted by simple carbon-climate models. We analyze simulations from four state-of-the-art Earth System Models (ESMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) and seven Earth System Models of Intermediate Complexity (EMICs), driven by the Representative Concentration Pathway 8.5 Extension scenario (RCP 8.5 Ext), which represents a very high emission scenario of increasing greenhouse gas concentrations in the absence of climate mitigation policies. Our results demonstrate that while terrestrial and ocean carbon storage varies between the models, the CO2-induced warming continues to increase approximately linearly with cumulative carbon emissions even at higher levels of cumulative emissions, in all four ESMs. Five of the seven EMICs considered simulate a similarly linear response, while two exhibit less warming at higher cumulative emissions for reasons we discuss. The ESMs simulate global mean warming of 6.6-11.0 °C, mean Arctic warming of 15.3-19.7 °C, and mean regional precipitation increases and decreases by more than a factor of four, in response to 5 EgC, with smaller forcing contributions from other greenhouse gases. These results indicate that unregulated exploitation of the fossil fuel resource would ultimately result in considerably more profound climate changes than previously suggested.
A new method for wind speed forecasting based on copula theory.
Wang, Yuankun; Ma, Huiqun; Wang, Dong; Wang, Guizuo; Wu, Jichun; Bian, Jinyu; Liu, Jiufu
2018-01-01
How to determine a representative wind speed is crucial in wind resource assessment, and accurate wind resource assessments are important to wind farm development. Linear regressions are usually used to obtain the representative wind speed. However, complex terrain at wind farms and long distances between wind speed measurement sites often lead to low correlation. In this study, a copula method is used to determine the representative year's wind speed at a wind farm by interpreting the interaction between the local wind farm and the meteorological station. The results show that the method proposed here can not only determine the relationship between the local anemometric tower and the nearby meteorological station through Kendall's tau, but also determine the joint distribution without assuming the variables to be independent. Moreover, representative wind data can be obtained much more reasonably from the conditional distribution. We hope this study can provide a scientific reference for accurate wind resource assessments. Copyright © 2017 Elsevier Inc. All rights reserved.
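A sketch of the copula step, under the assumption of a Gaussian copula (the paper does not necessarily use this family): estimate Kendall's tau, convert it to a copula parameter, and sample the mast speed conditionally on the station speed. The marginals and data below are synthetic placeholders.

    # Hedged sketch: Gaussian-copula conditional simulation from Kendall's tau.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    station = rng.weibull(2.0, 2000) * 7.0
    mast = 0.8 * station + rng.normal(0, 1.0, 2000)      # dependent mast record (hypothetical)

    tau, _ = stats.kendalltau(station, mast)
    rho = np.sin(np.pi * tau / 2.0)                      # Gaussian-copula parameter from tau

    u1 = stats.rankdata(station) / (len(station) + 1.0)  # pseudo-observations of the station
    z1 = stats.norm.ppf(u1)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(z1.size)
    u2 = stats.norm.cdf(z2)                              # conditional mast quantiles
    mast_sim = np.quantile(mast, u2)                     # map back through the empirical marginal
    print(f"tau = {tau:.2f}, rho = {rho:.2f}")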
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
NASA Astrophysics Data System (ADS)
Korjenic, Sinan; Nowak, Bernhard; Löffler, Philipp; Vašková, Anna
2015-11-01
This paper investigates the shear capacity of partition walls in old buildings, based on shear tests carried out under real conditions in an existing building. Experiments were conducted on different floors, and in each case the maximum recordable horizontal force and the horizontal displacement of the respective mortar joint were measured. In parallel, material investigations were carried out in the laboratory. The material parameters were used to calculate the shear capacity of each joint precisely. In the shear tests, the maximum displacement of a mortar joint was found to be two to four millimetres. Furthermore, contrary to what was previously assumed, the analysis of the experiment revealed no direct linear relationship between the theoretical load (from the wall above) and the shear stress that occurred.
Chemoviscosity modeling for thermosetting resins, 2
NASA Technical Reports Server (NTRS)
Hou, T. H.
1985-01-01
A new analytical model for simulating chemoviscosity of thermosetting resin was formulated. The model is developed by modifying the Williams-Landel-Ferry (WLF) theory in polymer rheology for thermoplastic materials. By assuming a linear relationship between the glass transition temperature and the degree of cure of the resin system under cure, the WLF theory can be modified to account for the factor of reaction time. Temperature dependent functions of the modified WLF theory constants were determined from the isothermal cure data of Lee, Loos, and Springer for the Hercules 3501-6 resin system. Theoretical predictions of the model for the resin under dynamic heating cure cycles were shown to compare favorably with the experimental data reported by Carpenter. A chemoviscosity model which is capable of not only describing viscosity profiles accurately under various cure cycles, but also correlating viscosity data to the changes of physical properties associated with the structural transformations of the thermosetting resin systems during cure was established.
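The modified WLF idea can be sketched directly: shift the glass transition temperature linearly with the degree of cure and evaluate the usual WLF form. All constants below are placeholders, not the fitted Hercules 3501-6 values.

    # Hedged sketch of a WLF-type chemoviscosity model; illustrative constants.
    C1, C2 = 17.4, 51.6          # "universal" WLF constants (placeholder values)
    Tg0, k = 280.0, 120.0        # Tg of the uncured resin (K) and cure sensitivity (hypothetical)
    eta_g = 1e12                 # viscosity at Tg, Pa*s (hypothetical)

    def viscosity(T, alpha):
        """Modified WLF: log10(eta/eta_g) = -C1*(T - Tg)/(C2 + T - Tg), with Tg = Tg0 + k*alpha."""
        Tg = Tg0 + k * alpha
        return eta_g * 10.0 ** (-C1 * (T - Tg) / (C2 + (T - Tg)))

    print(f"eta at 400 K, 30% cure: {viscosity(400.0, 0.3):.3e} Pa*s")

As cure advances, Tg rises toward the process temperature, so the predicted viscosity climbs, which is the qualitative behavior the model is meant to capture.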
Acoustic modes in fluid networks
NASA Technical Reports Server (NTRS)
Michalopoulos, C. D.; Clark, Robert W., Jr.; Doiron, Harold H.
1992-01-01
Pressure and flow rate eigenvalue problems for one-dimensional flow of a fluid in a network of pipes are derived from the familiar transmission line equations. These equations are linearized by assuming small velocity and pressure oscillations about mean flow conditions. It is shown that the flow rate eigenvalues are the same as the pressure eigenvalues and the relationship between line pressure modes and flow rate modes is established. A volume at the end of each branch is employed which allows any combination of boundary conditions, from open to closed, to be used. The Jacobi iterative method is used to compute undamped natural frequencies and associated pressure/flow modes. Several numerical examples are presented which include acoustic modes for the Helium Supply System of the Space Shuttle Orbiter Main Propulsion System. It should be noted that the method presented herein can be applied to any one-dimensional acoustic system involving an arbitrary number of branches.
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
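The saturation effect the authors describe can be seen in a few lines: an additive energy matrix scores a site, but occupancy follows a logistic (Boltzmann) curve in that energy, so binding probabilities compress toward 1 at high protein concentration. The matrix and chemical-potential values below are invented.

    # Hedged sketch: additive site energies vs. non-linear occupancy.
    import numpy as np

    # Hypothetical 4 x 3 energy matrix (rows A, C, G, T), arbitrary units.
    energy = np.array([[0.0, 1.2, 0.3],
                       [1.0, 0.0, 1.5],
                       [0.8, 2.0, 0.0],
                       [1.5, 0.9, 1.1]])
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}

    def site_energy(seq):
        return sum(energy[idx[b], j] for j, b in enumerate(seq))

    def occupancy(seq, mu):
        """Boltzmann binding probability; mu encodes protein concentration."""
        return 1.0 / (1.0 + np.exp(site_energy(seq) - mu))

    for mu in (-1.0, 2.0):       # low vs. high protein concentration
        print(f"mu={mu:+.0f}: ACG -> {occupancy('ACG', mu):.3f}, CAT -> {occupancy('CAT', mu):.3f}")

At the higher mu, the best site is nearly saturated while the weaker site's probability rises disproportionately, so the ratio of probabilities no longer tracks the ratio implied by an independent-position probability model.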
Tikekar superdense stars in electric fields
NASA Astrophysics Data System (ADS)
Komathiraj, K.; Maharaj, S. D.
2007-04-01
We present exact solutions to the Einstein-Maxwell system of equations with a specified form of the electric field intensity, by assuming that the hypersurfaces t = constant are spheroidal. The solution of the Einstein-Maxwell system is reduced to a recurrence relation with variable rational coefficients which can be solved in general using mathematical induction. New classes of solutions, built from linearly independent functions, are obtained by restricting the spheroidal parameter K and the electric field intensity parameter α. Consequently, it is possible to find exact solutions in terms of elementary functions, namely polynomials and algebraic functions. Our result contains models found previously, including the superdense Tikekar neutron star model [J. Math. Phys. 31, 2454 (1990)] when K = -7 and α = 0. Our class of charged spheroidal models generalizes the uncharged isotropic Maharaj and Leach solutions [J. Math. Phys. 37, 430 (1996)]. In particular, we find an explicit relationship directly relating the spheroidal parameter K to the electromagnetic field.
Spatial correlation of hydrometeor occurrence, reflectivity, and rain rate from CloudSat
NASA Astrophysics Data System (ADS)
Marchand, Roger
2012-03-01
This paper examines the along-track vertical and horizontal structure of hydrometeor occurrence, reflectivity, and column rain rate derived from CloudSat. The analysis assumes hydrometeor statistics in a given region are horizontally invariant, with the probability of hydrometeor co-occurrence obtained simply by determining the relative frequency at which hydrometeors are found at two points (which may be at different altitudes and offset by a horizontal distance, Δx). A correlation function is introduced (gamma correlation) that normalizes hydrometeor co-occurrence values to the range of 1 to -1, with a value of 0 meaning uncorrelated in the usual sense. This correlation function is a generalization of the alpha overlap parameter that has been used in recent studies to describe the overlap between cloud (or hydrometeor) layers. Examples of joint histograms of reflectivity at two points are also examined. The analysis shows that the traditional linear (or Pearson) correlation coefficient provides a useful one-to-one measure of the strength of the relationship between hydrometeor reflectivity at two points in the horizontal (that is, two points at the same altitude). While also potentially useful in the vertical direction, the relationship between reflectivity values at different altitudes is not as well described by the linear correlation coefficient. The decrease in correlation of hydrometeor occurrence and reflectivity with horizontal distance, as well as of precipitation occurrence and column rain rate, can be fit reasonably well with a simple two-parameter exponential model. In this paper, the North Pacific and tropical western Pacific are examined in detail, as is the zonal dependence.
Adhesion of Streptococcus sanguis CH3 to polymers with different surface free energies.
van Pelt, A W; Weerkamp, A H; Uyen, M H; Busscher, H J; de Jong, H P; Arends, J
1985-01-01
The adhesion of the oral bacterium Streptococcus sanguis CH3 to various polymeric surfaces with surface free energies (γs) ranging from 22 to 141 erg cm⁻² was investigated. Suspensions containing nine different bacterial concentrations (2.5 × 10⁷ to 2.5 × 10⁹ cells per ml) were used. After adhesion for 1 h at 21 °C and a standardized rinsing procedure, the number of attached bacteria per square centimeter (n_b) was determined by scanning electron microscopy. The highest number of bacteria was consistently found on polytetrafluorethylene (γs = 22 erg cm⁻²), and the lowest number was found on glass (γs = 141 erg cm⁻²) at all bacterial concentrations tested. The overall negative correlation between n_b and γs was weak. However, the slope of the line showing this decrease, calculated from an assumed linear relationship between n_b and γs, appeared to depend strongly on the bacterial concentration and increased with increasing numbers of bacteria in the suspension. Analysis of the data for each separate polymer showed that the numbers of attached cells on polyvinyl chloride and polypropylene were higher, but those on polycarbonate were lower, than would be expected on the basis of a linear relationship between n_b and γs. Desorption experiments were performed by first allowing the bacteria to attach to substrata for 1 h, after which the substrata and attached bacteria were removed to bacterial suspensions containing 10-fold lower bacterial concentrations. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:4004241
Glenn, Edward P; Huete, Alfredo R; Nagler, Pamela L; Nelson, Stephen G
2008-03-28
Vegetation indices (VIs) are among the oldest tools in remote sensing studies. Although many variations exist, most of them ratio the reflection of light in the red and NIR sections of the spectrum to separate the landscape into water, soil, and vegetation. Theoretical analyses and field studies have shown that VIs are near-linearly related to photosynthetically active radiation absorbed by a plant canopy, and therefore to light-dependent physiological processes, such as photosynthesis, occurring in the upper canopy. Practical studies have used time-series VIs to measure primary production and evapotranspiration, but these are limited in accuracy to that of the data used in ground truthing or calibrating the models used. VIs are also used to estimate a wide variety of other canopy attributes that are used in Soil-Vegetation-Atmosphere Transfer (SVAT), Surface Energy Balance (SEB), and Global Climate Models (GCM). These attributes include fractional vegetation cover, leaf area index, roughness lengths for turbulent transfer, emissivity and albedo. However, VIs often exhibit only moderate, non-linear relationships to these canopy attributes, compromising the accuracy of the models. We use case studies to illustrate the use and misuse of VIs, and argue for using VIs most simply as a measurement of canopy light absorption rather than as a surrogate for detailed features of canopy architecture. Used this way, VIs are compatible with "Big Leaf" SVAT and GCMs that assume that canopy carbon and moisture fluxes have the same relative response to the environment as any single leaf, simplifying the task of modeling complex landscapes.
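As a concrete instance of the red/NIR ratioing the abstract describes, a minimal NDVI computation is sketched below; the band reflectances are hypothetical.

```python
# A minimal sketch of a normalized difference vegetation index (NDVI),
# one common form of the red/NIR ratioing described above.
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); near-linear in canopy light absorption."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances: water, bare soil, dense vegetation.
print(ndvi([0.02, 0.30, 0.50], [0.05, 0.25, 0.08]))  # ~[-0.43, 0.09, 0.72]
```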
Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn
2008-09-30
The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and estimator choice. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain of MLE enhances with stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
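The attenuation effect and its correction can be illustrated with a small simulation. This is a generic sketch of regression dilution and the regression-of-replicates correction factor described above, not the authors' maximum likelihood procedure, and all parameter values are assumptions.

```python
# A minimal simulation of attenuation bias: the naive slope shrinks by the
# reliability ratio, and dividing by the slope of the second measurement on
# the first recovers an approximately unbiased slope.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 2000, 1.0
x_true = rng.normal(size=n)                  # true predictor
x1 = x_true + rng.normal(scale=0.7, size=n)  # first error-prone measurement
x2 = x_true + rng.normal(scale=0.7, size=n)  # replicate from a reliability study
y = beta * x_true + rng.normal(scale=0.5, size=n)

naive = np.polyfit(x1, y, 1)[0]              # biased toward zero
lam = np.polyfit(x1, x2, 1)[0]               # slope of 2nd measurement on 1st
print(naive, naive / lam)                    # ~0.67 vs ~1.0
```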
NASA Astrophysics Data System (ADS)
Park, Kwangsoo
In this dissertation, a research effort aimed at development and implementation of a direct field test method to evaluate the linear and nonlinear shear modulus of soil is presented. The field method utilizes a surface footing that is dynamically loaded horizontally. The test procedure involves applying static and dynamic loads to the surface footing and measuring the soil response beneath the loaded area using embedded geophones. A wide range in dynamic loads under a constant static load permits measurements of linear and nonlinear shear wave propagation from which shear moduli and associated shearing strains are evaluated. Shear wave velocities in the linear and nonlinear strain ranges are calculated from time delays in waveforms monitored by geophone pairs. Shear moduli are then obtained using the shear wave velocities and the mass density of the soil. Shear strains are determined using particle displacements calculated from particle velocities measured at the geophones by assuming a linear variation between geophone pairs. The field test method was validated by conducting an initial field experiment at a sandy site in Austin, Texas. Then, field experiments were performed on cemented alluvium, a complex, hard-to-sample material. Three separate locations at Yucca Mountain, Nevada were tested. The tests successfully measured: (1) the effect of confining pressure on shear and compression moduli in the linear strain range and (2) the effect of strain on shear moduli at various states of stress in the field. The field measurements were first compared with empirical relationships for uncemented gravel. This comparison showed that the alluvium was clearly cemented. The field measurements were then compared to other independent measurements including laboratory resonant column tests and field seismic tests using the spectral-analysis-of-surface-waves method. The results from the field tests were generally in good agreement with the other independent test results, indicating that the proposed method has the ability to directly evaluate complex material like cemented alluvium in the field.
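The reduction from measured quantities to shear modulus described above amounts to two steps, sketched below with hypothetical numbers: a shear wave velocity from the geophone-pair time delay, then G = ρVs².

```python
# A minimal sketch of the reductions described: shear wave velocity from the
# arrival-time delay between a geophone pair, then G = rho * Vs**2.
# All numbers are hypothetical, not the dissertation's data.
spacing_m = 0.61          # distance between two embedded geophones
delay_s = 0.0025          # travel-time delay picked from the waveforms
rho = 1900.0              # soil mass density, kg/m^3

vs = spacing_m / delay_s  # shear wave velocity, m/s
G = rho * vs ** 2         # shear modulus, Pa
print(f"Vs = {vs:.0f} m/s, G = {G/1e6:.1f} MPa")  # Vs = 244 m/s, G = 113.1 MPa
```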
Linear nozzle with tailored gas plumes
Leon, David D.; Kozarek, Robert L.; Mansour, Adel; Chigier, Norman
2001-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Linear nozzle with tailored gas plumes and method
Leon, David D.; Kozarek, Robert L.; Mansour, Adel; Chigier, Norman
1999-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
NASA Astrophysics Data System (ADS)
Montecinos, S.; Barrientos, P.
2006-03-01
A photochemical model of the atmosphere constitutes a non-linear, non-autonomous dynamical system, forced by the Earth's rotation. Some studies have shown that the region of the mesopause tends towards non-linear responses such as period-doubling cascades and chaos. In these studies, simple approximations for the diurnal variations of the photolysis rates are assumed. The goal of this article is to investigate what happens if the more realistic, calculated photolysis rates are introduced. It is found that, if the usual approximations (sinusoidal and step functions) are assumed, the responses of the system are similar: it converges to a 2-day periodic solution. If the more realistic, calculated diurnal cycle is introduced, a new 4-day subharmonic appears.
Thermosolutal convection during directional solidification. II - Flow transitions
NASA Technical Reports Server (NTRS)
Mcfadden, G. B.; Coriell, S. R.
1987-01-01
The influence of thermosolutal convection on solute segregation in crystals grown by vertical directional solidification of binary metallic alloys or semiconductors is studied. Finite differences are used in a two-dimensional time-dependent model which assumes a planar crystal-melt interface to obtain numerical results. It is assumed that the configuration is periodic in the horizontal direction. Consideration is given to the possibility of multiple flow states sharing the same period. The results are represented in bifurcation diagrams of the nonlinear states associated with the critical points of linear theory. Variations of the solutal Rayleigh number can lead to the occurrence of multiple steady states, time-periodic states, and quasi-periodic states. This case is compared to that of thermosolutal convection with linear vertical gradients and stress-free boundaries.
Linear nozzle with tailored gas plumes
Kozarek, Robert L.; Straub, William D.; Fischer, Joern E.; Leon, David D.
2003-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
NASA Astrophysics Data System (ADS)
Chang, R. Y.-W.; Slowik, J. G.; Shantz, N. C.; Vlasenko, A.; Liggio, J.; Sjostedt, S. J.; Leaitch, W. R.; Abbatt, J. P. D.
2010-06-01
Cloud condensation nuclei (CCN) concentrations were measured at Egbert, a rural site in Ontario, Canada during the spring of 2007. The CCN concentrations were compared to values predicted from the aerosol chemical composition and size distribution using κ-Köhler theory, with the specific goal of this work being to determine the hygroscopic parameter (κ) of the oxygenated organic component of the aerosol, assuming that oxygenation drives the hygroscopicity for the entire organic fraction of the aerosol. The hygroscopicity of the oxygenated fraction of the organic component, as determined by an Aerodyne aerosol mass spectrometer (AMS), was characterised by two methods. First, positive matrix factorization (PMF) was used to separate oxygenated and unoxygenated organic aerosol factors. By assuming that the unoxygenated factor is completely non-hygroscopic and by varying κ of the oxygenated factor so that the predicted and measured CCN concentrations are internally consistent and in good agreement, κ of the oxygenated organic factor was found to be 0.22±0.04 for the suite of measurements made during this five-week campaign. In a second, equivalent approach, we continue to assume that the unoxygenated component of the aerosol, with a mole ratio of atomic oxygen to atomic carbon (O/C) ≈ 0, is completely non-hygroscopic, and we postulate a simple linear relationship between κorg and O/C. Under these assumptions, the κ of the entire organic component for bulk aerosols measured by the AMS can be parameterised as κorg=(0.29±0.05)·(O/C), for the range of O/C observed in this study (0.3 to 0.6). These results are averaged over our five-week study at one location using only the AMS for composition analysis. Empirically, our measurements are consistent with κorg generally increasing with increasing particle oxygenation, but high uncertainties preclude us from testing this hypothesis. Lastly, we examine select periods of different aerosol composition, corresponding to different air mass histories, to determine the generality of the campaign-wide findings described above.
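The closure logic above can be sketched with the standard volume-weighted kappa mixing rule. The component split and the sulfate kappa below are assumptions; the 0.29·(O/C) slope is the abstract's parameterisation.

```python
# A minimal sketch of the volume-weighted kappa mixing rule used in
# kappa-Koehler closure studies; inputs are hypothetical.
def kappa_mix(fractions_and_kappas):
    """kappa of a mixture = sum of volume fraction * component kappa."""
    return sum(eps * k for eps, k in fractions_and_kappas)

o_to_c = 0.45                      # assumed O/C of the organic component
kappa_org = 0.29 * o_to_c          # the abstract's linear parameterisation
components = [(0.40, 0.6),         # e.g. ammonium sulfate, kappa ~ 0.6 (assumed)
              (0.60, kappa_org)]   # organic fraction
print(f"kappa_total = {kappa_mix(components):.3f}")
```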
Effect of gravity on terminal particle settling velocity on Moon, Mars and Earth
NASA Astrophysics Data System (ADS)
Kuhn, Nikolaus J.
2013-04-01
Gravity has a non-linear effect on the settling velocity of sediment particles in liquids and gases due to the interdependence of settling velocity, drag and friction. However, Stokes' Law, the common way of estimating the terminal velocity of a particle moving in a gas or liquid, assumes a linear relationship between terminal velocity and gravity. For terrestrial applications, this "error" is not relevant, but it may strongly influence the terminal velocity achieved by settling particles on Mars. False estimates of these settling velocities will, in turn, affect the interpretation of particle sizes observed in sedimentary rocks on Mars. Wrong interpretations may occur, for example, when the texture of sedimentary rocks is linked to the amount and hydraulics of runoff and thus ultimately the environmental conditions on Mars at the time of their formation. A good understanding of particle behaviour in liquids on Mars is therefore essential. In principle, the effect of lower gravity on settling velocity can also be achieved by reducing the difference in density between particle and gas or liquid. However, the use of such analogues on Earth to simulate the lower gravity on Mars creates other problems because the properties (i.e. viscosity) and interaction of the liquids and sediment (i.e. flow around the boundary layer between liquid and particle) differ from those of water and mineral particles. An alternative for measuring the actual settling velocities of particles under Martian gravity, on Earth, is offered by placing a settling tube on a reduced gravity flight and conducting settling tests within the 20 to 25 seconds of Martian gravity that can be simulated during such a flight. In this presentation we report the results of such a test conducted during a reduced gravity flight in November 2012. The results explore the strength of the non-linearity in the gravity-settling velocity relationship for terrestrial, lunar and Martian gravity.
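The linear gravity dependence in Stokes' law that the flight experiment tests can be made explicit in a few lines; particle and fluid properties below are illustrative values for quartz in water.

```python
# A minimal sketch of the linear gravity scaling in Stokes' law:
# v_t = (rho_p - rho_f) * g * d**2 / (18 * mu).
def stokes_velocity(d, g, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3):
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

d = 50e-6  # 50-micron particle
for name, g in [("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62)]:
    print(f"{name}: v_t = {stokes_velocity(d, g)*1000:.2f} mm/s")
# Stokes' law predicts v_t proportional to g; the flight experiments probe
# how far drag non-linearity breaks this proportionality.
```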
Trasande, Leonardo; DiGangi, Joseph; Evers, David C; Petrlik, Jindrich; Buck, David G; Šamánek, Jan; Beeler, Bjorn; Turnquist, Madeline A; Regan, Kevin
2016-12-01
Several developing countries have limited or no information about exposures near anthropogenic mercury sources and no studies have quantified costs of mercury pollution or economic benefits to mercury pollution prevention in these countries. In this study, we present data on mercury concentrations in human hair from subpopulations in developing countries most likely to benefit from the implementation of the Minamata Convention on Mercury. These data are then used to estimate economic costs of mercury exposure in these communities. Hair samples were collected from sites located in 15 countries. We used a linear dose-response relationship that previously identified a 0.18 IQ point decrement per part per million (ppm) increase in hair mercury, and modeled a base case scenario assuming a reference level of 1 ppm, and a second scenario assuming no reference level. We then estimated the corresponding increases in intellectual disability and lost Disability-Adjusted Life Years (DALY). A total of 236 participants provided hair samples for analysis, with an estimated population at risk of mercury exposure near the 15 sites of 11,302,582. Average mercury levels were in the range of 0.48 ppm-4.60 ppm, and 61% of all participants had hair mercury concentrations greater than 1 ppm, the level that approximately corresponds to the USA EPA reference dose. An additional 1310 cases of intellectual disability attributable to mercury exposure were identified annually (4110 assuming no reference level), resulting in 16,501 lost DALYs (51,809 assuming no reference level). A total of $77.4 million in lost economic productivity was estimated assuming a 1 ppm reference level and $130 million if no reference level was used. We conclude that significant mercury exposures occur in developing and transition country communities near sources named in the Minamata Convention, and our estimates suggest that a large economic burden could be avoided by timely implementation of measures to prevent mercury exposures. Copyright © 2016 Elsevier Ltd. All rights reserved.
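The dose-response bookkeeping described can be sketched as follows. The 0.18 IQ-point slope and the 1 ppm reference level are from the study; the sample hair concentrations are hypothetical.

```python
# A minimal sketch of the linear dose-response logic: 0.18 IQ points lost per
# ppm hair mercury above a 1 ppm reference level (or above zero in the
# no-reference-level scenario).
def iq_decrement(hair_hg_ppm, reference_ppm=1.0, slope=0.18):
    return slope * max(0.0, hair_hg_ppm - reference_ppm)

for hg in [0.5, 1.5, 4.6]:
    base = iq_decrement(hg)                     # base case: 1 ppm reference
    no_ref = iq_decrement(hg, reference_ppm=0)  # second scenario: no reference
    print(f"{hg} ppm -> {base:.2f} IQ points (ref), {no_ref:.2f} (no ref)")
```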
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
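For a concrete sense of the method, below is a minimal equivalent-linearization iteration for a Duffing-type single-mode system under Gaussian white noise. This is a textbook sketch with parameter values chosen for illustration, not the plate model of the paper.

```python
# Equivalent linearization for m*x'' + c*x' + k*(x + eps*x**3) = w(t),
# with w white noise of two-sided PSD S0. Under the Gaussian-response
# assumption E[x^4] = 3*sigma**4, the equivalent stiffness is
# k_eq = k*(1 + 3*eps*sigma**2), and the linear-system variance
# sigma**2 = pi*S0/(c*k_eq) is iterated to a fixed point.
import math

m, c, k, eps, S0 = 1.0, 0.05, 1.0, 0.5, 1e-3   # made-up system parameters

sigma2 = math.pi * S0 / (c * k)   # start from the linear (eps = 0) variance
for _ in range(50):
    k_eq = k * (1.0 + 3.0 * eps * sigma2)
    sigma2 = math.pi * S0 / (c * k_eq)
print(f"k_eq = {k_eq:.4f}, rms response = {math.sqrt(sigma2):.4f}")
```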
Helicons in uniform fields. I. Wave diagnostics with hodograms
NASA Astrophysics Data System (ADS)
Urrutia, J. M.; Stenzel, R. L.
2018-03-01
The wave equation for whistler waves is well known and has been solved in Cartesian and cylindrical coordinates, yielding plane waves and cylindrical waves. In space plasmas, waves are usually assumed to be plane waves; in small laboratory plasmas, they are often assumed to be cylindrical "helicon" eigenmodes. Experimental observations fall in between both models. Real waves are usually bounded and may rotate like helicons. Such helicons are studied experimentally in a large laboratory plasma which is essentially a uniform, unbounded plasma. The waves are excited by loop antennas whose properties determine the field rotation and transverse dimensions. Both m = 0 and m = 1 helicon modes are produced and analyzed by measuring the wave magnetic field in three-dimensional space and time. From Ampère's law and Ohm's law, the current density and electric field vectors are obtained. Hodograms for these vectors are produced. The sign ambiguity of the hodogram normal with respect to the direction of wave propagation is demonstrated. In general, electric and magnetic hodograms differ, but both together yield the wave vector direction unambiguously. Vector fields of the hodogram normal yield the phase flow, including phase rotation for helicons. Some helicons can have locally a linear polarization, which is identified by the hodogram ellipticity. Alternatively, the amplitude oscillation in time yields a measure of the wave polarization. It is shown that wave interference produces linear polarization. These observations emphasize that single-point hodogram measurements are inadequate to determine the wave topology unless plane waves are assumed. Observations of linear polarization indicate wave packets but not plane waves. A simple qualitative diagnostic for the wave polarization is the measurement of the magnetic field magnitude in time. Circular polarization has a constant amplitude; linear polarization results in amplitude modulations.
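The amplitude diagnostic in the closing sentences can be demonstrated numerically. The toy construction below (a linear wave as the sum of two counter-rotating circular components) is an illustration, not the paper's analysis.

```python
# |B(t)| is constant for circular polarization but modulated for linear
# polarization, here built from two interfering counter-rotating components.
import numpy as np

t = np.linspace(0.0, 4 * np.pi, 1000)
b_circ = np.hypot(np.cos(t), np.sin(t))                  # circular wave
b_lin = np.hypot(np.cos(t) + np.cos(t), np.sin(t) - np.sin(t)) / 2.0

print(f"circular: |B| range {np.ptp(b_circ):.3f}")       # ~0.000
print(f"linear:   |B| range {np.ptp(b_lin):.3f}")        # ~1.000 (modulated)
```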
The energetic consequences of habitat structure for forest stream salmonids.
Naman, Sean M; Rosenfeld, Jordan S; Kiffney, Peter M; Richardson, John S
2018-05-08
1. Increasing habitat availability (i.e. habitat suitable for occupancy) is often assumed to elevate the abundance or production of mobile consumers; however, this relationship is often nonlinear (threshold or unimodal). Identifying the mechanisms underlying these nonlinearities is essential for predicting the ecological impacts of habitat change, yet the functional forms and ultimate causation of consumer-habitat relationships are often poorly understood. 2. Nonlinear effects of habitat on animal abundance may manifest through physical constraints on foraging that restrict consumers from accessing their resources. Subsequent spatial incongruence between consumers and resources should lead to unimodal or saturating effects of habitat availability on consumer production if increasing the area of habitat suitable for consumer occupancy comes at the expense of habitats that generate resources. However, the shape of this relationship could be sensitive to cross-ecosystem prey subsidies, which may be unrelated to recipient habitat structure and result in more linear habitat effects on consumer production. 3. We investigated habitat-productivity relationships for juveniles of stream-rearing Pacific salmon and trout (Oncorhynchus spp.), which typically forage in low-velocity pool habitats, while their prey (drifting benthic invertebrates) are produced upstream in high-velocity riffles. However, juvenile salmonids also consume subsidies of terrestrial invertebrates that may be independent of pool-riffle structure. 4. We measured salmonid biomass production in 13 experimental enclosures each containing a downstream pool and upstream riffle, spanning a gradient of relative pool area (14-80% pool). Increasing pool relative to riffle habitat area decreased prey abundance, leading to a nonlinear saturating effect on fish production. We then used bioenergetics model simulations to examine how the relationship between pool area and salmonid biomass is affected by varying levels of terrestrial subsidy. Simulations indicated that increasing terrestrial prey inputs linearized the effect of habitat availability on salmonid biomass, while decreasing terrestrial inputs exaggerated a 'hump-shaped' effect. 5. Our results imply that nonlinear effects of habitat availability on consumer production can arise from trade-offs between habitat suitable for consumer occupancy and habitat that generates prey. However, cross-ecosystem prey subsidies can effectively decouple this trade-off and modify consumer-habitat relationships in recipient systems. This article is protected by copyright. All rights reserved.
Liu, Jian; Wang, Chunlei; Guo, Pan; Shi, Guosheng; Fang, Haiping
2013-12-21
Using molecular dynamics simulations, we show a fine linear relationship between surface energies and microscopic Lennard-Jones parameters of super-hydrophilic surfaces. The linear slope of the super-hydrophilic surfaces is consistent with the linear slope of the super-hydrophobic, hydrophobic, and hydrophilic surfaces where stable water droplets can stand, indicating that there is a universal linear behavior of the surface energies with the water-surface van der Waals interaction that extends from the super-hydrophobic to super-hydrophilic surfaces. Moreover, we find that the linear relationship exists for various substrate types, and the linear slopes of these different types of substrates are dependent on the surface atom density, i.e., higher surface atom densities correspond to larger linear slopes. These results enrich our understanding of water behavior on solid surfaces, especially the water wetting behaviors on uncharged super-hydrophilic metal surfaces.
The morphology of the ridge belts on Venus
NASA Astrophysics Data System (ADS)
Kriuchkov, V. P.
1990-06-01
The length and spacing of linear features were measured for ridge and groove belts, for the outer mountain zones of the Lakshmi Planum, and for the outer ridge zones of coronal structures. The distributions of these parameters show small but significant differences in most of the cases. The ridges are assumed to result from deformations. Deformed-layer thicknesses were estimated for various types of linear subdivisions.
Heather, F J; Childs, D Z; Darnaude, A M; Blanchard, J L
2018-01-01
Accurate information on the growth rates of fish is crucial for fisheries stock assessment and management. Empirical life history parameters (von Bertalanffy growth) are widely fitted to cross-sectional size-at-age data sampled from fish populations. This method often assumes that environmental factors affecting growth remain constant over time. The current study utilized longitudinal life history information contained in otoliths from 412 juveniles and adults of gilthead seabream, Sparus aurata, a commercially important species fished and farmed throughout the Mediterranean. Historical annual growth rates over 11 consecutive years (2002-2012) in the Gulf of Lions (NW Mediterranean) were reconstructed to investigate the effect of temperature variations on the annual growth of this fish. S. aurata growth was modelled linearly as the relationship between otolith size at year t and otolith size at the previous year, t-1. The effect of temperature on growth was modelled with linear mixed effects models and a simplified linear model to be implemented in a cohort Integral Projection Model (cIPM). The cIPM was used to project S. aurata growth, year to year, under different temperature scenarios. Our results indicate that the current increase in summer temperatures has a negative effect on S. aurata annual growth in the Gulf of Lions. They suggest that global warming has already had, and will continue to have, a significant impact on S. aurata size-at-age, with important implications for age-structured stock assessments and reference points used in fisheries.
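The simplified linear growth model can be sketched as an ordinary least-squares fit; column names and data below are hypothetical, and the paper's fuller analysis used mixed effects models with fish-level random effects.

```python
# A hedged sketch of the simplified linear growth model: otolith size at year
# t regressed on size at year t-1 plus summer temperature (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "otolith_t":   [1.5, 2.0, 2.4, 2.7, 1.6, 2.1, 2.5, 2.8],
    "otolith_tm1": [1.0, 1.5, 2.0, 2.4, 1.1, 1.6, 2.1, 2.5],
    "summer_temp": [22.0, 24.5, 25.5, 23.0, 22.0, 24.5, 25.5, 23.0],
})
fit = smf.ols("otolith_t ~ otolith_tm1 + summer_temp", df).fit()
print(fit.params)  # a negative summer_temp coefficient would match the finding
```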
NASA Astrophysics Data System (ADS)
Thompson, A. P.; Swiler, L. P.; Trott, C. R.; Foiles, S. M.; Tucker, G. J.
2015-03-01
We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
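The fitting step described (energies linear in bispectrum components, coefficients from weighted least squares) reduces to a standard weighted regression; the sketch below uses random placeholder data rather than QM energies.

```python
# A minimal sketch of the SNAP fitting step: energies assumed linear in
# per-configuration bispectrum components, coefficients from weighted least
# squares. Bispectrum values and weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_cfg, n_bispec = 200, 10
B = rng.normal(size=(n_cfg, n_bispec))       # summed bispectrum components
beta_true = rng.normal(size=n_bispec)
E = B @ beta_true + rng.normal(scale=0.01, size=n_cfg)  # "QM" training energies
w = rng.uniform(0.5, 2.0, size=n_cfg)        # per-configuration weights

sqrt_w = np.sqrt(w)[:, None]
beta, *_ = np.linalg.lstsq(B * sqrt_w, E * sqrt_w.ravel(), rcond=None)
print(np.max(np.abs(beta - beta_true)))      # small: coefficients recovered
```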
Prehn, Richmond T.
2010-01-01
All nascent neoplasms probably elicit at least a weak immune reaction. However, the initial effect of the weak immune reaction on a nascent tumor is always stimulatory rather than inhibitory to tumor growth, assuming only that exposure to the tumor antigens did not antedate the initiation of the neoplasm (as may occur in some virally induced tumors). This conclusion derives from the observation that the relationship between the magnitude of an adaptive immune reaction and tumor growth is not linear but varies such that while large quantities of antitumor immune reactants tend to inhibit tumor growth, smaller quantities of the same reactants are, for unknown reasons, stimulatory. Any immune reaction must presumably be small before it can become large; hence the initial reaction to the first presentation of a tumor antigen must always be small and in the stimulatory portion of this nonlinear relationship. In mouse-skin carcinogenesis experiments it was found that premalignant papillomas were variously immunogenic, but that the carcinomas that arose in them were, presumably because of induced immune tolerance, nonimmunogenic in the animal of origin. PMID:20811480
Toward a holographic theory for general spacetimes
NASA Astrophysics Data System (ADS)
Nomura, Yasunori; Salzetta, Nico; Sanches, Fabio; Weinberg, Sean J.
2017-04-01
We study a holographic theory of general spacetimes that does not rely on the existence of asymptotic regions. This theory is to be formulated in a holographic space. When a semiclassical description is applicable, the holographic space is assumed to be a holographic screen: a codimension-1 surface that is capable of encoding states of the gravitational spacetime. Our analysis is guided by conjectured relationships between gravitational spacetime and quantum entanglement in the holographic description. To understand basic features of this picture, we catalog predictions for the holographic entanglement structure of cosmological spacetimes. We find that qualitative features of holographic entanglement entropies for such spacetimes differ from those in AdS/CFT but that the former reduce to the latter in the appropriate limit. The Hilbert space of the theory is analyzed, and two plausible structures are found: a direct-sum and "spacetime-equals-entanglement" structure. The former preserves a naive relationship between linear operators and observable quantities, while the latter respects a more direct connection between holographic entanglement and spacetime. We also discuss the issue of selecting a state in quantum gravity, in particular how the state of the multiverse may be selected in the landscape.
Hartmann, Klaas; Steel, Mike
2006-08-01
The Noah's Ark Problem (NAP) is a comprehensive cost-effectiveness methodology for biodiversity conservation that was introduced by Weitzman (1998) and utilizes the phylogenetic tree containing the taxa of interest to assess biodiversity. Given a set of taxa, each of which has a particular survival probability that can be increased at some cost, the NAP seeks to allocate limited funds to conserving these taxa so that the future expected biodiversity is maximized. Finding optimal solutions using this framework is a computationally difficult problem to which a simple and efficient "greedy" algorithm has been proposed in the literature and applied to conservation problems. We show that, although algorithms of this type cannot produce optimal solutions for the general NAP, there are two restricted scenarios of the NAP for which a greedy algorithm is guaranteed to produce optimal solutions. The first scenario requires the taxa to have equal conservation cost; the second scenario requires an ultrametric tree. The NAP assumes a linear relationship between the funding allocated to conservation of a taxon and the increased survival probability of that taxon. This relationship is briefly investigated and one variation is suggested that can also be solved using a greedy algorithm.
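A greedy allocation of the kind discussed can be sketched as below. Note the diversity gain is simplified here to an independent per-taxon weight, which ignores the phylogenetic tree; the sketch only illustrates the greedy structure under the equal-cost scenario, not the full NAP objective.

```python
# A hedged sketch of a greedy conservation allocation with equal costs:
# repeatedly fund the taxon whose survival-probability increase buys the
# largest gain in (simplified) expected diversity.
def greedy_allocate(taxa, budget, cost=1.0):
    """taxa: dict name -> (current_prob, boosted_prob, diversity_weight)."""
    funded = []
    while budget >= cost and len(funded) < len(taxa):
        best = max(
            (t for t in taxa if t not in funded),
            key=lambda t: (taxa[t][1] - taxa[t][0]) * taxa[t][2],
        )
        funded.append(best)
        budget -= cost
    return funded

taxa = {"A": (0.2, 0.9, 3.0), "B": (0.5, 0.8, 5.0), "C": (0.7, 0.95, 1.0)}
print(greedy_allocate(taxa, budget=2))  # ['A', 'B']
```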
Orgambídez-Ramos, Alejandro; de Almeida, Helena
2017-08-01
The Job Demands-Resources model assumes that work engagement mediates the relationship between social support (a job resource) and job satisfaction (an organizational result). However, recent studies suggest that social support can be considered a moderator variable in the relationship between engagement and job satisfaction in nursing staff. The aim of this study is to analyze the moderator role of social support, from the supervisor and from co-workers, in the relationship between work engagement and job satisfaction in a Portuguese nursing sample. We conducted a cross-sectional and correlational study assessing a final sample of 215 participants (55.56% response rate, 77.21% women). Moderation analyses were carried out using multiple and hierarchical linear regression models. Job satisfaction was significantly predicted by work engagement and by social support from the supervisor and from co-workers. The significant interaction in predicting job satisfaction showed that social support from co-workers enhances the effects of work engagement on nurses' satisfaction. A climate of social support among co-workers and higher levels of work engagement have a positive effect on job satisfaction, improving quality of care and reducing turnover intention in nursing staff. Copyright © 2017 Elsevier Inc. All rights reserved.
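In implementation terms, the moderation analysis described is a regression with an interaction term; a minimal sketch with hypothetical data follows.

```python
# A minimal sketch of a moderation analysis: job satisfaction regressed on
# work engagement, co-worker support, and their interaction (made-up data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "satisfaction": [3.1, 4.2, 2.8, 4.8, 3.9, 2.5, 4.5, 3.3],
    "engagement":   [2.9, 4.0, 2.5, 4.9, 3.8, 2.2, 4.4, 3.0],
    "support":      [3.0, 4.1, 2.0, 4.7, 3.5, 2.1, 4.6, 2.9],
})
fit = smf.ols("satisfaction ~ engagement * support", df).fit()
print(fit.params)  # a positive engagement:support term indicates enhancement
```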
García Rodríguez, Y
1997-06-01
Various studies have explored the relationships between unemployment and expectation of success, commitment to work, motivation, causal attributions, self-esteem and depression. A model is proposed that assumes the relationships between these variables are moderated by (a) whether or not the unemployed individual is seeking a first job and (b) age. It is proposed that for the unemployed who are seeking their first job (seekers) the relationships among these variables will be consistent with expectancy-value theory, but for those who have had a previous job (losers), the relationships will be more consistent with learned helplessness theory. It is further assumed that within this latter group the young losers will experience "universal helplessness" whereas the adult losers will experience "personal helplessness".
Kujawa, Ellen Ruth; Goring, Simon; Dawson, Andria; Calcote, Randy; Grimm, Eric; Hotchkiss, Sara C.; Jackson, Stephen T.; Lynch, Elizabeth A.; McLachlan, Jason S.; St-Jacques, Jeannine-Marie; Umbanhowar, Charles; Williams, John W.
2016-01-01
Fossil pollen assemblages provide information about vegetation dynamics at time scales ranging from centuries to millennia. Pollen-vegetation models and process-based models of dispersal typically assume stable relationships between source vegetation and corresponding pollen in surface sediments, as well as stable parameterizations of dispersal and productivity. These assumptions, however, are largely unevaluated. This paper reports a test of the stability of pollen-vegetation relationships using vegetation and pollen data from the Midwestern region of the United States, during a period of large changes in land use and vegetation driven by Euro-American settlement. We compared a dataset of pollen records for the early settlement-era with three other datasets of pollen and forest composition for two time periods: before Euro-American settlement, and the late 20th century. Results from generalized linear models for thirteen genera indicate that pollen-vegetation relationships significantly differ (p < 0.05) between pre-settlement and the modern era for several genera: Fagus, Betula, Tsuga, Quercus, Pinus, and Picea. The estimated pollen source radius for the 8 km gridded vegetation data and associated pollen data is 25–85 km, consistent with prior studies using similar methods and spatial resolutions. Hence, the rapid changes in land cover associated with the Anthropocene affect the accuracy of ecological predictions for both the future and the past. In the Anthropocene, paleoecology should move beyond the assumption that pollen-vegetation relationships are stable over time. Multi-temporal calibration datasets are increasingly possible and enable paleoecologists to better understand the complex processes governing pollen-vegetation relationships through space and time.
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars and the delay time distribution formalism for delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation, which closely relates the Laplace transforms of the galaxy gas accretion history and star formation history, which can be used to simplify the problem of retrieving these quantities in the galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of the chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns as observed at the solar neighbourhood by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law, with the index of the power law being k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed set of data. Our method can be used by other complementary galaxy stellar population synthesis models to predict also the chemical evolution of galaxies.
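For the linear Schmidt-Kennicutt case, the gas equation is a first-order linear ODE that Laplace-transform methods (or a symbolic solver) handle in closed form. The sketch below uses an exponential infall as in the paper; the return fraction R and the numerical values are placeholders introduced for illustration.

```python
# A hedged sketch: with SFR = nu * M_gas, the gas mass obeys
# dM/dt = I0*exp(-t/tau) - (1 - R)*nu*M, solvable in closed form.
import sympy as sp

t = sp.symbols("t", positive=True)
M = sp.Function("M")
nu, R, tau, I0 = 1.0, 0.3, 5.0, 2.0   # efficiency, return fraction, infall scale

ode = sp.Eq(M(t).diff(t), I0 * sp.exp(-t / tau) - (1 - R) * nu * M(t))
sol = sp.dsolve(ode, M(t), ics={M(0): 0})
print(sp.simplify(sol.rhs))           # exponential rise-and-decay solution
```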
Liu, Feng; Walters, Stephen J; Julious, Steven A
2017-10-02
It is important to quantify the dose response for a drug in phase 2a clinical trials so the optimal doses can then be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial of GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax model and the NDLM model, and both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose response curves: linear, Emax model, U-shaped model, and flat response. It is shown that the NDLM method is flexible and can handle a wide variety of dose responses, including monotonic and non-monotonic relationships. In comparison to the NDLM model, the Emax model excelled, with a higher probability of selecting ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response follows a placebo-like curve, an Emax-like curve, or a log-linear shape curve under fixed dose allocation, no adaptive allocation, half adaptive, and adaptive scenarios. The bias, though, is significantly increased for the Emax model if the true dose response follows a U-shaped curve. In most cases the Bayesian Emax model works effectively and efficiently, with low bias and a good probability of success in the case of a monotonic dose response. However, if there is a belief that the dose response could be non-monotonic, then the NDLM is the superior model to assess the dose response.
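For reference, the sigmoid Emax curve at the center of the comparison is sketched below with made-up parameters; the Bayesian fitting and the NDLM itself are beyond this snippet.

```python
# A minimal sketch of the sigmoid Emax dose-response model:
# E(d) = E0 + Emax * d**h / (ED50**h + d**h). Parameter values are invented.
def emax(dose, e0=0.0, emax_=10.0, ed50=20.0, h=1.0):
    return e0 + emax_ * dose ** h / (ed50 ** h + dose ** h)

for d in [0, 5, 20, 80, 320]:
    print(f"dose {d:>3}: response {emax(d):.2f}")
# A monotone curve like this cannot reproduce the U-shaped response seen for
# the lead compound, which is what motivates the more flexible NDLM.
```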
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi analytical holonomic homogenization approach for the non-linear analysis of masonry walls in-plane loaded is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.
NASA Astrophysics Data System (ADS)
Shibata, Hisaichi; Takaki, Ryoji
2017-11-01
A novel method to compute current-voltage characteristics (CVCs) of direct current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem, and we have successfully predicted the parameters that determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.
An outflow boundary condition for aeroacoustic computations
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Hagstrom, Thomas
1995-01-01
A formulation of boundary condition for flows with small disturbances is presented. The authors test their methodology in an axisymmetric jet flow calculation, using both the Navier-Stokes and Euler equations. Solutions in the far field are assumed to be oscillatory. If the oscillatory disturbances are small, the growth of the solution variables can be predicted by linear theory. Eigenfunctions of the linear theory are used explicitly in the formulation of the boundary conditions. This guarantees correct solutions at the boundary in the limit where the predictions of linear theory are valid.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
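For two classes, a closely related classical projection is the Fisher-type direction computed below. This is offered as an illustration of projecting multivariate normal classes to one dimension, not as the program's misclassification-minimizing algorithm; all numbers are assumed.

```python
# LFSPMC seeks a single linear combination b'x minimizing the one-dimensional
# probability of misclassification of the transformed normal densities.
# A standard related starting point is the Fisher-type direction below.
import numpy as np

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.5, 0.2], [0.2, 0.8]])
p1, p2 = 0.5, 0.5                                   # a priori probabilities

b = np.linalg.solve(p1 * S1 + p2 * S2, mu2 - mu1)   # Fisher-type direction
b /= np.linalg.norm(b)
# Projected one-dimensional class densities: N(b'mu_i, b'S_i b)
for mu, S, name in [(mu1, S1, "class 1"), (mu2, S2, "class 2")]:
    print(f"{name}: mean {b @ mu:.3f}, var {b @ S @ b:.3f}")
```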
ERIC Educational Resources Information Center
Unsal, Yasin
2011-01-01
One of the subjects that is confusing and difficult for students to fully comprehend is the concept of angular velocity and linear velocity. It is the relationship between linear and angular velocity that students find difficult; most students understand linear motion in isolation. In this article, we detail the design, construction and…
2015-09-01
single-scattering albedo (SSA) according to Hapke theory assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed... following Hapke (1993) and Mustard and Pieters (1987), assuming the reflectance spectra are bidirectional. SSA spectra were also generated... from AVIRIS data collected during a JPL/USGS campaign in response to the Deep Water Horizon (DWH) oil spill incident. Out of the numerous...
The dynamics and control of large flexible space structures, part 11
NASA Technical Reports Server (NTRS)
Bainum, Peter M.; Reddy, A. S. S. R; Diarra, Cheick M.; Li, Feiyue
1988-01-01
A mathematical model is developed to predict the dynamics of the proposed Spacecraft Control Laboratory Experiment during the stationkeeping phase. The Shuttle and reflector are assumed to be rigid, while the mast connecting the Shuttle to the reflector is assumed to be flexible, with elastic deformations small compared with its length. It is seen that in the presence of gravity-gradient torques, the system assumes a new equilibrium position, primarily due to the offset of the mast attachment point on the reflector from the reflector's mass center. Control is assumed to be provided through the Shuttle's three torquers and through six actuators located in pairs at two points on the mast and at the reflector mass center. Numerical results confirm the robustness of an LQR-derived control strategy during stationkeeping, with maximum control efforts significantly below saturation levels. The linear regulator theory is also used to derive control laws for the linearized model of the rigidized SCOLE configuration, where the mast flexibility is not included. It is seen that this same type of control strategy can be applied for rapid single-axis slewing of the SCOLE through amplitudes as large as 20 degrees. These results provide a definite trade-off between the slightly larger slewing times and the considerable reduction in overall control effort as compared with the results of the two-point boundary value problem application of Pontryagin's Maximum Principle.
NASA Astrophysics Data System (ADS)
Ng, Chris Fook Sheng; Ueda, Kayo; Ono, Masaji; Nitta, Hiroshi; Takami, Akinori
2014-07-01
Despite rising concern on the impact of heat on human health, the risk of high summer temperature on heatstroke-related emergency dispatches is not well understood in Japan. A time-series study was conducted to examine the association between apparent temperature and daily heatstroke-related ambulance dispatches (HSAD) within the Kanto area of Japan. A total of 12,907 HSAD occurring from 2000 to 2009 in five major cities—Saitama, Chiba, Tokyo, Kawasaki, and Yokohama—were analyzed. Generalized additive models and zero-inflated Poisson regressions were used to estimate the effects of daily maximum three-hour apparent temperature (AT) on dispatch frequency from May to September, with adjustment for seasonality, long-term trend, weekends, and public holidays. Linear and non-linear exposure effects were considered. Effects on days when AT first exceeded its summer median were also investigated. City-specific estimates were combined using random effects meta-analyses. Exposure-response relationship was found to be fairly linear. Significant risk increase began from 21 °C with a combined relative risk (RR) of 1.22 (95 % confidence interval, 1.03-1.44), increasing to 1.49 (1.42-1.57) at peak AT. When linear exposure was assumed, combined RR was 1.43 (1.37-1.50) per degree Celsius increment. Overall association was significant the first few times when median AT was initially exceeded in a particular warm season. More than two-thirds of these initial hot days were in June, implying the harmful effect of initial warming as the season changed. Risk increase that began early at the fairly mild perceived temperature implies the need for early precaution.
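The regression structure described can be sketched as a Poisson GLM with a spline in apparent temperature. This is in the spirit of the generalized additive models used, with synthetic data, and omits the zero-inflation and meta-analysis steps.

```python
# A hedged sketch, not the study's code: dispatch counts modeled with a
# spline in apparent temperature plus a weekend indicator.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "at": rng.uniform(10, 38, n),          # apparent temperature, synthetic
    "weekend": rng.integers(0, 2, n),
})
# Synthetic risk that starts rising above 21 C, echoing the reported threshold.
lam = np.exp(-4 + 0.14 * np.clip(df["at"] - 21, 0, None))
df["dispatches"] = rng.poisson(lam)

fit = smf.glm("dispatches ~ bs(at, df=4) + weekend", df,
              family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```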
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into considerations the energy tradeoffs between life history traits and the efficiency of the energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidant. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Dang, Tran Ngoc; Seposo, Xerxes T; Duc, Nguyen Huu Chau; Thang, Tran Binh; An, Do Dang; Hang, Lai Thi Minh; Long, Tran Thanh; Loan, Bui Thi Hong; Honda, Yasushi
2016-01-01
The relationship between temperature and mortality has been found to be U-, V-, or J-shaped in developed temperate countries; however, in developing tropical/subtropical cities, it remains unclear. Our goal was to investigate the relationship between temperature and mortality in Hue, a subtropical city in Viet Nam. We collected daily mortality data from the Vietnamese A6 mortality reporting system for 6,214 deceased persons between 2009 and 2013. A distributed lag non-linear model was used to examine the temperature effects on all-cause and cause-specific mortality by assuming negative binomial distribution for count data. We developed an objective-oriented model selection with four steps following the Akaike information criterion (AIC) rule (i.e. a smaller AIC value indicates a better model). High temperature-related mortality was more strongly associated with short lags, whereas low temperature-related mortality was more strongly associated with long lags. The low temperatures increased risk in all-category mortality compared to high temperatures. We observed elevated temperature-mortality risk in vulnerable groups: elderly people (high temperature effect, relative risk [RR]=1.42, 95% confidence interval [CI]=1.11-1.83; low temperature effect, RR=2.0, 95% CI=1.13-3.52), females (low temperature effect, RR=2.19, 95% CI=1.14-4.21), people with respiratory disease (high temperature effect, RR=2.45, 95% CI=0.91-6.63), and those with cardiovascular disease (high temperature effect, RR=1.6, 95% CI=1.15-2.22; low temperature effect, RR=1.99, 95% CI=0.92-4.28). In Hue, the temperature significantly increased the risk of mortality, especially in vulnerable groups (i.e. elderly, female, people with respiratory and cardiovascular diseases). These findings may provide a foundation for developing adequate policies to address the effects of temperature on health in Hue City.
Linear theory of plasma Čerenkov masers
NASA Astrophysics Data System (ADS)
Birau, M.
1996-11-01
A different theoretical model of Čerenkov instability in the linear amplification regime of plasma Čerenkov masers is developed. The model assumes a cold relativistic annular electron beam propagating through a column of cold dense plasma, the two bodies being immersed in an infinite magnetic guiding field inside a perfect cylindrical waveguide. In order to simplify the calculations, a radially rectangular distribution of plasma and beam density is assumed, and only azimuthally symmetric modes are investigated. The model differs from previous treatments in taking the full electromagnetic structure of both the plasma and the beam into account when interpreting the Čerenkov instability. This leads to alternative results, such as the possibility of emission at several frequencies. In addition, the electric field is calculated taking its radial phase dependence into account, so that a map of the field in the interaction region can be presented.
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters, and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
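The dynamic-programming core can be sketched as a Viterbi pass over discretized slope states. This sketch assumes Gaussian noise on the increments (a simplification of the paper's hidden Markov model, which treats noise on the levels), and the stay probability is a made-up parameter.

```python
import numpy as np

def map_slopes(y, dt, slope_grid, sigma, p_stay=0.95):
    """Viterbi-style MAP decoding of a hidden slope state per increment."""
    dy = np.diff(y)
    K = len(slope_grid)
    logA = np.full((K, K), np.log((1 - p_stay)/(K - 1)))  # switch uniformly
    np.fill_diagonal(logA, np.log(p_stay))                # or stay on a slope
    loglik = -0.5*((dy[:, None] - slope_grid[None, :]*dt)/sigma)**2
    T = len(dy)
    delta = loglik[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA        # scores[i, j]: previous state i -> state j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + loglik[t]
    states = np.empty(T, dtype=int)
    states[-1] = int(delta.argmax())
    for t in range(T - 1, 0, -1):
        states[t-1] = back[t, states[t]]
    return slope_grid[states]

t = np.linspace(0, 10, 201)
truth = np.where(t < 5, 1.0, -2.0)                        # one breakpoint
y = np.cumsum(truth*np.diff(t, prepend=t[0]))             # piecewise linear signal
y += np.random.default_rng(1).normal(0, 0.05, y.size)
est = map_slopes(y, dt=t[1]-t[0], slope_grid=np.linspace(-3, 3, 25), sigma=0.08)
```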
Collaboration: Assumed or Taught?
ERIC Educational Resources Information Center
Kaplan, Sandra N.
2014-01-01
The relationship between collaboration and gifted and talented students often is assumed to be an easy and successful learning experience. However, the transition from working alone to working with others necessitates an understanding of issues related to ability, sociability, and mobility. Collaboration has been identified as both an asset and a…
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
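Both coefficients and the regression line are available in scipy; a minimal sketch on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 2.0*x + 1.0 + rng.normal(0, 1.5, 50)

r, _ = stats.pearsonr(x, y)        # linear association
rho, _ = stats.spearmanr(x, y)     # monotonic (rank-based) association
fit = stats.linregress(x, y)       # simple linear regression of outcome on predictor
print(r, rho, fit.slope, fit.intercept)
```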
Analysis of Learning Curve Fitting Techniques.
1987-09-01
Snippet fragments recovered from the report: references include Neter, John, and others, Applied Linear Regression Models (Homewood, IL: Irwin); SAS User's Guide: Basics, Version 5 Edition; and Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston. For a more detailed explanation of the ordinary least-squares technique, see Neter et al., Applied Linear Regression Models.
Hansen, U P; Gradmann, D; Sanders, D; Slayman, C L
1981-01-01
This paper develops a simple reaction-kinetic model to describe electrogenic pumping and co- (or counter-) transport of ions. It uses the standard steady-state approach for cyclic enzyme- or carrier-mediated transport, but does not assume rate-limitation by any particular reaction step. Voltage-dependence is introduced, after the suggestion of Läuger and Stark (Biochim. Biophys. Acta 211:458-466, 1970), via a symmetric Eyring barrier, in which the charge-transit reaction constants are written as k12 = k°12 exp(zFΔψ/2RT) and k21 = k°21 exp(−zFΔψ/2RT). For interpretation of current-voltage relationships, all voltage-independent reaction steps are lumped together, so the model in its simplest form can be described as a pseudo-2-state model. It is characterized by the two voltage-dependent reaction constants, two lumped voltage-independent reaction constants (κ12, κ21), and two reserve factors (ri, ro) which formally take account of carrier states that are indistinguishable in the current-voltage (I-V) analysis. The model generates a wide range of I-V relationships, depending on the relative magnitudes of the four reaction constants, sufficient to describe essentially all I-V data now available on "active" ion-transport systems. Algebraic and numerical analysis of the reserve factors, by means of expanded pseudo-3-, 4-, and 5-state models, shows them to be bounded and not large for most combinations of reaction constants in the lumped pathway. The most important exception to this rule occurs when carrier decharging immediately follows charge transit of the membrane and is very fast relative to other constituent voltage-independent reactions. Such a circumstance generates kinetic equivalence of chemical and electrical gradients, thus providing a consistent definition of ion-motive forces (e.g., proton-motive force, PMF). With appropriate restrictions, it also yields both linear and log-linear relationships between net transport velocity and either membrane potential or PMF. The model thus accommodates many known properties of proton-transport systems, particularly as observed in "chemiosmotic" or energy-coupling membranes.
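A minimal sketch of the resulting current-voltage curve, using the standard steady-state flux for a two-state cycle, I = zFN(k12·κ21 − k21·κ12)/(k12 + k21 + κ12 + κ21); all rate constants and the carrier density below are illustrative:

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.0        # C/mol, J/mol/K, K
z, N = 1, 1e-12                        # transported charge; carrier density (illustrative)

def pump_current(V, ko12, ko21, kap12, kap21):
    """Pseudo-2-state I-V curve with a symmetric Eyring barrier."""
    u = z*F*V/(2.0*R*T)
    k12, k21 = ko12*np.exp(u), ko21*np.exp(-u)     # voltage-dependent charge transit
    return z*F*N*(k12*kap21 - k21*kap12)/(k12 + k21 + kap12 + kap21)

V = np.linspace(-0.25, 0.10, 200)      # membrane potential, volts
I = pump_current(V, ko12=1e3, ko21=1e3, kap12=50.0, kap21=5e3)
```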
Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.
2012-01-01
Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7x for the infiltrative pattern; 1.5x for the localized invasive mass). PMID:23320179
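The gap between the two size measures is easy to reproduce: the volumetric estimate sums segmented areas over serial sections, while the 2D surrogate builds an ellipsoid (here a sphere) from the maximum linear extent. All numbers below are hypothetical.

```python
import numpy as np

areas = np.array([12.0, 30.5, 44.2, 51.0, 46.3, 28.1, 9.4])  # tumour area per section, mm^2
thickness = 4.0                                              # section spacing, mm

volumetric = areas.sum()*thickness             # summed-section (volumetric) estimate

d_max = 15.0                                   # hypothetical maximum linear extent, mm
ellipsoid = (4.0/3.0)*np.pi*(d_max/2.0)**3     # sphere built from the 2D measure

print(volumetric, ellipsoid, ellipsoid/volumetric)   # the 2D surrogate overestimates
```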
NASA Astrophysics Data System (ADS)
Fujimura, Toshio; Takeshita, Kunimasa; Suzuki, Ryosuke O.
2018-04-01
An analytical approximate solution to non-linear solute- and heat-transfer equations in the unsteady-state mushy zone of Fe-C plain steel has been obtained, assuming a linear relationship between the solid fraction and the temperature of the mushy zone. The heat transfer equations for both the solid and liquid zones, along with the boundary conditions, have been linked with these equations to solve the whole system. The model predictions (e.g., the solidification constants and the effective partition ratio) agree with the generally accepted values and with a separately performed numerical analysis. The solidus temperature predicted by the model is in the intermediate range of the reported formulas. The model and Neumann's solution are consistent in the low carbon range. A conventional numerical heat analysis (i.e., an equivalent specific heat method using the solidus temperature predicted by the model) is consistent with the model predictions for Fe-C plain steels. The model presented herein simplifies the computations needed to solve the simultaneous solute- and heat-transfer equations while searching for a solidus temperature as part of the solution. Thus, this model can reduce the complexity of analyses considering the heat- and solute-transfer phenomena in the mushy zone.
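The central assumption is compact in code: solid fraction linear in temperature across the mushy zone, which also underlies the equivalent-specific-heat treatment (latent heat smeared over the freezing range). The values below are illustrative, not the paper's coefficients.

```python
def solid_fraction(T, T_liq=1516.0, T_sol=1480.0):
    """Solid fraction assumed linear in temperature across the mushy zone."""
    if T >= T_liq:
        return 0.0
    if T <= T_sol:
        return 1.0
    return (T_liq - T)/(T_liq - T_sol)

def c_equivalent(c_p=700.0, L=2.7e5, T_liq=1516.0, T_sol=1480.0):
    """Equivalent specific heat: latent heat smeared over the freezing range (J/kg/K)."""
    return c_p + L/(T_liq - T_sol)

print(solid_fraction(1500.0), c_equivalent())
```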
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
Partitioning of Aromatic Constituents into Water from Jet Fuels.
Tien, Chien-Jung; Shu, Youn-Yuen; Ciou, Shih-Rong; Chen, Colin S
2015-08-01
A comprehensive study of the most commonly used jet fuels (i.e., Jet A-1 and JP-8) was performed to properly assess potential contamination of the subsurface environment from a leaking underground storage tank at an airport. The objectives of this study were to evaluate the concentration ranges of the major components in the water-soluble fraction of jet fuels and to estimate the jet fuel-water partition coefficients (K_fw) for target compounds using partitioning experiments and a polyparameter linear free-energy relationship (PP-LFER) approach. The average molecular weight of Jet A-1 and JP-8 was estimated to be 161 and 147 g/mole, respectively. The density of Jet A-1 and JP-8 was measured to be 786 and 780 g/L, respectively. The distribution of nonpolar target compounds between the fuel and water phases was described using a two-phase liquid-liquid equilibrium model. Models were derived using the Raoult's law convention for the activity coefficients and the liquid solubility. The observed inverse, log-log linear dependence of the K_fw values on the aqueous solubility was well predicted by assuming jet fuel to be an ideal solvent mixture. The experimental partition coefficients were generally well reproduced by the PP-LFER.
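Under the Raoult's-law ideality assumed here, the fuel-water partition coefficient reduces to the molar concentration of the fuel "solvent" divided by the component's liquid aqueous solubility, which reproduces the inverse log-log linear dependence reported. A sketch using the reported Jet A-1 bulk properties and hypothetical solubilities:

```python
import numpy as np

MW_fuel, rho_fuel = 161.0, 786.0        # Jet A-1: g/mol and g/L, from the study
C_fuel = rho_fuel/MW_fuel               # ~4.9 mol/L of fuel acting as the "solvent"

def k_fw_ideal(S_liquid):
    """Ideal (Raoult's law) fuel-water partition coefficient.

    With unit activity coefficient in the fuel, a component at mole fraction x
    dissolves to x*S_liquid in water, while its fuel-phase concentration is
    x*C_fuel, so K_fw = C_fuel/S_liquid (inverse log-log linear in solubility).
    """
    return C_fuel/S_liquid

for S in (2.3e-2, 2.1e-3):              # hypothetical liquid solubilities, mol/L
    print(np.log10(k_fw_ideal(S)))
```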
The Trend Odds Model for Ordinal Data
Capuano, Ana W.; Dawson, Jeffrey D.
2013-01-01
Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example where the proportional odds assumption appears to be violated. PMID:23225520
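Outside SAS, the trend odds likelihood can be maximized directly. The sketch below uses one plausible parametrization, logit P(Y ≤ j | x) = α_j − (β + γj)x, where γ = 0 recovers proportional odds; this is an illustration rather than the paper's exact constraint, and it does not guard against crossing cumulative probabilities at extreme covariate values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def nll(params, x, y, J):
    """Negative log-likelihood, logit P(Y <= j | x) = alpha_j - (beta + gamma*j)*x."""
    alpha = np.cumsum(np.concatenate(([params[0]], np.exp(params[1:J-1]))))  # rising cut-points
    beta, gamma = params[J-1], params[J]
    eta = alpha[None, :] - (beta + gamma*np.arange(J-1))[None, :]*x[:, None]
    cum = np.hstack([np.zeros((len(x), 1)), expit(eta), np.ones((len(x), 1))])
    p = np.clip(np.diff(cum, axis=1), 1e-12, None)        # per-category probabilities
    return -np.log(p[np.arange(len(y)), y]).sum()

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = rng.integers(0, 4, size=300)        # toy ordinal outcome with J = 4 categories
res = minimize(nll, x0=np.zeros(5), args=(x, y, 4), method="Nelder-Mead")
```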
A Novel Optical Model for Remote Sensing of Near-Surface Soil Moisture
NASA Astrophysics Data System (ADS)
Babaeian, E.; Sadeghi, M.; Jones, S. B.; Tuller, M.
2016-12-01
Common triangle and trapezoid methods based on both optical and thermal remote sensing (RS) information have been widely applied in the past to estimate near-surface soil moisture from the soil temperature-vegetation index space (e.g., LST-NDVI). For most cases, this approach assumes a linear relationship between soil moisture and temperature. Though this linearity assumption yields reasonable moisture estimates, it is not always justified, as evidenced by laboratory and field measurements. Furthermore, this approach requires optical as well as thermal RS data for definition of the land surface temperature (LST)-vegetation index space; therefore, it is not applicable to satellites that do not provide thermal output, such as the ESA Sentinel-2. To overcome these limitations, we propose a novel trapezoid model that relies only on optical NIR and SWIR data. The new model was validated using Sentinel-2 and Landsat-8 data for the semiarid Walnut Gulch (AZ) and subhumid Little Washita (OK) watersheds, which vastly differ in land use and surface cover and provide excellent ground-truth moisture information from extensive sensor networks. Preliminary results for 2015-2016 indicate significant potential of the new model, with an RMSE smaller than 4% volumetric near-surface moisture content, and also confirm the enhanced utility of the high spatially and temporally resolved Sentinel-2 data.
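A trapezoid driven only by optical bands can be sketched as follows. The STR transform and linear dry/wet edges follow the authors' related OPTRAM formulation (whether this conference model matches it exactly is an assumption), and the edge coefficients are placeholders that would be fitted per scene.

```python
import numpy as np

def str_index(swir):
    """Shortwave infrared transformed reflectance: STR = (1 - SWIR)^2 / (2*SWIR)."""
    return (1.0 - swir)**2 / (2.0*swir)

def soil_moisture(nir, red, swir, dry=(0.4, 1.0), wet=(2.5, 4.0)):
    """Trapezoid moisture proxy in NDVI-STR space; dry/wet are (intercept, slope)."""
    ndvi = (nir - red)/(nir + red)
    s = str_index(swir)
    s_dry = dry[0] + dry[1]*ndvi           # dry edge (placeholder coefficients)
    s_wet = wet[0] + wet[1]*ndvi           # wet edge (placeholder coefficients)
    return np.clip((s - s_dry)/(s_wet - s_dry), 0.0, 1.0)

print(soil_moisture(nir=0.35, red=0.08, swir=0.22))
```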
Theoretical Studies of Strongly Interacting Fine Particle Systems
NASA Astrophysics Data System (ADS)
Fearon, Michael
Available from UMI in association with The British Library. A theoretical analysis of the time dependent behaviour of a system of fine magnetic particles as a function of applied field and temperature was carried out. The model used was based on a theory assuming Neel relaxation with a distribution of particle sizes. This theory predicted a linear variation of S_{max} with temperature and a finite intercept, which is not reflected by experimental observations. The remanence curves of strongly interacting fine-particle systems were also investigated theoretically. It was shown that the Henkel plot of the dc demagnetisation remanence vs the isothermal remanence is a useful representation of interactions. The form of the plot was found to be a reflection of the magnetic and physical microstructure of the material, which is consistent with experimental data. The relationship between the Henkel plot and the noise of a particulate recording medium, another property dependent on the microstructure, is also considered. The Interaction Field Factor (IFF), a single parameter characterising the non-linearity of the Henkel plot, is investigated. These results are consistent with a previous experimental study. Finally the results of the noise power spectral density for erased and saturated recording media are presented, so that characterisation of interparticle interactions may be carried out with greater accuracy.
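The Henkel construction is compact: non-interacting single-domain particles obey the Wohlfarth relation Id(H) = 1 − 2Ir(H), so deviation of Id vs Ir from that line (the delta-M curve) signals interactions. The curves below are synthetic, and the scalar summary is one possibility, not necessarily the thesis's IFF definition.

```python
import numpy as np

H = np.linspace(0, 1, 11)                    # applied field (arbitrary units)
Ir = np.clip(1.6*H - 0.6*H**2, 0, 1)         # synthetic isothermal remanence, normalized
Id = 1 - 2*Ir + 0.15*np.sin(np.pi*Ir)        # synthetic dc demagnetization remanence

delta_M = Id - (1 - 2*Ir)                    # deviation from the non-interacting line
iff = np.trapz(delta_M, Ir)                  # one scalar summary of the non-linearity
```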
What kind of Relationship is Between Body Mass Index and Body Fat Percentage?
Kupusinac, Aleksandar; Stokić, Edita; Sukić, Enes; Rankov, Olivera; Katić, Andrea
2017-01-01
Although body mass index (BMI) and body fat percentage (BF%) are well known as indicators of nutritional status, there are insufficient data on whether the relationship between them is linear or not. Appropriate linear and quadratic formulas are available to predict BF% from age, gender and BMI. On the other hand, our previous research has shown that an artificial neural network (ANN) is a more accurate method for that. The aim of this study is to analyze the relationship between BMI and BF% by using an ANN and a big dataset (3058 persons). Our results show that this relationship is quadratic rather than linear for both genders and all age groups. Comparing genders, the quadratic relationship is more pronounced in women, while the linear relationship is more pronounced in men. Additionally, our results show that the quadratic relationship is more pronounced in older than in young and middle-aged men, and slightly more pronounced in young and middle-aged than in older women.
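The linear-versus-quadratic comparison reduces to two polynomial fits; a sketch on synthetic data with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
bmi = rng.uniform(18, 40, 500)
bf = -30 + 3.2*bmi - 0.03*bmi**2 + rng.normal(0, 2.5, 500)   # synthetic BF%

lin = np.polyfit(bmi, bf, 1)
quad = np.polyfit(bmi, bf, 2)
sse = lambda c: np.sum((np.polyval(c, bmi) - bf)**2)
print(sse(lin), sse(quad))        # the quadratic fit wins on curved data
```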
NASA Astrophysics Data System (ADS)
Sellers, Piers J.; Heiser, Mark D.; Hall, Forrest G.; Verma, Shashi B.; Desjardins, Raymond L.; Schuepp, Peter M.; Ian MacPherson, J.
1997-03-01
It is commonly assumed that biophysically based soil-vegetation-atmosphere transfer (SVAT) models are scale-invariant with respect to the initial boundary conditions of topography, vegetation condition and soil moisture. In practice, SVAT models that have been developed and tested at the local scale (a few meters or a few tens of meters) are applied almost unmodified within general circulation models (GCMs) of the atmosphere, which have grid areas of 50-500 km². This study, which draws much of its substantive material from the papers of Sellers et al. (1992c, J. Geophys. Res., 97(D17): 19033-19060) and Sellers et al. (1995, J. Geophys. Res., 100(D12): 25607-25629), explores the validity of doing this. The work makes use of the FIFE-89 data set, which was collected over a 2 km × 15 km grassland area in Kansas. The site was characterized by high variability in soil moisture and vegetation condition during the late growing season of 1989. The area also has moderate topography. The 2 km × 15 km 'testbed' area was divided into 68 × 501 pixels of 30 m × 30 m spatial resolution, each of which could be assigned topographic, vegetation condition and soil moisture parameters from satellite and in situ observations gathered in FIFE-89. One or more of these surface fields was area-averaged in a series of simulation runs to determine the impact of using large-area means of these initial or boundary conditions on the area-integrated (aggregated) surface fluxes. The results of the study can be summarized as follows. (1) Analyses and some of the simulations indicated that the relationships describing the effects of moderate topography on the surface radiation budget are near-linear and thus largely scale-invariant. The relationships linking the simple ratio vegetation index (SR), the canopy conductance parameter (▽F) and the canopy transpiration flux are also near-linear and similarly scale-invariant to first order. Because of this, it appears that simple area-averaging operations can be applied to these fields with relatively little impact on the calculated surface heat flux. (2) The relationships linking surface and root-zone soil wetness to the soil surface and canopy transpiration rates are non-linear. However, simulation results and observations indicate that soil moisture variability decreases significantly as an area dries out, which partially cancels out the effects of these non-linear functions. In conclusion, it appears that simple averages of topographic slope and vegetation parameters can be used to calculate surface energy and heat fluxes over a wide range of spatial scales, from a few meters up to many kilometers, at least for grassland sites and areas with moderate topography. Although the relationships between soil moisture and evapotranspiration are non-linear for intermediate soil wetnesses, the dynamics of soil drying act to progressively reduce soil moisture variability and thus the impacts of these non-linearities on the area-averaged surface fluxes. These findings indicate that we may be able to use mean values of topography, vegetation condition and soil moisture to calculate the surface-atmosphere fluxes of energy, heat and moisture at larger length scales, to within an acceptable accuracy for climate modeling work. However, further tests over areas with different vegetation types, soils and more extreme topography are required to improve our confidence in this approach.
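The aggregation issue is Jensen's inequality: for a non-linear response f, f(mean) differs from mean(f), and the gap shrinks with the field's variance, as happens when the area dries. A toy demonstration with a hypothetical moisture-to-flux response:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = rng.beta(2, 2, 10000)                 # heterogeneous soil wetness, 0..1

def flux(theta):                              # toy non-linear moisture -> flux response
    return np.minimum(1.0, 1.6*theta)**2

print(flux(theta.mean()), flux(theta).mean()) # f(mean) != mean(f)

dry = 0.25*theta                              # drying shrinks the variance...
print(flux(dry.mean()), flux(dry).mean())     # ...and with it the aggregation error
```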
Nunez, Michael D.; Vandekerckhove, Joachim; Srinivasan, Ramesh
2016-01-01
Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects. PMID:28435173
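The generative side of such a model is easy to simulate: the per-trial drift rate is a linear function of a single-trial EEG measure, and each trial is an accumulate-to-bound random walk. The coefficients and parameters below are hypothetical, and the hierarchical Bayesian fitting step is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_trial(drift, ndt=0.3, bound=1.0, dt=0.001, sigma=1.0):
    """Euler simulation of one drift-diffusion trial (two absorbing bounds)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift*dt + sigma*np.sqrt(dt)*rng.normal()
        t += dt
    return t + ndt, x > 0                 # (response time, upper-bound choice)

eeg = rng.normal(0.0, 1.0, 200)           # hypothetical single-trial N200 amplitudes
delta0, delta1 = 1.0, 0.4                 # hypothetical linear-link coefficients
trials = [simulate_trial(delta0 + delta1*e) for e in eeg]
```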
A commentary on perception-action relationships in spatial display instruments
NASA Technical Reports Server (NTRS)
Shebilske, Wayne L.
1989-01-01
Transfer of information across disciplines is promoted, while basic and applied researchers are cautioned about the danger of assuming simple relationships between stimulus information, perceptual impressions, and performance, including pattern recognition and sensorimotor skills. A theoretical and empirical foundation for predicting those relationships was developed.
MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, M; Lamey, M; Anderson, R
Purpose: Detailed functioning of linear accelerator physics is well known. Less well developed is a basic understanding of how adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to the practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators with the purpose of teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of an electron beam was modelled using load-line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varied linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator with conical shape, and the time-averaged dose rate at a depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing klystron pulse voltage increased the dose rate to a peak, which then decreased as the beam energy was further increased, due to the fixed pass-through energy of the bending magnet. Increasing accelerator beam current leads to a higher dose per pulse. However, the energy of the electron beam decreases due to beam loading, and so the dose rate eventually peaks and then decreases as the beam current is further increased. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.
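The qualitative behaviour described, dose rate rising with klystron voltage and then falling once the beam energy leaves the bending magnet's pass window, can be mimicked with a toy chain of analytical pieces. None of the coefficients below are SIMAC's; this is purely illustrative.

```python
import numpy as np

def dose_rate(V_klystron, I_beam=0.1, E_pass=6.0, accept=0.15):
    """Toy chain: load-line energy gain, bending-magnet window, dose ~ current."""
    E = 0.08*V_klystron - 5.0*I_beam                        # energy sags with beam loading (toy)
    transmission = np.exp(-0.5*((E - E_pass)/accept)**2)    # magnet pass-through window
    return I_beam * transmission

V = np.linspace(60, 100, 200)          # klystron pulse voltage (arbitrary units)
D = dose_rate(V)                       # rises to a peak, then falls off
```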
Estimation on nonlinear damping in second order distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1989-01-01
An approximation and convergence theory for the identification of nonlinear damping in abstract wave equations is developed. It is assumed that the unknown dissipation mechanism to be identified can be described by a maximal monotone operator acting on the generalized velocity. The stiffness is assumed to be linear and symmetric. Functional analytic techniques are used to establish that solutions to a sequence of finite dimensional (Galerkin) approximating identification problems in some sense approximate a solution to the original infinite dimensional inverse problem.
Cost Assessment for Shielding of C3 Type Facilities
1980-03-01
Snippet fragments recovered from the report: …imperfections and on penetrations. Long-conductor penetrants are assumed to enter the building through a one-quarter-inch-thick entry plate and a shielded… Table-of-contents fragments (page numbers omitted): Currents from Penetrants; Numerical Examples; Design Approach; Design Assuming Linear Behavior of Shield; Envelope Shield; Penetrations; Condition I, New Construction, External Shield; Condition II, New…
EPA Unmix 6.0 Fundamentals & User Guide
Unmix seeks to solve the general mixture problem where the data are assumed to be a linear combination of an unknown number of sources of unknown composition, which contribute an unknown amount to each sample.
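The general mixture problem has the bilinear form X ≈ W·H with non-negativity constraints; scikit-learn's NMF can stand in for it in a sketch, though Unmix's actual self-modeling algorithm differs.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
sources = rng.random((3, 20))                 # 3 unknown sources x 20 measured species
contrib = rng.random((200, 3))                # per-sample source contributions
X = contrib @ sources + 0.01*rng.random((200, 20))

model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(X)                    # estimated contributions per sample
Hhat = model.components_                      # estimated source compositions
```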
Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi
2017-01-01
The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, on the condition of separately taking Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) as transition variables, three models all reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models separately considering Es and Ul as the transition variables both contain one location parameter. The three models applied in the study all favourably describe the non-linear relationship between economic growth and CO2 emissions in China. It also can be seen that the conversion rate of the influence of Ul on per capita CO2 emissions is significantly higher than those of GDPpc and Es on per capita CO2 emissions. PMID:29236083
Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi
2017-12-13
The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, on the condition of separately taking Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) as transition variables, three models all reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models separately considering Es and Ul as the transition variables both contain one location parameter. The three models applied in the study all favourably describe the non-linear relationship between economic growth and CO₂ emissions in China. It also can be seen that the conversion rate of the influence of Ul on per capita CO₂ emissions is significantly higher than those of GDPpc and Es on per capita CO₂ emissions.
Accurate and scalable social recommendation using mixed-membership stochastic block models.
Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta
2016-12-13
With increasing amounts of information available, modeling and predicting user preferences (for books or articles, for example) are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
Matlab code for inversion of frequency-domain, electrostatic geophysical data in terms of scalar scattering amplitudes in the subsurface. The data are assumed to be the difference between two measurements: electric field measurements prior to the injection of an electrically conductive proppant, and the electric field measurements after proppant injection. The proppant is injected into the subsurface via a well, and its purpose is to prop open fractures created by hydraulic fracturing. In both cases the illuminating electric field is assumed to be a vertically incident plane wave. The inversion strategy is to solve a linear system of equations, where each equation defines the amplitude of a candidate scattering volume. The model space is defined by M potential scattering locations, and the frequency-domain data (of which there are k frequencies) are recorded on N receivers. The solution thus solves a kN × M system of linear equations for M scalar amplitudes within the user-defined solution space. Practical Application: Oilfield environments where observed electrostatic geophysical data can reasonably be assumed to be scattered by subsurface proppant volumes. No field validation examples have so far been provided.
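The core numerical step is an ordinary least-squares solve of the stacked kN × M linear system; a sketch (in Python rather than Matlab) with a random matrix standing in for the physics:

```python
import numpy as np

k, N, M = 8, 32, 100                          # frequencies, receivers, candidate scatterers
rng = np.random.default_rng(7)
G = rng.normal(size=(k*N, M)) + 1j*rng.normal(size=(k*N, M))  # stand-in sensitivity matrix
amp_true = np.zeros(M)
amp_true[[10, 55]] = 1.0                      # two scattering volumes "lit up" by proppant
d = G @ amp_true + 0.01*rng.normal(size=k*N)  # difference-field data, pre/post injection

amp_est, *_ = np.linalg.lstsq(G, d, rcond=None)   # least-squares scattering amplitudes
```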
Casual Dating Dissolution: A Typology.
ERIC Educational Resources Information Center
Loyer-Carlson, Vicki L.; Walker, Alexis J.
Although there are many typologies of relationship development and love, it is frequently assumed that all break-ups are alike. This longitudinal study examined persons' cognitions regarding early relationship interactions and/or observations which caused them to think about the viability of the relationship. The role of causal attributions in the…
Generating log-normal mock catalog of galaxies in redshift space
NASA Astrophysics Data System (ADS)
Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro
2017-10-01
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of the galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
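The density part of the recipe, a correlated Gaussian field exponentiated and then Poisson-sampled, is compact. The 1-D toy below skips the power-spectrum bookkeeping and the velocity field of the public code, and the mean galaxy density is assumed.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 256
white = rng.normal(size=n)
g = np.convolve(white, np.ones(16)/4.0, mode="same")   # correlated Gaussian field (toy)
g -= g.mean()
delta = np.exp(g - g.var()/2.0) - 1.0        # log-normal overdensity with mean ~ 0
nbar = 50.0                                  # assumed mean galaxies per cell
counts = rng.poisson(nbar*(1.0 + delta))     # Poisson-sampled galaxy counts
```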
Levine, Stephen Z; Leucht, Stefan
2013-04-01
The treatment and measurement of negative symptoms are currently at issue in schizophrenia, but the clinical meaning of symptom severity and change is unclear. We aimed to offer a clinically meaningful interpretation of severity and change scores on the Scale for the Assessment of Negative Symptoms (SANS). Patients were intention-to-treat participants (n=383) in two double-blind randomized placebo-controlled clinical trials that compared amisulpride with placebo for the treatment of predominant negative symptoms. Equipercentile linking was used to examine extrapolation from (a) CGI-S to SANS severity ratings, and (b) CGI-I to SANS percentage change. Linking was conducted at baseline, 8-14 days, 28-30 days, and 56-60 days of the trials. Across visits, CGI-S ratings of 'not ill' linked to SANS scores of 0-13, and ranged to 'extreme' ratings that linked to SANS scores of 102-105. The relationship between CGI-S and SANS severity scores followed a linear trend (1=0-13, 2=15-56, 3=37-61, 4=49-66, 5=63-75, 6=79-89, 7=102-105). Similarly, the relationship between CGI-I ratings and SANS percentage change followed a linear trend. For instance, CGI-I ratings of 'very much improved' were linked to SANS percent changes of -90 to -67, 'much improved' to -50 to -42, and 'minimally improved' to -21 to -13. The current results uniquely contribute to the debate surrounding negative symptoms by providing clinical meaning to SANS severity and change scores, and so offer direction regarding clinically meaningful response cut-off scores to guide treatment targets for predominant negative symptoms.
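Equipercentile linking matches percentile ranks across the two scales. A minimal sketch on synthetic ratings; real implementations smooth the discrete score distributions first.

```python
import numpy as np

def equipercentile_link(scores_a, scores_b, grid):
    """Map values on scale A to scale B by matching empirical percentile ranks."""
    pct = np.searchsorted(np.sort(scores_a), grid, side="right")/len(scores_a)
    return np.quantile(scores_b, np.clip(pct, 0.0, 1.0))

rng = np.random.default_rng(9)
cgi_s = rng.integers(1, 8, 383).astype(float)               # synthetic 1-7 severity ratings
sans = np.clip(15*cgi_s - 10 + rng.normal(0, 8, 383), 0, 125)
print(equipercentile_link(cgi_s, sans, grid=np.arange(1, 8)))
```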
NASA Astrophysics Data System (ADS)
Lanci, L.; Kent, D. V.
2007-12-01
Low temperature measurements of isothermal remanent magnetization (IRM) in Greenland ice spanning the last glacial and Holocene have shown that ice samples contain a measurable concentration of magnetic minerals which are part of the atmospheric aerosol. Assuming that the source materials do not change much with time, the concentration of magnetic minerals should be proportional to the measured concentration of dust in ice. We have indeed found a consistent linear relationship with the dust content. However, the linear relationship between low temperature ice magnetization and dust concentration has an offset, which when extrapolated to zero dust concentration would seemingly indicate that a significantly large magnetization corresponds to a null amount of dust in ice. Thermal relaxation experiments have shown that magnetic grains of nanometric size carry virtually all of the uncorrelated magnetization. Magnetic measurements in Antarctic ice cores confirm the existence of a similar nanometric-size magnetic fraction, which also appears uncorrelated with the measured aerosol concentration. The magnitude of the uncorrelated magnetization from Vostok is similar to that measured in NorthGRIP ice. Measurements of IRM at 250 K suggest that the superparamagnetic (SP) particles are in the size range of about 7-17 nm, which is compatible with the expected size of particles produced by ablation and subsequent condensation of meteorites in the atmosphere. The concentration of extraterrestrial material in NorthGRIP ice was estimated from the magnetic relaxation data, based on a crude estimate of chondritic Ms. The resulting concentration of 0.78±0.22 ppb for Greenland is in good agreement with the outcome based on published iridium concentrations; a virtually identical concentration of 0.53±0.18 ppb has been measured in the Vostok ice core.
Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.
Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles
2016-02-01
In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region within one or two. Prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to that of four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates.
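The GLM part of such a comparison takes a few lines with statsmodels; the synthetic overdispersed ignition counts below stand in for the Tōhoku data, and the GAM and BRT fits would need additional libraries.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 400
intensity = rng.uniform(4, 10, n)                   # instrumental intensity proxy
exposure = rng.lognormal(0, 1, n)                   # commercial exposure proxy
mu = np.exp(-6 + 0.8*intensity + 0.3*np.log(exposure))
ignitions = rng.negative_binomial(2, 2/(2 + mu))    # overdispersed synthetic counts

X = sm.add_constant(np.column_stack([intensity, np.log(exposure)]))
poisson = sm.GLM(ignitions, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(ignitions, X, family=sm.families.NegativeBinomial()).fit()
print(poisson.aic, negbin.aic)                      # NB should win under overdispersion
```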
Glenn, Edward P.; Huete, Alfredo R.; Nagler, Pamela L.; Nelson, Stephen G.
2008-01-01
Vegetation indices (VIs) are among the oldest tools in remote sensing studies. Although many variations exist, most of them ratio the reflection of light in the red and NIR sections of the spectrum to separate the landscape into water, soil, and vegetation. Theoretical analyses and field studies have shown that VIs are near-linearly related to photosynthetically active radiation absorbed by a plant canopy, and therefore to light-dependent physiological processes, such as photosynthesis, occurring in the upper canopy. Practical studies have used time-series VIs to measure primary production and evapotranspiration, but these are limited in accuracy to that of the data used in ground truthing or calibrating the models used. VIs are also used to estimate a wide variety of other canopy attributes that are used in Soil-Vegetation-Atmosphere Transfer (SVAT), Surface Energy Balance (SEB), and Global Climate Models (GCM). These attributes include fractional vegetation cover, leaf area index, roughness lengths for turbulent transfer, emissivity and albedo. However, VIs often exhibit only moderate, non-linear relationships to these canopy attributes, compromising the accuracy of the models. We use case studies to illustrate the use and misuse of VIs, and argue for using VIs most simply as a measurement of canopy light absorption rather than as a surrogate for detailed features of canopy architecture. Used this way, VIs are compatible with “Big Leaf” SVAT and GCMs that assume that canopy carbon and moisture fluxes have the same relative response to the environment as any single leaf, simplifying the task of modeling complex landscapes. PMID:27879814
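The canonical example is NDVI, the normalized difference of NIR and red reflectance; a minimal sketch:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, (NIR - red)/(NIR + red)."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances: water, bare soil, dense canopy
print(ndvi(np.array([0.02, 0.25, 0.45]), np.array([0.05, 0.20, 0.05])))
```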
Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.
Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon
2017-05-01
Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
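The smoothing step can be sketched with a kernel-weighted local linear estimator; the neural-network extrapolation that yields the actual RUL is only indicated in a comment, and all parameters are illustrative.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Kernel-weighted local linear estimator evaluated at points x0."""
    out = np.empty_like(x0)
    for i, xi in enumerate(x0):
        sw = np.exp(-0.25*((x - xi)/h)**2)        # square root of a Gaussian kernel weight
        X = np.column_stack([np.ones_like(x), x - xi])
        beta, *_ = np.linalg.lstsq(X*sw[:, None], y*sw, rcond=None)
        out[i] = beta[0]                          # local intercept = smoothed value
    return out

rng = np.random.default_rng(11)
t = np.linspace(0, 100, 300)
vib = 0.1*np.exp(0.03*t) + rng.normal(0, 0.05, 300)   # rising vibration amplitude
smooth = local_linear(t, vib, t, h=5.0)
# A network trained on `smooth` would then extrapolate future vibration levels;
# RUL = first future time the prediction crosses a chosen failure threshold.
```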
Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.
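The prediction comparison maps onto standard scikit-learn tooling. The sketch below invents the data (six standardized hand measurements, height cut into four classes); the paper's actual features and bin sizes differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n = 500
hand = rng.normal(0, 1, (n, 6))                      # six standardized hand measurements
height = 170 + 8*hand[:, 0] + 4*hand[:, 1] + rng.normal(0, 4, n)
height_bin = np.digitize(height, [160, 170, 180])    # four height classes (binning is ours)

print(cross_val_score(RandomForestClassifier(random_state=0), hand, height_bin, cv=5).mean())
print(LinearRegression().fit(hand, height).score(hand, height))   # R^2 of the linear fit
```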
Linear Space-Variant Image Restoration of Photon-Limited Images
1978-03-01
…levels of performance of the wavefront sensor. The parameter represents the residual rms wavefront error (measurement noise plus fitting error)… known to be optimum only when the signal and noise are uncorrelated stationary random processes and when the noise statistics are Gaussian. In the… regime of photon-limited imaging, the noise is non-Gaussian and signal-dependent, and it is therefore reasonable to assume that some form of linear…
Morignat, Eric; Gay, Emilie; Vinard, Jean-Luc; Calavas, Didier; Hénaux, Viviane
2015-07-01
In the context of climate change, the frequency and severity of extreme weather events are expected to increase in temperate regions, and potentially have a severe impact on farmed cattle through production losses or deaths. In this study, we used distributed lag non-linear models to describe and quantify the relationship between a temperature-humidity index (THI) and cattle mortality in 12 areas in France. THI incorporates the effects of both temperature and relative humidity and has already been used to quantify the degree of heat stress on dairy cattle, because it reflects the physical stress deriving from extreme conditions better than air temperature alone. Relationships between daily THI and mortality were modeled separately for dairy and beef cattle during the 2003-2006 period. Our general approach was to first determine the shape of the THI-mortality relationship in each area by modeling THI with natural cubic splines. We then modeled each relationship assuming a three-piecewise linear function, to estimate the critical cold and heat THI thresholds for each area, delimiting the thermoneutral zone (i.e. where the risk of death is at its minimum), and the cold and heat effects below and above these thresholds, respectively. Area-specific estimates of the cold or heat effects were then combined in a hierarchical Bayesian model to compute the pooled effects of THI increase or decrease on dairy and beef cattle mortality. A U-shaped relationship, indicating a mortality increase below the cold threshold and above the heat threshold, was found in most of the study areas for dairy and beef cattle. The pooled estimate of the mortality risk associated with a 1°C decrease in THI below the cold threshold was 5.0% for dairy cattle [95% posterior interval: 4.4, 5.5] and 4.4% for beef cattle [2.0, 6.5]. The pooled mortality risk associated with a 1°C increase above the hot threshold was estimated to be 5.6% [5.0, 6.2] for dairy and 4.6% [0.9, 8.7] for beef cattle. Knowing the thermoneutral zone and the temperature effects outside this zone is of primary interest for farmers, because it can help determine when to implement appropriate preventive and mitigation measures.
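The fitted shape is easy to encode: zero slope inside the thermoneutral zone and log-linear risk increases outside it. The thresholds below are placeholders, while the slopes approximate the pooled dairy estimates (about 5.0% per unit THI below the cold threshold, 5.6% above the heat threshold).

```python
import numpy as np

def log_rr(thi, t_cold=55.0, t_heat=72.0, cold=0.049, heat=0.054):
    """Three-piecewise linear log relative risk of mortality versus THI."""
    thi = np.asarray(thi, dtype=float)
    return (np.where(thi < t_cold, cold*(t_cold - thi), 0.0)
            + np.where(thi > t_heat, heat*(thi - t_heat), 0.0))

rr = np.exp(log_rr(np.linspace(40, 85, 10)))   # U-shaped risk curve
```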
General relativity as the effective theory of GL(4,R) spontaneous symmetry breaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomboulis, E. T.
2011-10-15
We assume a GL(4,R) space-time symmetry which is spontaneously broken to SO(3,1). We carry out the coset construction of the effective theory for the nonlinearly realized broken symmetry in terms of the Goldstone fields and matter fields transforming linearly under the unbroken Lorentz subgroup. We then identify functions of the Goldstone and matter fields that transform linearly also under the broken symmetry. Expressed in terms of these quantities, the effective theory reproduces the vierbein formalism of general relativity, with general coordinate invariance being automatically realized nonlinearly over GL(4,R). The coset construction makes no assumptions about any underlying theory that might be responsible for the assumed symmetry breaking. We give a brief discussion of the possibility of field theories with GL(4,R) rather than Lorentz space-time symmetry providing the underlying dynamics.
Tailoring magnetic nanoparticle for transformers application.
Morais, P C; Silva, A S; Leite, E S; Garg, V K; Oliveira, A C; Viali, W R; Sartoratto, P P C
2010-02-01
In this study, photoacoustic spectroscopy was used to investigate the effect of dilution of an oil-based magnetic fluid sample on the magnetic nanoparticle surface-coating. Changes of the photoacoustic signal intensity in the band-L region (640 to 830 nm) upon dilution of the stock magnetic fluid sample were discussed in terms of molecular surface desorption. The model proposed here assumes that the driving force taking the molecules off the nanoparticle surface into the bulk solvent is the gradient of osmotic pressure. This gradient of osmotic pressure is established between the nanoparticle surface and the bulk suspension. It is further assumed that the photoacoustic signal intensity (the area under the photoacoustic spectra) scales linearly with the number of coating molecules (surface grafting) at the nanoparticle surface. This model picture provides a non-linear analytical description of the reduction of the surface grafting coefficient upon dilution, which was successfully used to curve-fit the photoacoustic experimental data.
Intertwining solutions for magnetic relativistic Hartree type equations
NASA Astrophysics Data System (ADS)
Cingolani, Silvia; Secchi, Simone
2018-05-01
We consider the magnetic pseudo-relativistic Schrödinger equation of Hartree type, where m > 0 is a mass parameter, V is an external continuous scalar potential, A is a continuous vector potential, and W is a convolution kernel. We assume that A and V are symmetric with respect to a closed subgroup G of the group of orthogonal linear transformations of the underlying Euclidean space. If for any x the cardinality of the G-orbit of x is infinite, then we prove the existence of infinitely many intertwining solutions, assuming that A is either linear in x or uniformly bounded. The results are proved by means of a new local realization of the square root of the magnetic Laplacian as a local elliptic operator with a Neumann boundary condition on a half-space. Moreover, we derive an existence result for a ground state intertwining solution for bounded vector potentials, if G admits a finite orbit.
A Control Model: Interpretation of Fitts' Law
NASA Technical Reports Server (NTRS)
Connelly, E. M.
1984-01-01
The analytical results for several models are given: a first-order model, where it is assumed that the hand velocity can be directly controlled, and a second-order model, where it is assumed that the hand acceleration can be directly controlled. Two different types of control laws are investigated. One is a linear function of the hand error and error rate; the other is the time-optimal control law. Results show that the first- and second-order models with the linear control law produce a movement time (MT) function with the exact form of Fitts' Law. The control-law interpretation implies that the effect of target width on MT must be a result of the vertical motion which elevates the hand from the starting point and drops it on the target at the target edge. The time-optimal control law did not produce a movement-time formula similar to Fitts' Law.
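The first-order case makes the connection explicit: with hand error controlled as de/dt = −ke, the error decays as e(t) = A·exp(−kt), and requiring |e| ≤ W/2 gives MT = (1/k)·ln(2A/W), i.e. Fitts' Law with slope ln(2)/k. A minimal sketch:

```python
import numpy as np

def movement_time(A, W, k=5.0):
    """MT implied by the first-order linear control law de/dt = -k*e."""
    return np.log(2.0*A/W)/k          # equals (ln 2 / k) * log2(2A/W): Fitts' Law

for A, W in [(0.2, 0.02), (0.4, 0.02), (0.4, 0.04)]:
    print(A, W, movement_time(A, W))  # doubling A/W adds a constant ln(2)/k to MT
```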
Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.
Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C
2012-12-01
Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control. The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs.
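An illustrative sketch of the final logarithmic mapping, with invented coefficients standing in for the fitted MA Case Mix regression estimates:

```python
# Illustrative logarithmic mapping from LOS to per-diem (PD) cost, then to a
# total stay cost. Coefficients and the external average PD are invented.
import numpy as np

def pd_cost(los, a=9000.0, b=-2500.0):
    """Hypothetical PD cost ($/day): high for short stays, levelling off."""
    return a + b * np.log(los)

avg_pd = 4000.0  # invented external average per-diem (traditional approach)
for los in (2, 5, 10, 15):
    print(f"LOS={los:2d} d: PD=${pd_cost(los):7,.0f}/d, "
          f"mapped total=${pd_cost(los) * los:9,.0f}, "
          f"traditional total=${avg_pd * los:9,.0f}")
```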
Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan, at 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model than for the linear model, supporting a non-linear relationship between O3 concentration and mortality. All of the 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
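A minimal sketch of a linear-threshold ("hockey stick") fit of the kind estimated city by city; the concentration-response data and all parameter values are invented:

```python
# Linear-threshold concentration-response sketch: no excess risk below the
# threshold, a linear slope above it. Data and parameters are invented.
import numpy as np
from scipy.optimize import curve_fit

def linear_threshold(o3, threshold, slope, baseline):
    """Relative risk flat at `baseline` below threshold, linear above it."""
    return baseline + slope * np.clip(o3 - threshold, 0.0, None)

rng = np.random.default_rng(0)
o3 = np.linspace(5, 60, 80)                      # ppb
rr = linear_threshold(o3, 30.0, 0.004, 1.0) + rng.normal(0, 0.005, o3.size)

popt, _ = curve_fit(linear_threshold, o3, rr, p0=(20.0, 0.002, 1.0))
print(f"estimated threshold ~ {popt[0]:.1f} ppb")
```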
Knapp, M; Seuchter, S A; Baur, M P
1994-01-01
It is believed that the main advantage of affected sib-pair tests is that their application requires no information about the underlying genetic mechanism of the disease. However, here it is proved that the mean test, which can be considered the most prominent of the affected sib-pair tests, is equivalent to lod score analysis for an assumed recessive mode of inheritance, irrespective of the true mode of the disease. Further relationships of certain sib-pair tests and lod score analysis under specific assumed genetic modes are investigated.
ERIC Educational Resources Information Center
Ayalon, Michal; Watson, Anne; Lerman, Steve
2015-01-01
This study investigates students' ways of attending to linear sequential data in two tasks, and conjectures possible relationships between those ways and elements of the task design. Drawing on the substantial literature about such situations, we focus for this paper on linear rate of change, and on covariation and correspondence approaches to…
ERIC Educational Resources Information Center
Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.
2009-01-01
This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…
ERIC Educational Resources Information Center
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
1988-11-01
rates. The Hammett equation, also called the Linear Free Energy Relationship (LFER) because of the relationship of the Gibbs free energy to the... equations for numerous biological and physicochemical properties. Linear Solvation Energy Relationships (LSER), a subset of QSAR, have been used by... originates from thermodynamics, where Hammett recognized the relationship of structure to the Gibbs free energy, and ultimately to equilibria and reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan P.; Swiler, Laura P.; Trott, Christian R.
2015-03-15
Here, we present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
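A minimal sketch of the linear fitting step, with random stand-ins for the per-configuration bispectrum sums and QM energies:

```python
# SNAP-style linear fit sketch: configuration energy assumed linear in its
# summed bispectrum components, coefficients found by weighted least squares.
# The descriptors, energies, and weights below are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_configs, n_bispectrum = 200, 30
B = rng.normal(size=(n_configs, n_bispectrum))   # per-config bispectrum sums
beta_true = rng.normal(size=n_bispectrum)
E_qm = B @ beta_true + rng.normal(0, 0.01, n_configs)  # "QM" training energies
w = np.full(n_configs, 1.0)                      # per-config fit weights

# Weighted least squares: minimize sum_i w_i * (E_i - B_i . beta)^2.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * B, sw * E_qm, rcond=None)
print("max coefficient error:", np.abs(beta - beta_true).max())
```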
Cao, Xuan; van Oosten, Anne; Shenoy, Vivek B.; Janmey, Paul A.; Wells, Rebecca G.
2016-01-01
Tissues including liver stiffen and acquire more extracellular matrix with fibrosis. The relationship between matrix content and stiffness, however, is non-linear, and stiffness is only one component of tissue mechanics. The mechanical response of tissues such as liver to physiological stresses is not well described, and models of tissue mechanics are limited. To better understand the mechanics of the normal and fibrotic rat liver, we carried out a series of studies using parallel plate rheometry, measuring the response to compressive, extensional, and shear strains. We found that the shear storage and loss moduli G' and G'' and the apparent Young's moduli measured by uniaxial strain orthogonal to the shear direction increased markedly with both progressive fibrosis and increasing compression, that livers soften under shear strain, and that significant increases in shear modulus with compressional stress occurred within a range consistent with increased sinusoidal pressures in liver disease. Proteoglycan content and integrin-matrix interactions were significant determinants of liver mechanics, particularly in compression. We propose a new non-linear constitutive model of the liver. A key feature of this model is that, while it assumes overall liver incompressibility, it takes into account water flow and solid phase compressibility. In sum, we report a detailed study of non-linear liver mechanics under physiological strains in the normal state, early fibrosis, and late fibrosis. We propose a constitutive model that captures compression stiffening, tension softening, and shear softening, and can be understood in terms of the cellular and matrix components of the liver. PMID:26735954
ERIC Educational Resources Information Center
Wagner, Matthias Oliver; Bos, Klaus; Jascenoka, Julia; Jekauc, Darko; Petermann, Franz
2012-01-01
The aim of this study was to gain insights into the relationship between developmental coordination disorder, peer problems, and behavioral problems in school-aged children where both internalizing and externalizing behavioral problems were considered. We assumed that the relationship between developmental coordination disorder and…
Breastfeeding and the Mother-Infant Relationship--A Review
ERIC Educational Resources Information Center
Jansen, Jarno; de Weerth, Carolina; Riksen-Walraven, J. Marianne
2008-01-01
A positive effect of breastfeeding on the mother-infant relationship is often assumed in the scientific literature, but this has not been systematically reviewed. This review aims to clarify the role of breastfeeding in the mother-infant relationship, which is conceptualized as the maternal bond toward the infant and infant attachment toward the…
NASA Astrophysics Data System (ADS)
Wilkie, Karina J.; Ayalon, Michal
2018-02-01
A foundational component of developing algebraic thinking for meaningful calculus learning is the idea of "function" that focuses on the relationship between varying quantities. Students have demonstrated widespread difficulties in learning calculus, particularly interpreting and modeling dynamic events, when they have a poor understanding of relationships between variables. Yet, there are differing views on how to develop students' functional thinking over time. In the Australian curriculum context, linear relationships are introduced to lower secondary students with content that reflects a hybrid of traditional and reform algebra pedagogy. This article discusses an investigation into Australian secondary students' understanding of linear functional relationships from Years 7 to 12 (approximately 12 to 18 years old; n = 215) in their approaches to three tasks (finding rate of change, pattern generalisation and interpretation of gradient) involving four different representations (table, geometric growing pattern, equation and graph). From the findings, it appears that these students' knowledge of linear functions remains context-specific rather than becoming connected over time.
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates to be known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
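A toy sketch of the robust idea on a simplified model, using a Huber loss so that heavy-tailed errors have bounded influence; the profiling step for the nonparametric component is omitted, and all data are simulated:

```python
# Robust estimation sketch: Huber-loss fit of the linear coefficients of a
# toy model y = X.beta + e with heavy-tailed errors. Simplified illustration,
# not the authors' full partially-linear procedure.
import numpy as np
from scipy.optimize import minimize

def huber(r, c=1.345):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

rng = np.random.default_rng(2)
n, p = 300, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.standard_t(df=2, size=n)  # heavy-tailed errors

beta_hat = minimize(lambda b: huber(y - X @ b).sum(), np.zeros(p)).x
print("Huber estimate:", np.round(beta_hat, 2))
```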
Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities
NASA Astrophysics Data System (ADS)
Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred
2012-07-01
The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities may exist in spacecraft structures, and their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structural behaviour. No clear rules exist for dealing with major structural non-linearities; they are handled outside the process by individual analysis and margin policy, and by post-test analyses to justify the CLA coverage. Non-linearities primarily affect the current spacecraft development and verification process in two respects. First, prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for properly linearizing a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed, so it is difficult to assess whether CLA results will cover actual flight levels. Second, management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed to be flight representative. If internal non-linearities are present in the tested satellite, it may be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing excessive levels. This paper presents the results of a test campaign performed in the frame of an ESA TRP study [1]. A breadboard including typical non-linearities was designed, manufactured and tested through a typical spacecraft dynamic test campaign. The study demonstrated the capability to perform non-linear dynamic test predictions on a flight-representative spacecraft, the good correlation of test results with Finite Element Model (FEM) predictions, and the possibility of identifying modal behaviour and characterizing non-linearities from test results. As a synthesis of this study, overall guidelines have been derived for the mechanical verification process to improve the level of expertise on tests involving spacecraft with non-linearities.
An effective description of dark matter and dark energy in the mildly non-linear regime
Lewandowski, Matthew; Maleknejad, Azadeh; Senatore, Leonardo
2017-05-18
In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though for this there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which assumes that dark energy is the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumber that is possible. Furthermore, the Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve to cubic order the resulting non-linear equations and construct the one-loop power spectrum of the total density contrast.
Helgesson, P; Sjöstrand, H
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defective, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
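A minimal sketch of one ingredient described above, assuming the prior enters the least-squares fit as extra residual terms (p − p0)/σ_prior so that a Levenberg-Marquardt-type solver handles it directly; this is a simplified two-peak version of the histogram example, and the peak shapes, priors, and data are all invented:

```python
# Generalized least squares with a prior: the prior acts as extra "data"
# residuals appended to the measurement residuals. All values are invented.
import numpy as np
from scipy.optimize import least_squares

def peaks(x, p):
    """Two overlapping Gaussian peaks with a shared width."""
    a1, m1, a2, m2, s = p
    return (a1 * np.exp(-0.5 * ((x - m1) / s) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s) ** 2))

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 120)
p_true = np.array([100.0, 4.0, 60.0, 6.0, 0.8])
y = rng.poisson(peaks(x, p_true)).astype(float)
sigma_y = np.sqrt(np.maximum(y, 1.0))            # Poisson counting uncertainty

p0 = np.array([80.0, 3.5, 80.0, 6.5, 1.0])       # assumed prior means
sigma_p = np.array([50.0, 1.0, 50.0, 1.0, 0.5])  # assumed prior widths

def residuals(p):
    return np.concatenate([(y - peaks(x, p)) / sigma_y, (p - p0) / sigma_p])

fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
print("fitted parameters:", np.round(fit.x, 2))
```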
Validity of the Taylor hypothesis for linear kinetic waves in the weakly collisional solar wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howes, G. G.; Klein, K. G.; TenBarge, J. M.
The interpretation of single-point spacecraft measurements of solar wind turbulence is complicated by the fact that the measurements are made in a frame of reference in relative motion with respect to the turbulent plasma. The Taylor hypothesis—that temporal fluctuations measured by a stationary probe in a rapidly flowing fluid are dominated by the advection of spatial structures in the fluid rest frame—is often assumed to simplify the analysis. But measurements of turbulence in upcoming missions, such as Solar Probe Plus, threaten to violate the Taylor hypothesis, either due to slow flow of the plasma with respect to the spacecraft or to the dispersive nature of the plasma fluctuations at small scales. Assuming that the frequency of the turbulent fluctuations is characterized by the frequency of the linear waves supported by the plasma, we evaluate the validity of the Taylor hypothesis for the linear kinetic wave modes in the weakly collisional solar wind. The analysis predicts that a dissipation range of solar wind turbulence supported by whistler waves is likely to violate the Taylor hypothesis, while one supported by kinetic Alfvén waves is not.
Portfolio optimization by using linear programming models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. The portfolio optimization problem is formulated as a linear programming model, and the optimum solution is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than that obtained by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly with linear programming models.
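A toy sketch of the approach, assuming invented returns data and simple GA operators (the paper's exact encoding and linear programming formulation are not reproduced):

```python
# Toy GA for long-only portfolio weights: maximize mean return subject to an
# absolute-deviation risk tolerance. Data and GA settings are invented.
import numpy as np

rng = np.random.default_rng(4)
R = rng.normal(0.001, 0.02, size=(250, 5))       # daily returns, 5 stocks

def fitness(w, tol=0.015):
    port = R @ w
    risk = np.mean(np.abs(port - port.mean()))   # absolute-deviation risk
    return port.mean() if risk <= tol else -np.inf  # investor's tolerance

def normalize(w):
    w = np.abs(w)
    return w / w.sum()                           # long-only, weights sum to 1

pop = np.array([normalize(rng.random(5)) for _ in range(60)])
for _ in range(200):                             # generations
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-30:]]      # selection
    pop = np.array([
        normalize((parents[rng.integers(30)] + parents[rng.integers(30)]) / 2
                  + rng.normal(0, 0.02, 5))      # crossover + mutation
        for _ in range(60)
    ])

best = max(pop, key=fitness)
print("best weights:", np.round(best, 3))
```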
The relationship between treatment access and spending in a managed behavioral health organization.
Cuffel, B J; Regier, D
2001-07-01
This study replicated an earlier study that showed a linear relationship between level of treatment access and behavioral health spending. The study reported here examined whether this relationship varies by important characteristics of behavioral health plans. Access rates and total spending over a five- to seven-year period were computed for 30 behavioral health plans. Regression analysis was used to estimate the relationship between access and spending and to examine whether it varied with the characteristics of benefit plans. A linear relationship was found between level of treatment access and behavioral health spending. However, the relationship closely paralleled that found in the earlier study only for benefit plans with an employee assistance program linked to the managed behavioral health organization and for plans that do not allow the use of out-of-network providers. The results of this study replicate those of the earlier study in showing a linear relationship between access and spending, but they suggest that the magnitude of this relationship may vary according to key plan characteristics.
Benzene metabolite levels in blood and bone marrow of B6C3F{sub 1} mice after low-level exposure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bechtold, W.E.; Strunk, M.R.; Thornton-Manning, J.R.
1995-12-01
Studies at the Inhalation Toxicology Research Institute (ITRI) have explored the species-specific uptake and metabolism of benzene. Results have shown that metabolism is dependent on both dose and route of administration. Of particular interest were shifts in the major metabolic pathways as a function of exposure concentration. In these studies, B6C3F{sub 1} mice were exposed to increasing levels of benzene by either gavage or inhalation. As benzene internal dose increased, the relative amounts of muconic acid and hydroquinone decreased. In contrast, the relative amount of catechol increased with increasing exposure. These results show that the relative levels of toxic metabolites are a function of exposure level. Based on these results and assuming a linear relationship between exposure concentration and levels of bone marrow metabolites, it would be difficult to detect an elevation of any phenolic metabolites above background after occupational exposures to the OSHA Permissible Exposure Limit of 1 ppm benzene.
Tang, Kam W; Flury, Sabine; Grossart, Hans-Peter; McGinnis, Daniel F
2017-10-01
Hypolimnetic oxygen demand in lakes is often assumed to be driven mainly by sediment microbial processes, while the role of Chaoborus larvae, which are prevalent in eutrophic lakes with hypoxic to anoxic bottoms, has been overlooked. We experimentally measured the respiration rates of C. flavicans at different temperatures, yielding a Q10 of 1.44-1.71 and a respiratory quotient of 0.84-0.98. Applying the experimental data in a system analytical approach, we showed that migrating Chaoborus larvae can significantly add to the water column and sediment oxygen demand, and contribute to the observed linear relationship between water column respiration and depth. The estimated phosphorus excretion by Chaoborus in sediment is comparable in magnitude to the required phosphorus loading for eutrophication. Migrating Chaoborus larvae thereby essentially trap nutrients between the water column and the sediment, and this continuous internal loading of nutrients would delay lake remediation even when external inputs are stopped.
NASA Astrophysics Data System (ADS)
Colón, D. P.; Bindeman, I. N.; Gerya, T. V.
2018-05-01
Geophysical imaging of the Yellowstone supervolcano shows a broad zone of partial melt interrupted by an amagmatic gap at depths of 15-20 km. We reproduce this structure through a series of regional-scale magmatic-thermomechanical forward models which assume that magmatic dikes stall at rheologic discontinuities in the crust. We find that basaltic magmas accumulate at the Moho and at the brittle-ductile transition, which naturally forms at depths of 5-10 km. This leads to the development of a 10- to 15-km thick midcrustal sill complex with a top at a depth of approximately 10 km, consistent with geophysical observations of the pre-Yellowstone hot spot track. We show a linear relationship between melting rates in the mantle and rhyolite eruption rates along the hot spot track. Finally, melt production rates from our models suggest that the Yellowstone plume is 175°C hotter than the surrounding mantle and that the thickness of the overlying lithosphere is 80 km.
Heart Activity and Autistic Behavior in Infants and Toddlers with Fragile X Syndrome
Roberts, Jane E.; Tonnsen, Bridgette; Robinson, Ashley; Shinkareva, Svetlana V.
2014-01-01
The present study contrasted physiological arousal in infants and toddlers with fragile X syndrome with that of typically developing control participants and examined whether physiological measures early in development predict autism severity later in development in fragile X syndrome. Thirty-one males with fragile X syndrome (ages 8-40 months) and 25 age-matched control participants were included. The group with fragile X syndrome showed shorter interbeat intervals (IBIs), lower vagal tone (VT), and less modulation of IBI. The data suggested a nonlinear relationship between IBI and autistic behavior, whereas a linear relationship between VT and autistic behavior emerged. These findings suggest that atypical physiological arousal emerges within the first year and predicts the severity of autistic behavior in fragile X syndrome. These relationships are complex and dynamic, likely reflecting endogenous factors assumed to reflect atypical brain function secondary to reduced fragile X mental retardation protein. This research has important implications for the early identification and treatment of autistic behaviors in young children with fragile X syndrome. PMID:22515825
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
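For reference, a small sketch of the concordance index used above to assess prediction performance, in a plain O(n²) form with invented data:

```python
# Concordance index (c-index): the fraction of usable subject pairs whose
# predicted risks are ordered consistently with observed event times.
import numpy as np

def concordance_index(time, event, risk):
    num, den = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is usable if subject i had the event before time[j]
            if event[i] == 1 and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

rng = np.random.default_rng(5)
time = rng.exponential(10, 100)
event = rng.integers(0, 2, 100)                  # 1 = event, 0 = censored
risk = -time + rng.normal(0, 2, 100)             # noisy but informative scores
print("c-index:", round(concordance_index(time, event, risk), 3))
```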
Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra
NASA Astrophysics Data System (ADS)
Partov, Doncho; Kantchev, Vesselin
2011-09-01
The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and constitutive relationships, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann—Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant, for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time t, two independent Volterra integral equations of the second kind have been derived. A numerical method based on linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 and ACI 209R-92 models. The elastic modulus of concrete E_c(t) is assumed to be constant in time t. The results obtained from both models are compared.
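A generic sketch of the numerical idea for a Volterra integral equation of the second kind, y(t) = f(t) + ∫₀ᵗ K(t,s) y(s) ds, stepped forward with a trapezoidal (piecewise-linear) treatment of the kernel integral; the kernel and forcing are simple stand-ins, not the creep law itself:

```python
# Trapezoidal forward solver for y(t) = f(t) + integral_0^t K(t,s) y(s) ds.
# Checked against the exact solution y = 1 + t for f = 1, K = exp(-(t-s)).
import numpy as np

def solve_volterra(f, K, t):
    h = t[1] - t[0]
    y = np.empty_like(t)
    y[0] = f(t[0])
    for n in range(1, len(t)):
        # trapezoid over the known values y[0..n-1]; solve for y[n]
        s = 0.5 * K(t[n], t[0]) * y[0]
        s += sum(K(t[n], t[j]) * y[j] for j in range(1, n))
        y[n] = (f(t[n]) + h * s) / (1.0 - 0.5 * h * K(t[n], t[n]))
    return y

t = np.linspace(0.0, 2.0, 201)
y = solve_volterra(lambda t: 1.0, lambda t, s: np.exp(-(t - s)), t)
print("max error vs exact 1+t:", np.abs(y - (1.0 + t)).max())
```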
Multivariate Time Series Decomposition into Oscillation Components.
Matsuda, Takeru; Komaki, Fumiyasu
2017-08-01
Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop Gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
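A minimal sketch of the oscillator building block, assuming each oscillator is a damped two-dimensional rotation driven by noise and the observation superposes the first coordinate of every oscillator; the empirical Bayes estimation step is omitted:

```python
# Oscillator state-space sketch: each oscillator state evolves by a damped
# rotation plus noise; the observation sums one coordinate per oscillator.
# Frequencies, dampings, and noise levels are assumed values.
import numpy as np

def oscillator_block(f, a, dt=1.0):
    """2x2 transition matrix: damped rotation by 2*pi*f*dt per step."""
    th = 2 * np.pi * f * dt
    return a * np.array([[np.cos(th), -np.sin(th)],
                         [np.sin(th),  np.cos(th)]])

rng = np.random.default_rng(6)
blocks = [oscillator_block(0.05, 0.99), oscillator_block(0.11, 0.97)]
x = np.zeros(4)                                  # two 2-D oscillator states
ys = []
for _ in range(500):
    for k, Fk in enumerate(blocks):
        x[2*k:2*k+2] = Fk @ x[2*k:2*k+2] + rng.normal(0, 0.1, 2)
    ys.append(x[0] + x[2] + rng.normal(0, 0.05))  # superposed observation
print("simulated", len(ys), "samples from 2 oscillators")
```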
NASA Technical Reports Server (NTRS)
Ng, C. F.
1988-01-01
Static postbuckling and nonlinear dynamic analysis of plates are usually accomplished by multimode analyses, although the methods are complicated and do not give straightforward understanding of the nonlinear behavior. Assuming single-mode transverse displacement, a simple formula is derived for the transverse load displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the static postbuckling displacement and nonlinear dynamic responses of postbuckled plates under sinusoidal or random excitation. Regions with softening and hardening spring behavior are identified. Also, the highly nonlinear motion of snap-through and its effects on the overall dynamic response can be easily interpreted using the single-mode formula. Theoretical results are compared with experimental results obtained using a buckled aluminum panel, using discrete frequency and broadband point excitation. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are found.
Schmidt-Wellenburg, Carola A; Biebach, Herbert; Daan, Serge; Visser, G Henk
2007-04-01
Many bird species steeply increase their body mass prior to migration. These fuel stores are necessary for long flights and to overcome ecological barriers. The elevated body mass is generally thought to cause higher flight costs. The relationship between mass and costs has been investigated mostly by interspecific comparison and by aerodynamic modelling. Here, we directly measured the energy expenditure of Barn Swallows (Hirundo rustica) flying unrestrained and repeatedly for several hours in a wind tunnel with natural variations in body mass. Energy expenditure during flight (e_f, in W) was found to increase with body mass (m, in g) following the equation e_f = 0.38 × m^0.58. The scaling exponent (0.58) is smaller than assumed in aerodynamic calculations and than observed in most interspecific allometric comparisons. Wing beat frequency (WBF, in Hz) also scales with body mass (WBF = 2.4 × m^0.38), but with a smaller exponent. Hence there is no linear relationship between e_f and WBF. We propose that spontaneous changes in body mass during endurance flights are accompanied by physiological changes (such as enhanced oxygen and nutrient supply of the muscles) that are not taken into consideration in standard aerodynamic calculations, and also do not appear in interspecific comparison.
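The fitted allometric relations can be evaluated directly (the masses below are plausible assumed values for Barn Swallows):

```python
# Direct evaluation of the reported allometric fits: e_f = 0.38 * m^0.58 (W)
# and WBF = 2.4 * m^0.38 (Hz), with body mass m in grams.
for m in (18.0, 20.0, 22.0):                     # assumed plausible masses, g
    e_f = 0.38 * m ** 0.58
    wbf = 2.4 * m ** 0.38
    print(f"m={m:.0f} g: flight power {e_f:.2f} W, wing beat {wbf:.1f} Hz")
```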
Measuring the viscosity of whole bovine lens using a fiber optic oxygen sensing system
Thao, Mai T.; Perez, Daniel; Dillon, James
2014-01-01
Purpose To obtain a better understanding of oxygen and nutrient transport within the lens, the viscosity of whole lenses was investigated using a fiber optic oxygen sensor (optode). The diffusion coefficient of oxygen was calculated using the Stokes-Einstein equation at the slip boundary condition. Methods The optode was used to measure the oxygen decay signal in samples consisting of different glycerol/water solutions with known viscosities. The oxygen decay signal was fitted to a double exponential decay rate equation, and the lifetimes (tau) were calculated. It was determined that the tau-viscosity relationship is linear, which served as the standard curve. The same procedure was applied to fresh bovine lenses, and the unknown viscosity of the bovine lens was calculated from the tau-viscosity relationship. Results The average viscosity in a whole bovine lens was determined to be 5.74±0.88 cP by our method. Using the Stokes-Einstein equation at the slip boundary condition, the diffusion coefficient for oxygen was calculated to be 8.2 × 10⁻⁶ cm²/s. Conclusions These data indicate a higher resistance to flow for oxygen and nutrients in the lens than what is currently assumed in the literature. Overall, this study allows a better understanding of oxygen transport within the lens. PMID:24505211
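A sketch of the Stokes-Einstein calculation at the slip boundary condition, D = k_B·T/(4πηr), using the measured viscosity; the temperature and the effective O2 radius are assumed values chosen for illustration:

```python
# Stokes-Einstein at the slip boundary condition: D = kT / (4*pi*eta*r),
# i.e. 4*pi rather than the 6*pi of the stick condition.
import math

k_B = 1.380649e-23   # J/K
T = 308.0            # K, assumed near-physiological temperature
eta = 5.74e-3        # Pa*s (5.74 cP, measured whole-lens viscosity)
r = 7.3e-11          # m, assumed effective O2 molecular radius

D = k_B * T / (4 * math.pi * eta * r)
print(f"D = {D * 1e4:.2e} cm^2/s")   # on the order of the reported 8.2e-6
```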
NASA Astrophysics Data System (ADS)
Noh, S. J.; Lee, D. Y.
2017-12-01
In the classic theory of wave-particle resonant interaction, the anisotropy parameter of the proton distribution is considered an important factor in determining instabilities such as the ion cyclotron instability. The particle distribution function is often assumed to be a bi-Maxwellian distribution, for which the anisotropy parameter simplifies to the temperature anisotropy (T⊥/T∥ − 1), independent of the specific energy of the particles. In this paper, we studied the proton anisotropy related to EMIC waves using Van Allen Probes observations in the inner magnetosphere. First, we found that the real velocity distribution of protons is usually not described by a simple bi-Maxwellian distribution. We then calculated the anisotropy parameter using the exact formula defined by Kennel and Petschek [1966] and investigated the corresponding linear instability criterion. We found that, for the majority of the EMIC wave events, the threshold anisotropy condition for the proton cyclotron instability is satisfied in the expected range of resonant energy. We further determined the parallel plasma beta and its inverse relationship with the anisotropy parameter. The inverse relationship exists both during EMIC wave times and non-EMIC wave times, but with different slopes. Based on this result, we demonstrate that the parallel plasma beta can be a critical factor that determines the occurrence of EMIC waves.
Woolley, Thomas E; Belmonte-Beitia, Juan; Calvo, Gabriel F; Hopewell, John W; Gaffney, Eamonn A; Jones, Bleddyn
2018-06-01
To estimate, from experimental data, the retreatment radiation 'tolerances' of the spinal cord at different times after initial treatment. A model was developed to show the relationship between the biological effective doses (BEDs) for two separate courses of treatment, with the BED of each course being expressed as a percentage of the designated 'retreatment tolerance' BED value, denoted [Formula: see text] and [Formula: see text]. The primate data of Ang et al. (2001) were used to determine the fitted parameters. However, based on rodent data, recovery was assumed to commence 70 days after the first course was complete, and with a non-linear relationship to the magnitude of the initial BED (BED_init). The model, taking into account the above processes, provides estimates of the retreatment tolerance dose at different times. Extrapolations from the experimental data can provide conservative estimates for the clinic, with a lower acceptable myelopathy incidence. Care must be taken to convert the predicted [Formula: see text] value into a formal BED value and then a practical dose fractionation schedule. Used with caution, the proposed model allows estimation of retreatment doses for elapsed times ranging from 70 days up to three years after the initial course of treatment.
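The underlying BED arithmetic is standard, BED = n·d·(1 + d/(α/β)); the schedules and the α/β of 2 Gy (a typical spinal cord value) below are illustrative assumptions:

```python
# Standard biologically effective dose computation for fractionated courses.
# Schedules and the alpha/beta ratio are illustrative assumptions.
def bed(n, d, ab=2.0):
    """BED in Gy for n fractions of d Gy, tissue alpha/beta ratio ab (Gy)."""
    return n * d * (1 + d / ab)

initial = bed(25, 2.0)   # assumed first course: 25 x 2 Gy
retreat = bed(10, 3.0)   # assumed second course: 10 x 3 Gy
print(f"BED_init = {initial:.0f} Gy, BED_retreat = {retreat:.0f} Gy")
```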
Fuhrmann, Mark; Lasat, Mitch M; Ebbs, Stephen D; Kochian, Leon V; Cornish, Jay
2002-01-01
A field test was conducted to determine the ability of three plant species to extract 137Cs and 90Sr from contaminated soil. Redroot pigweed (Amaranthus retroflexus L.), Indian mustard [Brassica juncea (L.) Czern.], and tepary bean (Phaseolus acutifolius A. Gray) were planted in a series of spatially randomized cells in soil that was contaminated in the 1950s and 1960s. We examined the potential for phytoextraction of 90Sr and 137Cs by these three species. Concentration ratios (CR) for 137Cs for redroot pigweed, Indian mustard, and tepary bean were 2.58, 0.46, and 0.17, respectively. For 90Sr they were substantially higher: 6.5, 8.2, and 15.2, respectively. The greatest accumulation of both radionuclides was obtained with redroot pigweed, even though its CR for 90Sr was the lowest, because of its relatively large biomass. There was a linear relationship between the 137Cs concentration in plants and its concentration in soil only for redroot pigweed. Uptake of 90Sr exhibits no relationship to 90Sr concentrations in the soil. Estimates of time required for removal of 50% of the two contaminants, assuming two crops of redroot pigweed per year, are 7 yr for 90Sr and 18 yr for 137Cs.
Establishing the diffuse correlation spectroscopy signal relationship with blood flow.
Boas, David A; Sakadžić, Sava; Selb, Juliette; Farzam, Parisa; Franceschini, Maria Angela; Carp, Stefan A
2016-07-01
Diffuse correlation spectroscopy (DCS) measurements of blood flow rely on the sensitivity of the temporal autocorrelation function of diffusively scattered light to red blood cell (RBC) mean square displacement (MSD). For RBCs flowing with convective velocity v, the autocorrelation is expected to decay exponentially with v²τ², where τ is the delay time. RBCs also experience shear-induced diffusion with a diffusion coefficient D and an MSD of 6Dτ. Surprisingly, experimental data primarily reflect diffusive behavior. To provide quantitative estimates of the relative contributions of convective and diffusive movements, we performed Monte Carlo simulations of light scattering through tissue of varying vessel densities. We assumed laminar vessel flow profiles and accounted for shear-induced diffusion effects. In agreement with experimental data, we found that diffusive motion dominates the correlation decay for typical DCS measurement parameters. Furthermore, our model offers a quantitative relationship between the RBC diffusion coefficient and absolute tissue blood flow. We thus offer, for the first time, theoretical support for the empirically accepted ability of the DCS blood flow index (BFi) to quantify tissue perfusion. We find BFi to be linearly proportional to blood flow, but with a proportionality modulated by the hemoglobin concentration and the average blood vessel diameter.
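A sketch contrasting the two MSD models inside a standard correlation-decay form g1(τ) ≈ exp(−(1/3)·k0²·Y·MSD(τ)), where Y is an assumed effective number of dynamic scattering events; all parameter values are illustrative:

```python
# Convective vs. diffusive RBC motion in a simple correlation-decay form.
# All parameter values are illustrative assumptions.
import numpy as np

k0 = 2 * np.pi * 1.33 / 785e-9   # wavenumber in tissue at 785 nm
Y = 1.0                          # assumed effective dynamic scattering count
tau = np.logspace(-7, -3, 5)     # delay times, s

v = 1e-3                         # m/s, assumed convective RBC speed
D = 1e-12                        # m^2/s, assumed shear-induced diffusion coeff

g1_conv = np.exp(-(1.0 / 3.0) * k0**2 * Y * (v * tau) ** 2)   # MSD = v^2 tau^2
g1_diff = np.exp(-(1.0 / 3.0) * k0**2 * Y * 6 * D * tau)      # MSD = 6 D tau
for t, gc, gd in zip(tau, g1_conv, g1_diff):
    print(f"tau={t:.1e} s  convective g1={gc:.3f}  diffusive g1={gd:.3f}")
```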
Analysis of Fluid Gauge Sensor for Zero or Microgravity Conditions using Finite Element Method
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Doiron, Terence A.
2007-01-01
In this paper the Finite Element Method (FEM) is presented for mass/volume gauging of a fluid in a tank subjected to zero or microgravity conditions. In this approach, mutual capacitances between electrodes embedded inside the tank are first measured. Assuming the medium properties, the mutual capacitances are also estimated using the FEM approach. Using a suitable non-linear optimization, the assumed properties are updated by minimizing the mean square error between the estimated and measured capacitance values. Numerical results are presented to validate the present approach.
Application of genetic algorithms in nonlinear heat conduction problems.
Kadri, Muhammad Bilal; Khan, Waqar A
2014-01-01
Genetic algorithms are employed to optimize the dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis, and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature, while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that the GA gives the minimum dimensionless temperature in each selected geometry.
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
Modeling creep behavior of fiber composites
NASA Technical Reports Server (NTRS)
Chen, J. L.; Sun, C. T.
1988-01-01
A micromechanical model for the creep behavior of fiber composites is developed based on a typical cell consisting of a fiber and the surrounding matrix. The fiber is assumed to be linearly elastic and the matrix nonlinearly viscous. The creep strain rate in the matrix is assumed to be a function of stress. The nominal stress-strain relations are derived in the form of differential equations which are solved numerically for off-axis specimens under uniaxial loading. A potential function and the associated effective stress and effective creep strain rates are introduced to simplify the orthotropic relations.
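A toy one-dimensional rule-of-mixtures sketch of the cell model, assuming an elastic fiber and a power-law creeping matrix held at equal strain under constant applied stress; the creep law and all material constants are invented:

```python
# Equal-strain (rule-of-mixtures) creep sketch: elastic fiber in parallel
# with a matrix creeping as deps_c/dt = A * sigma_m^n under constant applied
# stress. All material constants are assumed values, not from the paper.
Ef, Em = 230e9, 3.5e9     # Pa, fiber and matrix elastic moduli (assumed)
Vf = 0.6                  # fiber volume fraction (assumed)
A, n = 1e-34, 4.0         # matrix creep law constants (assumed)
sigma = 100e6             # Pa, constant applied axial stress

stiff = Vf * Ef + (1 - Vf) * Em
eps_c = 0.0               # matrix creep strain
dt, steps = 10.0, 10_000  # 1e5 s total
for _ in range(steps):
    eps = (sigma + (1 - Vf) * Em * eps_c) / stiff  # equal-strain equilibrium
    sigma_m = Em * (eps - eps_c)                   # matrix stress relaxes
    eps_c += A * sigma_m ** n * dt                 # matrix creep increment
print(f"elastic strain {sigma / stiff:.3e}, strain after 1e5 s: {eps:.3e}")
```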
Safety models incorporating graph theory based transit indicators.
Quintero, Liliana; Sayed, Tarek; Wahba, Mohamed M
2013-01-01
There is a considerable need for tools to enable the evaluation of the safety of transit networks at the planning stage. One interesting approach for the planning of public transportation systems is the study of networks. Network techniques involve the analysis of systems by viewing them as a graph composed of a set of vertices (nodes) and edges (links). Once the transport system is visualized as a graph, various network properties can be evaluated based on the relationships between the network elements. Several indicators can be calculated including connectivity, coverage, directness and complexity, among others. The main objective of this study is to investigate the relationship between network-based transit indicators and safety. The study develops macro-level collision prediction models that explicitly incorporate transit physical and operational elements and transit network indicators as explanatory variables. Several macro-level (zonal) collision prediction models were developed using a generalized linear regression technique, assuming a negative binomial error structure. The models were grouped into four main themes: transit infrastructure, transit network topology, transit route design, and transit performance and operations. The safety models showed that collisions were significantly associated with transit network properties such as: connectivity, coverage, overlapping degree and the Local Index of Transit Availability. As well, the models showed a significant relationship between collisions and some transit physical and operational attributes such as the number of routes, frequency of routes, bus density, length of bus and 3+ priority lanes.
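A negative binomial GLM of the kind described can be fit in a few lines. The sketch below uses invented zonal data and statsmodels, which is an assumed tool choice rather than the software used in the study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical zonal data: collisions vs. transit network indicators
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "connectivity": rng.uniform(0.2, 0.9, 200),
    "coverage": rng.uniform(0.1, 1.0, 200),
    "bus_density": rng.uniform(1.0, 10.0, 200),
})
mu = np.exp(0.5 + 1.2 * df["connectivity"] + 0.3 * df["bus_density"])
df["collisions"] = rng.poisson(mu)  # stand-in count outcome

# Generalized linear regression with a negative binomial error structure
X = sm.add_constant(df[["connectivity", "coverage", "bus_density"]])
model = sm.GLM(df["collisions"], X,
               family=sm.families.NegativeBinomial(alpha=1.0))
print(model.fit().summary())
```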
Pollitz, F.F.
2003-01-01
Instantaneous velocity gradients within the continental lithosphere are often related to the tectonic driving forces. This relationship is direct if the forces are secular, as for the case of loading of a locked section of a subduction interface by the downgoing plate. If the forces are static, as for the case of lateral variations in gravitational potential energy, then velocity gradients can be produced only if the lithosphere has, on average, zero strength. The static force model may be related to the long-term velocity field but not the instantaneous velocity field (typically measured geodetically over a period of several years) because over short time intervals the upper lithosphere behaves elastically. In order to describe both the short- and long-term behaviour of an (elastic) lithosphere-(viscoelastic) asthenosphere system in a self-consistent manner, I construct a deformation model termed the expected interseismic velocity (EIV) model. Assuming that the lithosphere is populated with faults that rupture continually, each with a definite mean recurrence time, and that the Earth is well approximated as a linear elastic-viscoelastic coupled system, I derive a simple relationship between the instantaneous velocity field and the average rate of moment release in the lithosphere. Examples with synthetic fault networks demonstrate that velocity gradients in actively deforming regions may to a large extent be the product of compounded viscoelastic relaxation from past earthquakes on hundreds of faults distributed over large (≥10⁶ km²) areas.
de la Fuente, Jesús; Fernández-Cabezas, María; Cambil, Matilde; Vera, Manuel M.; González-Torres, Maria Carmen; Artuch-Garde, Raquel
2017-01-01
The aim of the present research was to analyze the linear relationship between resilience (meta-motivational variable), learning approaches (meta-cognitive variables), strategies for coping with academic stress (meta-emotional variable) and academic achievement, necessary in the context of university academic stress. A total of 656 students from a southern university in Spain completed different questionnaires: a resiliency scale, a coping strategies scale, and a study process questionnaire. Correlations and structural modeling were used for data analyses. There was a positive and significant linear association showing a relationship of association and prediction of resilience to the deep learning approach, and problem-centered coping strategies. In a complementary way, these variables positively and significantly predicted the academic achievement of university students. These results enabled a linear relationship of association and consistent and differential prediction to be established among the variables studied. Implications for future research are set out.
Duality in non-linear programming
NASA Astrophysics Data System (ADS)
Jeyalakshmi, K.
2018-04-01
In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
Initial conditions for accurate N-body simulations of massive neutrino cosmologies
NASA Astrophysics Data System (ADS)
Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.
2017-04-01
The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrinos perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with a 0.1 per cent accuracy or below for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, rescaled power spectra for initial conditions with massive neutrinos; https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
Generating log-normal mock catalog of galaxies in redshift space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Aniket; Makiya, Ryu; Saito, Shun
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
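The core of the catalog generation, producing a log-normal density field and Poisson-sampling galaxies from it, can be sketched as follows. The grid size, box size, and mean density are arbitrary stand-ins, and a real mock would draw the Gaussian field from an input power spectrum rather than white noise:

```python
import numpy as np

rng = np.random.default_rng(42)
n, boxsize, nbar = 64, 1000.0, 1e-3  # grid, box (Mpc/h), mean density (assumed)

# Stand-in Gaussian field; a real mock would generate it from P(k)
delta_g = rng.normal(0.0, 0.5, size=(n, n, n))
delta_g -= delta_g.mean()

# Log-normal transform guarantees delta > -1 (non-negative density),
# with the variance term keeping the mean overdensity at zero
delta_ln = np.exp(delta_g - delta_g.var() / 2.0) - 1.0

# Poisson-sample galaxies cell by cell
cell_vol = (boxsize / n) ** 3
n_gal = rng.poisson(nbar * cell_vol * (1.0 + delta_ln))
print("total galaxies:", n_gal.sum())
```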
Finite-time H∞ filtering for non-linear stochastic systems
NASA Astrophysics Data System (ADS)
Hou, Mingzhe; Deng, Zongquan; Duan, Guangren
2016-09-01
This paper describes the robust H∞ filtering analysis and the synthesis of general non-linear stochastic systems with finite settling time. We assume that the system dynamic is modelled by Itô-type stochastic differential equations of which the state and the measurement are corrupted by state-dependent noises and exogenous disturbances. A sufficient condition for non-linear stochastic systems to have the finite-time H∞ performance with gain less than or equal to a prescribed positive number is established in terms of a certain Hamilton-Jacobi inequality. Based on this result, the existence of a finite-time H∞ filter is given for the general non-linear stochastic system by a second-order non-linear partial differential inequality, and the filter can be obtained by solving this inequality. The effectiveness of the obtained result is illustrated by a numerical example.
Uddling, J; Gelang-Alfredsson, J; Piikki, K; Pleijel, H
2007-01-01
Relationships between chlorophyll concentration ([chl]) and SPAD values were determined for birch, wheat, and potato. For all three species, the relationships were non-linear with an increasing slope with increasing SPAD. The relationships for birch and wheat were strong (r² ≈ 0.9), while the potato relationship was comparatively weak (r² ≈ 0.5). Birch and wheat had very similar relationships when the chlorophyll concentration was expressed per unit leaf area, but diverged when it was expressed per unit fresh weight. Furthermore, wheat showed similar SPAD-[chl] relationships for two different cultivars and during two different growing seasons. The curvilinear shape of the SPAD-[chl] relationships agreed well with the simulated effects of non-uniform chlorophyll distribution across the leaf surface and multiple scattering, causing deviations from linearity in the high and low SPAD range, respectively. The effect of non-uniformly distributed chlorophyll is likely to be more important in explaining the non-linearity in the empirical relationships, since the effect of scattering was predicted to be comparatively weak. The simulations were based on the algorithm for the calculation of SPAD-502 output values. We suggest that SPAD calibration curves should generally be parameterised as non-linear equations, and we hope that the relationships between [chl] and SPAD and the simulations of the present study can facilitate the interpretation of chlorophyll meter calibrations in relation to optical properties of leaves in future studies.
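Parameterising a SPAD calibration as a non-linear equation, as the authors suggest, is straightforward with a curve fit. The exponential form and all numbers below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical paired readings (SPAD units, chlorophyll per unit leaf area)
spad = np.array([10, 20, 30, 40, 50, 60], dtype=float)
chl = np.array([0.08, 0.14, 0.22, 0.37, 0.61, 1.00])

# A non-linear calibration with increasing slope, e.g. an exponential form
def calib(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(calib, spad, chl, p0=(0.05, 0.05))
print(f"[chl] = {a:.3f} * exp({b:.3f} * SPAD)")
```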
Simultaneous masking additivity for short Gaussian-shaped tones: spectral effects.
Laback, Bernhard; Necciari, Thibaud; Balazs, Peter; Savel, Sophie; Ystad, Sølvi
2013-08-01
Laback et al. [(2011). J. Acoust. Soc. Am. 129, 888-897] investigated the additivity of nonsimultaneous masking using short Gaussian-shaped tones as maskers and target. The present study involved Gaussian stimuli to measure the additivity of simultaneous masking for combinations of up to four spectrally separated maskers. According to most basilar membrane measurements, the maskers should be processed linearly at the characteristic frequency (CF) of the target. Assuming also compression of the target, all masker combinations should produce excess masking (exceeding linear additivity). The results for a pair of maskers flanking the target indeed showed excess masking. The amount of excess masking could be predicted by a model assuming summation of masker-evoked excitations in intensity units at the target CF and compression of the target, using compressive input/output functions derived from the nonsimultaneous masking study. However, the combinations of lower-frequency maskers showed much less excess masking than predicted by the model. This cannot easily be attributed to factors like off-frequency listening, combination tone perception, or between-masker suppression. It was better predicted, however, by assuming weighted intensity summation of masker excitations. The optimum weights for the lower-frequency maskers were smaller than one, consistent with partial masker compression as indicated by recent psychoacoustic data.
The nonlinear dynamics of a spacecraft coupled to the vibration of a contained fluid
NASA Technical Reports Server (NTRS)
Peterson, Lee D.; Crawley, Edward F.; Hansman, R. John
1988-01-01
The dynamics of a linear spacecraft mode coupled to a nonlinear low gravity slosh of a fluid in a cylindrical tank is investigated. Coupled, nonlinear equations of motion for the fluid-spacecraft dynamics are derived through an assumed mode Lagrangian method. Unlike linear fluid slosh models, this nonlinear slosh model retains two fundamental slosh modes and three secondary modes. An approximate perturbation solution of the equations of motion indicates that the nonlinear coupled system response involves fluid-spacecraft modal resonances not predicted by either a linear, or a nonlinear, uncoupled slosh analysis. Experimental results substantiate the analytical predictions.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
Linear regression analysis of survival data with missing censoring indicators.
Wang, Qihua; Dinse, Gregg E
2011-04-01
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
Axial calibration methods of piezoelectric load sharing dynamometer
NASA Astrophysics Data System (ADS)
Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu
2018-06-01
The relationship between the input and output of a load sharing dynamometer is strongly non-linear at different loading points in a plane, so precisely calibrating this non-linear relationship is essential for accurate force measurement. In this paper, firstly, calibration experiments at different loading points in a plane are performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using the BP algorithm and the ELM (Extreme Learning Machine) algorithm, respectively. Finally, the results show that the ELM calibration is better than the BP calibration at capturing the non-linear relationship between the input and output of the load sharing dynamometer at different loading points in a plane, which verifies that the ELM algorithm is feasible for solving the non-linear force measurement problem.
A happiness degree predictor using the conceptual data structure for deep learning architectures.
Pérez-Benito, Francisco Javier; Villacampa-Fernández, Patricia; Conejero, J Alberto; García-Gómez, Juan M; Navarro-Pardo, Esperanza
2017-11-13
Happiness is a universal fundamental human goal. Since the emergence of Positive Psychology, a major focus in psychological research has been to study the role of certain factors in the prediction of happiness. The conventional methodologies are based on linear relationships, such as the commonly used Multivariate Linear Regression (MLR), which may lack the capacity to represent the varied psychological features. Using Deep Neural Networks (DNN), we define a Happiness Degree Predictor (H-DP) based on the answers to five psychometric standardized questionnaires. A Data-Structure driven architecture for DNNs (D-SDNN) is proposed for defining an H-DP in which the network architecture enables the conceptual interpretation of psychological factors associated to happiness. Four different neural network configurations have been tested, varying the number of neurons and the presence or absence of bias in the hidden layers. Two metrics for evaluating the influence of conceptual dimensions have been defined and computed: one quantifies the influence weight of the conceptual dimension in absolute terms and the other pinpoints the direction (positive or negative) of the influence. A cross-sectional survey targeting the non-institutionalized adult population residing in Spain was completed by 823 cases. A total of 111 survey elements are grouped by socio-demographic data and by five psychometric scales (Brief COPE Inventory, EPQR-A, GHQ-28, MOS-SSS and SDHS) measuring several psychological factors, one acting as the outcome (SDHS) and the other four as predictors. Our D-SDNN approach provided a better outcome (MSE: 1.46·10⁻²) than MLR (MSE: 2.30·10⁻²), hence improving the predictive accuracy by 37% and allowing the conceptual structure to be simulated. We observe a better performance of Deep Neural Networks (DNN) with respect to traditional methodologies. This demonstrates their capability to capture the conceptual structure for predicting happiness degree through psychological variables assessed by standardized questionnaires. It also permits estimating the influence of each factor on the outcome without assuming a linear relationship.
Identification of Large Space Structures on Orbit
1986-09-01
requires only the eigenvector corresponding to the eigenvector derivative being calculated. However, a set of linear algebraic ... Journal of Guidance, Control and Dynamics. 204. Noble, B. and J. W. Daniel, Applied Linear Algebra, Prentice-Hall, Inc., 1977. 205. Nurre, G. S., R. S. ... 4.2.1. Linear Relationships. 4.2.2. Nonlinear Relationships. 4.3. Series Expansion Methods.
1988-06-01
... considered in this research are assumed to be linear quadratic Gaussian (LQG) based controllers. This research first uses a direct approach to ...
An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems
NASA Astrophysics Data System (ADS)
Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri
2018-01-01
The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.
Linear-sweep voltammetry of a soluble redox couple in a cylindrical electrode
NASA Technical Reports Server (NTRS)
Weidner, John W.
1991-01-01
An approach is described for using the linear sweep voltammetry (LSV) technique to study the kinetics of flooded porous electrodes by treating a porous electrode as a collection of identical, noninterconnected cylindrical pores that are filled with electrolyte. This assumption makes it possible to study the behavior of this ideal electrode as that of a single pore. Alternatively, for an electrode of a given pore-size distribution, it is possible to predict the performance of different pore sizes and then combine the performance values.
1981-09-01
corresponds to the same square footage that consumed the electrical energy. 3. The basic assumptions of multiple linear regression, as enumerated in ... 7. Data related to the sample of bases is assumed to be representative of bases in the population. Basic limitations on this research were ... References: Ratemaking--Overview, Rand Report R-5894, Santa Monica CA, May 1977; Chatterjee, Samprit, and Bertram Price, Regression Analysis by Example, New York: John ...
Detection of Bioaerosols Using Single Particle Thermal Emission Spectroscopy (First-year Report)
2012-02-01
cooled MCT detector with a noise equivalent power (NEP) of 7×10⁻¹³ W/Hz, yields a detection S/N > 13 (assuming a sufficiently cooled background). We ... dispersively resolved using a 190-mm Horiba spectrometer that houses a time-gated 32-element mercury cadmium telluride (MCT) linear array. In this report ... to 10.0 ms. Minimum integration (and readout) periods for the time-gated 32-element mercury cadmium telluride (MCT) linear array are 10 µs. Based ...
Elementary operators on self-adjoint operators
NASA Astrophysics Data System (ADS)
Molnar, Lajos; Semrl, Peter
2007-03-01
Let H be a Hilbert space and let 𝒜 and ℬ be standard *-operator algebras on H. Denote by 𝒜s and ℬs the sets of all self-adjoint operators in 𝒜 and ℬ, respectively. Assume that M : 𝒜s → ℬs and M* : ℬs → 𝒜s are surjective maps such that M(AM*(B)A) = M(A)BM(A) and M*(BM(A)B) = M*(B)AM*(B) for every pair A ∈ 𝒜s, B ∈ ℬs. Then there exist an invertible bounded linear or conjugate-linear operator T : H → H and a constant c ∈ {-1, 1} such that M(A) = cTAT* for all A ∈ 𝒜s, and M*(B) = cT*BT for all B ∈ ℬs.
Imaging Through Random Discrete-Scatterer Dispersive Media
2015-08-27
to that of a conventional, continuous, linear-frequency-modulated chirped signal [3]. Chirped train signals are a particular realization of a class of ... continuous chirp signals, characterized by linear frequency modulation [3], we assume the time instances t_n to be given by t_n = τ_g(1 − β_g n/(2N_g))n ... kernel D_N(z) [9] by sinc_N(z) = (N + 1)⁻¹ D_{N/2}(2πz/N). We use the elementary identity ...
A Note on the Disturbance Decoupling Problem for Retarded Systems.
1984-10-01
disturbance decoupling problem for linear control systems is to design a feedback control law in such a way that the disturbances do not influence ... and in [4] by Pandolfi, who analyses the situation in some detail. He concludes that for retarded systems one needs an unbounded feedback control law ... u(t) ∈ ℝᵖ is the control input, d(t) ∈ ℝʳ is some disturbance, and z(t) ∈ ℝᵏ is the output to be regulated. We assume that L is a bounded linear ...
Time, distance, and drawdown relationships in a pumped ground-water basin
Kunkel, Fred
1960-01-01
Several reasonable values are assumed for the coefficients of transmissibility and storage of lenticular alluvial deposits. These values, when substituted in the Theis (1935) nonequilibrium formula as modified by Wenzel (1942), give curves from which time, distance, and drawdown relationships are estimated.
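For reference, the Theis nonequilibrium formula is s = (Q / 4πT) W(u) with u = r²S/(4Tt), where W is the well function (the exponential integral E1). A minimal sketch with assumed coefficient values, not those of the report:

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1 = well function W(u)

def theis_drawdown(r, t, Q, T, S):
    """Theis (1935) nonequilibrium drawdown at distance r (m) and time
    t (s) for pumping rate Q (m^3/s), transmissibility T (m^2/s), and
    storage coefficient S (dimensionless)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative values only (the report assumes several such pairs of T and S):
print(theis_drawdown(r=100.0, t=86400.0, Q=0.05, T=1e-3, S=1e-4))
```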
NASA Astrophysics Data System (ADS)
Demekhov, A. G.
2017-03-01
By using numerical simulations we generalize certain relationships between the parameters of quasimonochromatic whistler-mode waves generated at the linear and nonlinear stages of the cyclotron instability in the backward-wave oscillator regime. One of these relationships is between the wave amplitude at the nonlinear stage and the linear growth rate of the cyclotron instability. It was obtained analytically by V.Yu.Trakhtengerts (1984) for a uniform medium under the assumption of constant frequency and amplitude of the generated wave. We show that a similar relationship also holds for the signals generated in a nonuniform magnetic field and having a discrete structure in the form of short wave packets (elements) with fast frequency drift inside each element. We also generalize the formula for the linear growth rate of absolute cyclotron instability in a nonuniform medium and analyze the relationship between the frequency drift rate in the discrete elements and the wave amplitude. These relationships are important for analyzing the links between the parameters of chorus emissions in the Earth's and planetary magnetospheres and the characteristics of the energetic charged particles generating these signals.
Gerster, Samuel; Namer, Barbara; Elam, Mikael
2017-01-01
Skin conductance responses (SCR) are increasingly analyzed with model‐based approaches that assume a linear and time‐invariant (LTI) mapping from sudomotor nerve (SN) activity to observed SCR. These LTI assumptions have previously been validated indirectly, by quantifying how much variance in SCR elicited by sensory stimulation is explained under an LTI model. This approach, however, collapses sources of variability in the nervous and effector organ systems. Here, we directly focus on the SN/SCR mapping by harnessing two invasive methods. In an intraneural recording experiment, we simultaneously track SN activity and SCR. This allows assessing the SN/SCR relationship but possibly suffers from interfering activity of non‐SN sympathetic fibers. In an intraneural stimulation experiment under regional anesthesia, such influences are removed. In this stimulation experiment, about 95% of SCR variance is explained under LTI assumptions when stimulation frequency is below 0.6 Hz. At higher frequencies, nonlinearities occur. In the intraneural recording experiment, explained SCR variance is lower, possibly indicating interference from non‐SN fibers, but higher than in our previous indirect tests. We conclude that LTI systems may not only be a useful approximation but in fact a rather accurate description of biophysical reality in the SN/SCR system, under conditions of low baseline activity and sporadic external stimuli. Intraneural stimulation under regional anesthesia is the most sensitive method to address this question.
MarsSedEx I: feasibility test for sediment settling experiments under Martian gravity
NASA Astrophysics Data System (ADS)
Kuhn, Nikolaus J.
2013-04-01
Gravity has a non-linear effect on the settling velocity of sediment particles in liquids and gases. However, Stokes' Law, the common way of estimating the terminal velocity of a particle moving in a gas or liquid, assumes a linear relationship between terminal velocity and gravity. For terrestrial applications this "error" is not relevant, but it may strongly influence the terminal velocity achieved by settling particles in the Martian atmosphere or water bodies. In principle, the effect of gravity on settling velocity can also be reproduced by reducing the difference in density between the particle and the gas or liquid. However, the use of analogues simulating the lower gravity of Mars on Earth is difficult because the properties and interactions of the liquids and materials differ from those of water and sediment, i.e. the viscosity of the liquid or the interaction between charged surfaces and liquid molecules. An alternative for measuring the actual settling velocities of particles under Martian gravity, on Earth, is offered by placing a settling tube on a reduced gravity flight and conducting settling tests within the 20 to 25 seconds of Martian gravity that can be simulated during such a flight. In this presentation we report on the feasibility of such a test based on an experiment conducted during a reduced gravity flight in November 2012.
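The linear dependence on gravity is explicit in Stokes' Law, v = (2/9)(ρp − ρf) g r² / μ. The sketch below evaluates it for Earth and Mars with assumed quartz-in-water values; under the linear assumption the predicted velocity ratio is exactly g_Earth/g_Mars, which is precisely what the settling experiments described above can test:

```python
# Stokes' Law terminal velocity: v = (2/9) * (rho_p - rho_f) * g * r^2 / mu.
# It is linear in g; the material values below are illustrative for fine
# quartz sand settling in water, not measurements from the experiment.
def stokes_velocity(r, g, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3):
    return 2.0 / 9.0 * (rho_p - rho_f) * g * r**2 / mu

r = 50e-6  # particle radius, m
v_earth = stokes_velocity(r, g=9.81)
v_mars = stokes_velocity(r, g=3.71)
print(f"Earth: {v_earth*1000:.2f} mm/s, Mars: {v_mars*1000:.2f} mm/s, "
      f"ratio = {v_earth / v_mars:.2f}")  # exactly 9.81/3.71 under Stokes' Law
```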
NASA Astrophysics Data System (ADS)
Tamayao, M. M.; Blackhurst, M. F.; Matthews, H. S.
2014-10-01
Recent sustainability research has focused on urban systems given their high share of environmental impacts and potential for centralized impact mitigation. Recent research emphasizes descriptive statistics from place-based case studies to argue for policy action. This limits the potential for general insights and decision support. Here, we implement generalized linear and multiple linear regression analyses to obtain more robust insights on the relationship between urbanization and greenhouse gas (GHG) emissions in the US. We used consistently derived county-level scope 1 and scope 2 GHG inventories for our response variable while predictor variables included dummy-coded variables for county geographic type (central, outlying, and nonmetropolitan), median household income, population density, and climate indices (heating degree days (HDD) and cooling degree days (CDD)). We find that there is not enough statistical evidence indicating per capita scope 1 and 2 emissions differ by geographic type, ceteris paribus. These results are robust for different assumed electricity emissions factors. We do find statistically significant differences in per capita emissions by sector for different county types, with transportation and residential emissions highest in nonmetropolitan (rural) counties, transportation emissions lowest in central counties, and commercial sector emissions highest in central counties. These results indicate the importance of regional land use and transportation dynamics when planning local emissions mitigation measures.
Word Order and Voice Influence the Timing of Verb Planning in German Sentence Production.
Sauppe, Sebastian
2017-01-01
Theories of incremental sentence production make different assumptions about when speakers encode information about described events and when verbs are selected, accordingly. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Based on growth curve analyses of fixations to agent and patient characters in the described pictures, and the influence of character humanness and the lack of an influence of the visual salience of characters on speakers' choice of active or passive voice, the current results suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.
Bercu, J P; Galloway, S M; Parris, P; Teasdale, A; Masuda-Herrera, M; Dobo, K; Heard, P; Kenyon, M; Nicolette, J; Vock, E; Ku, W; Harvey, J; White, A; Glowienke, S; Martin, E A; Custer, L; Jolly, R A; Thybaud, V
2018-04-01
This paper provides compound-specific toxicology limits for 20 widely used synthetic reagents and common by-products that are potential impurities in drug substances. In addition, a 15 μg/day class-specific limit was developed for monofunctional alkyl bromides, aligning this with the class-specific limit previously defined for monofunctional alkyl chlorides. Both the compound- and class-specific toxicology limits assume a lifetime chronic exposure for the general population (including sensitive subpopulations) by all routes of exposure for pharmaceuticals. Inhalation-specific toxicology limits were also derived for acrolein, formaldehyde, and methyl bromide because of their localized toxicity via that route. Mode of action was an important consideration for a compound-specific toxicology limit. Acceptable intake (AI) calculations for certain mutagenic carcinogens assumed a linear dose-response for tumor induction, and permissible daily exposure (PDE) determination assumed a non-linear dose-response. Several compounds evaluated have been previously incorrectly assumed to be mutagenic, or to be mutagenic carcinogens, but the evidence reported here for such compounds indicates a lack of mutagenicity, and a non-mutagenic mode of action for tumor induction. For non-mutagens with insufficient data to develop a toxicology limit, the ICH Q3A qualification thresholds are recommended. The compound- and class-specific toxicology limits described here may be adjusted for an individual drug substance based on treatment duration, dosing schedule, severity of the disease and therapeutic indication.
Mori, J.; Abercrombie, R.E.
1997-01-01
Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more smaller earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km.
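The depth dependence of the growth probability follows directly from the Gutenberg-Richter relation. The b values below are hypothetical, chosen only to show how a modest decrease in b with depth produces a factor of about 18 over the M2-to-M5.5 range:

```python
# Under Gutenberg-Richter, log10 N(>=M) = a - b*M, so the chance that an
# M2 initiation grows to at least M5.5 scales as 10**(-b * 3.5).
# These b values are hypothetical; the paper reports b decreasing with depth.
b_shallow, b_deep = 1.00, 0.64
p_ratio = 10 ** (-b_deep * 3.5) / 10 ** (-b_shallow * 3.5)
print(f"deep/shallow growth-probability ratio: {p_ratio:.1f}")  # ~18
```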
ERIC Educational Resources Information Center
Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte
2014-01-01
Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…
26 CFR 1.269-6 - Relationship of section 269 to section 382 before the Tax Reform Act of 1986.
Code of Federal Regulations, 2010 CFR
2010-04-01
... utilizing its net operating loss carryovers by changing its business to a profitable new business. Assume further that A makes no attempt to revitalize the business of L Corporation during the calendar year 1956... on a calendar year basis and has sustained heavy net operating losses for a number of years. Assume...
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
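A worked toy example of the method of least squares for simple linear regression; the data are invented for illustration:

```python
import numpy as np

# Method of least squares for y = b0 + b1*x:
# b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # toy predictor
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # toy outcome

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(f"y = {b0:.2f} + {b1:.2f} x")  # regression line
```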
Rocca, Jennifer D.; Hall, Edward K.; Lennon, Jay T.; Evans, Sarah E.; Waldrop, Mark P.; Cotner, James B.; Nemergut, Diana R.; Graham, Emily B.; Wallenstein, Matthew D.
2015-01-01
For any enzyme-catalyzed reaction to occur, the corresponding protein-encoding genes and transcripts are necessary prerequisites. Thus, a positive relationship between the abundance of gene or transcripts and corresponding process rates is often assumed. To test this assumption, we conducted a meta-analysis of the relationships between gene and/or transcript abundances and corresponding process rates. We identified 415 studies that quantified the abundance of genes or transcripts for enzymes involved in carbon or nitrogen cycling. However, in only 59 of these manuscripts did the authors report both gene or transcript abundance and rates of the appropriate process. We found that within studies there was a significant but weak positive relationship between gene abundance and the corresponding process. Correlations were not strengthened by accounting for habitat type, differences among genes or reaction products versus reactants, suggesting that other ecological and methodological factors may affect the strength of this relationship. Our findings highlight the need for fundamental research on the factors that control transcription, translation and enzyme function in natural systems to better link genomic and transcriptomic data to ecosystem processes.
Interpreting the sub-linear Kennicutt-Schmidt relationship: the case for diffuse molecular gas
NASA Astrophysics Data System (ADS)
Shetty, Rahul; Clark, Paul C.; Klessen, Ralf S.
2014-08-01
Recent statistical analysis of two extragalactic observational surveys strongly indicate a sub-linear Kennicutt-Schmidt (KS) relationship between the star formation rate (ΣSFR) and molecular gas surface density (Σmol). Here, we consider the consequences of these results in the context of common assumptions, as well as observational support for a linear relationship between ΣSFR and the surface density of dense gas. If the CO traced gas depletion time (τ_dep^CO) is constant, and if CO only traces star-forming giant molecular clouds (GMCs), then the physical properties of each GMC must vary, such as the volume densities or star formation rates. Another possibility is that the conversion between CO luminosity and Σmol, the XCO factor, differs from cloud-to-cloud. A more straightforward explanation is that CO permeates the hierarchical interstellar medium, including the filaments and lower density regions within which GMCs are embedded. A number of independent observational results support this description, with the diffuse gas comprising at least 30 per cent of the total molecular content. The CO bright diffuse gas can explain the sub-linear KS relationship, and consequently leads to an increasing τ_dep^CO with Σmol. If ΣSFR linearly correlates with the dense gas surface density, a sub-linear KS relationship indicates that the fraction of diffuse gas fdiff grows with Σmol. In galaxies where Σmol falls towards the outer disc, this description suggests that fdiff also decreases radially.
Inferring climate variability from skewed proxy records
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
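Approach (ii), applying a power transform before standard analysis, can be as simple as a Box-Cox fit. The sketch below uses a lognormal stand-in for a skewed runoff-like proxy; the distribution parameters are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
proxy = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # heavily skewed stand-in

print("skewness before:", stats.skew(proxy))
transformed, lam = stats.boxcox(proxy)  # power transform, approach (ii)
print("skewness after: ", stats.skew(transformed), "lambda:", lam)
```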
Dilations and the Equation of a Line
ERIC Educational Resources Information Center
Yopp, David A.
2016-01-01
Students engage in proportional reasoning when they use covariance and multiple comparisons. Without rich connections to proportional reasoning, students may develop inadequate understandings of linear relationships and the equations that model them. Teachers can improve students' understanding of linear relationships by focusing on realistic…
The relationships between internal and external training load models during basketball training.
Scanlan, Aaron T; Wen, Neal; Tucker, Patrick S; Dalbo, Vincent J
2014-09-01
The present investigation described and compared the internal and external training loads during basketball training. Eight semiprofessional male basketball players (mean ± SD, age: 26.3 ± 6.7 years; stature: 188.1 ± 6.2 cm; body mass: 92.0 ± 13.8 kg) were monitored across a 7-week period during the preparatory phase of the annual training plan. A total of 44 sessions were monitored. Player session ratings of perceived exertion (sRPE), heart rate, and accelerometer data were collected across each training session. Internal training load was determined using the sRPE, training impulse (TRIMP), and summated-heart-rate-zones (SHRZ) training load models. External training load was calculated using an established accelerometer algorithm. Pearson product-moment correlations with 95% confidence intervals (CIs) were used to determine the relationships between internal and external training load models. Significant moderate relationships were observed between external training load and the sRPE (r(42) = 0.49, 95% CI = 0.23-0.69, p < 0.001) and TRIMP models (r(42) = 0.38, 95% CI = 0.09-0.61, p = 0.011). A significant large correlation was evident between external training load and the SHRZ model (r(42) = 0.61, 95% CI = 0.38-0.77, p < 0.001). Although significant relationships were found between internal and external training load models, the magnitude of the correlations and low commonality suggest that internal training load models measure different constructs of the training process than the accelerometer training load model in basketball settings. Basketball coaching and conditioning professionals should not assume a linear dose-response between accelerometer and internal training load models during training and are recommended to combine internal and external approaches when monitoring training load in players.
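A sketch of the correlation analysis used here, Pearson r with a 95% CI via the Fisher z-transform, applied to invented session-load data; the numbers are placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

def pearson_ci(x, y, alpha=0.05):
    """Pearson r with a (1 - alpha) CI via the Fisher z-transform."""
    r, _ = stats.pearsonr(x, y)
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(len(x) - 3)
    zcrit = stats.norm.ppf(1 - alpha / 2)
    return r, (np.tanh(z - zcrit * se), np.tanh(z + zcrit * se))

rng = np.random.default_rng(0)                    # hypothetical load data
internal = rng.normal(300.0, 50.0, 44)            # e.g., sRPE load, 44 sessions
external = 0.6 * internal + rng.normal(0.0, 60.0, 44)
print(pearson_ci(internal, external))
```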
Genomic Model with Correlation Between Additive and Dominance Effects.
Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres
2018-05-09
Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has been often suggested that the magnitude of functional additive and dominance effects at the quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent as the reference allele orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominant deviations performed similarly for prediction.
Comparative Study of the Ride Quality of TRACV Suspension Alternatives
DOT National Transportation Integrated Search
1979-06-01
A linearized model of the pitch-heave dynamics of a Tracked Ram Air Cushion Vehicle is presented. This model is based on aerodynamic theory which has been verified by wind tunnel and towed model experiments. The vehicle is assumed to be equipped with...
Conditional Independence in Applied Probability.
ERIC Educational Resources Information Center
Pfeiffer, Paul E.
This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…
Third-Degree Price Discrimination Revisited
ERIC Educational Resources Information Center
Kwon, Youngsun
2006-01-01
The author derives the probability that price discrimination improves social welfare, using a simple model of third-degree price discrimination assuming two independent linear demands. The probability that price discrimination raises social welfare increases as the preferences or incomes of consumer groups become more heterogeneous. He derives the…
A research on Performance Efficiency of Rubber Metal Support Structures
NASA Astrophysics Data System (ADS)
Mkrtychev, Oleg V.; Bunov, Artem A.
2017-11-01
The paper scrutinizes the structural behavior of lead rubber bearings by a Chinese manufacturer subjected to a single-component seismic action. Several problems were solved using specialized software packages, which carried out direct integration of the equations of motion by the explicit method or applied the response spectrum method. Depending on the calculation method, the diagram of the bearing performance was assumed to be either an actual diagram approximated by an idealized non-linear diagram or an idealized linear diagram with a specific stiffness. The computational model was assumed to be a single-mass oscillator with a lumped mass. This effort facilitated the investigation of the patterns of horizontal displacement of the bearing top relative to the bottom caused by earthquakes modeled as accelerograms with different spectral compositions. The behavior of the support structure was benchmarked against similar supports by another manufacturer. The paper presents the outcomes of the research effort and draws conclusions about the efficiency of using bearings of this particular type and model.
Stress estimation in reservoirs using an integrated inverse method
NASA Astrophysics Data System (ADS)
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only stress fields that are statically admissible and match the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present.
Water impact analysis of space shuttle solid rocket motor by the finite element method
NASA Technical Reports Server (NTRS)
Buyukozturk, O.; Hibbitt, H. D.; Sorensen, E. P.
1974-01-01
Preliminary analysis showed that the doubly curved triangular shell elements were too stiff for these shell structures. The doubly curved quadrilateral shell elements were found to give much improved results. A total of six load cases were analyzed in this study. The load cases were either those resulting from a static test using reaction straps to simulate the drop conditions or under assumed hydrodynamic conditions resulting from a drop test. The latter hydrodynamic conditions were obtained through an empirical fit of available data. Results obtained from a linear analysis were found to be consistent with results obtained elsewhere with NASTRAN and BOSOR. The nonlinear analysis showed that the originally assumed loads would result in failure of the shell structures. The nonlinear analysis also showed that it was useful to apply internal pressure as a stabilizing influence on collapse. A final analysis with an updated estimate of load conditions resulted in linear behavior up to full load.
Efficient detection of a CW signal with a linear frequency drift
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.; Bailey, David H.
1989-01-01
An efficient method is presented for the detection of a continuous wave (CW) signal with a frequency drift that is linear in time. Signals of this type occur in transmissions between any two locations that are accelerating relative to one another, e.g., transmissions from the Voyager spacecraft. We assume that both the frequency and the drift are unknown. We also assume that the signal is weak compared to the Gaussian noise. The signal is partitioned into subsequences whose discrete Fourier transforms provide a sequence of instantaneous spectra at equal time intervals. These spectra are then accumulated with a shift that is proportional to time. When the shift is equal to the frequency drift, the signal to noise ratio increases and detection occurs. Here, we show how to compute these accumulations for many shifts in an efficient manner using a variety of Fast Fourier Transformations (FFT). Computing time is proportional to L log L where L is the length of the time series.
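A brute-force version of the shift-and-accumulate idea described above (the paper's contribution is doing this efficiently with FFTs in L log L time; the parameters and the integer-bin drift below are illustrative assumptions):

```python
import numpy as np

fs, nseg, nspec = 1024, 256, 32  # sample rate (Hz), FFT length, no. of spectra
t = np.arange(nseg * nspec) / fs
f0, drift = 100.0, 16.0          # unknown start frequency (Hz) and drift (Hz/s)
x = np.cos(2 * np.pi * (f0 + 0.5 * drift * t) * t)
x += np.random.default_rng(0).normal(0.0, 3.0, t.size)  # weak signal in noise

# Instantaneous spectra of consecutive subsequences
spectra = np.abs(np.fft.rfft(x.reshape(nspec, nseg), axis=1)) ** 2

# Accumulate with a bin shift proportional to time; the S/N peaks when the
# shift matches the drift (here 16 Hz/s = 1 bin of fs/nseg per segment).
best_shift, best_peak = 0, 0.0
for shift in range(-4, 5):
    acc = sum(np.roll(spectra[k], -shift * k) for k in range(nspec))
    if acc.max() > best_peak:
        best_shift, best_peak = shift, acc.max()

bin_hz, seg_s = fs / nseg, nseg / fs
print("estimated drift:", best_shift * bin_hz / seg_s, "Hz/s")
```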
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawson, M.; Yu, Y. H.; Nelessen, A.
2014-05-01
Wave energy converters (WECs) are commonly designed and analyzed using numerical models that combine multi-body dynamics with hydrodynamic models based on the Cummins Equation and linearized hydrodynamic coefficients. These modeling methods are attractive design tools because they are computationally inexpensive and do not require the use of high performance computing resources necessitated by high-fidelity methods, such as Navier Stokes computational fluid dynamics. Modeling hydrodynamics using linear coefficients assumes that the device undergoes small motions and that the wetted surface area of the devices is approximately constant. WEC devices, however, are typically designed to undergo large motions in order to maximize power extraction, calling into question the validity of assuming that linear hydrodynamic models accurately capture the relevant fluid-structure interactions. In this paper, we study how calculating buoyancy and Froude-Krylov forces from the instantaneous position of a WEC device (referred to as instantaneous buoyancy and Froude-Krylov forces from herein) changes WEC simulation results compared to simulations that use linear hydrodynamic coefficients. First, we describe the WEC-Sim tool used to perform simulations and how the ability to model instantaneous forces was incorporated into WEC-Sim. We then use a simplified one-body WEC device to validate the model and to demonstrate how accounting for these instantaneously calculated forces affects the accuracy of simulation results, such as device motions, hydrodynamic forces, and power generation.
Update to the conventional model for rotational deformation
NASA Astrophysics Data System (ADS)
Ries, J. C.; Desai, S.
2017-12-01
Rotational deformation (also called the "pole tide") is the deformation resulting from the centrifugal effect of polar motion on the solid earth and ocean, which manifests itself as variations in ocean heights, in the gravity field and in surface displacements. The model for rotational deformation assumes a primarily elastic response of the Earth to the centrifugal potential at the annual and Chandler periods and applies body tide Love numbers to the polar motion after removing the mean pole. The original model was conceived when the mean pole was moving (more or less) linearly, largely in response to glacial isostatic adjustment. In light of the significant variations in the mean pole due to present-day ice mass losses, an 'appropriately' filtered mean pole was adopted for the conventional model, so that the longer period variations in the mean pole were not included in the rotational deformation model. However, the elastic Love numbers should be applicable to longer period variations as well, and only the secular (i.e. linear) mean pole should be removed. A model for the linear mean pole is recommended based on a linear fit to the IERS C01 time series spanning 1900 to 2015: in milliarcsec, Xp = 55.0 + 1.677*dt and Yp = 320.5 + 3.460*dt, where dt = t - t0, t0 = 2000.0, and a year is assumed to be 365.25 days. The consequences of an updated model for rotational deformation for site motion and the gravity field are illustrated.
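The recommended secular model is simple enough to state as a small function; a minimal sketch, assuming decimal-year input and using exactly the coefficients quoted above:

```python
import numpy as np

def linear_mean_pole(t):
    """Linear mean-pole model fit to IERS C01 (1900-2015), per the abstract.

    t : decimal year(s). Returns (Xp, Yp) in milliarcseconds.
    """
    dt = np.asarray(t) - 2000.0          # years since epoch t0 = 2000.0
    xp = 55.0 + 1.677 * dt
    yp = 320.5 + 3.460 * dt
    return xp, yp

xp, yp = linear_mean_pole(2017.5)
print(f"mean pole at 2017.5: Xp = {xp:.1f} mas, Yp = {yp:.1f} mas")
```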
Busch, Michael; Wodrich, Matthew D.; Corminboeuf, Clémence
2015-01-01
Linear free energy scaling relationships and volcano plots are common tools used to identify potential heterogeneous catalysts for myriad applications. Despite the striking simplicity and predictive power of volcano plots, they remain unknown in homogeneous catalysis. Here, we construct volcano plots to analyze a prototypical reaction from homogeneous catalysis, the Suzuki cross-coupling of olefins. Volcano plots succeed both in discriminating amongst different catalysts and reproducing experimentally known trends, which serves as validation of the model for this proof-of-principle example. These findings indicate that the combination of linear scaling relationships and volcano plots could serve as a valuable methodology for identifying homogeneous catalysts possessing a desired activity through a priori computational screening. PMID:28757966
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
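The paper's method handles post-nonlinear mixtures; the sketch below illustrates only the linear stage and the higher-order-statistics criterion it builds on: gradient ascent on the kurtosis of a whitened projection, recovering one source from a two-channel linear mixture. The data, step size, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent non-Gaussian sources and a linear mixture
n = 20000
s = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Whiten: zero mean, identity covariance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ x

# Gradient ascent on |kurtosis| of the projection y = w.z
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(300):
    y = w @ z
    kurt = np.mean(y**4) - 3.0              # excess kurtosis (E[y^2] = 1)
    grad = 4.0 * (z * y**3).mean(axis=1)    # gradient of E[y^4] w.r.t. w
    w += 0.05 * np.sign(kurt) * grad        # ascend the magnitude of kurtosis
    w /= np.linalg.norm(w)                  # stay on the unit sphere

r0 = abs(np.corrcoef(w @ z, s[0])[0, 1])
r1 = abs(np.corrcoef(w @ z, s[1])[0, 1])
print(f"|corr| with sources: {r0:.3f}, {r1:.3f}  (one should be near 1)")
```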
Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure
NASA Technical Reports Server (NTRS)
Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.
1992-01-01
Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two-dimensional plane strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results with the classical Hertz solution for two contacting semicylinders. Agreement is excellent.
Non-linear analysis of stick/slip motion
NASA Astrophysics Data System (ADS)
Pratt, T. K.; Williams, R.
1981-02-01
The steady state relative motion of two masses with dry (Coulomb) friction contact is investigated. The bodies are assumed to have the same mass and stiffness and are subjected to harmonic excitation. By means of a combined analytical-numerical procedure, results are obtained for arbitrary values of Coulomb friction, excitation frequency, and natural frequencies of the bodies. For certain values of these parameters, multiple lockups per cycle are possible. In this respect, the problem investigated here is a natural extension of the one considered by Den Hartog, who in obtaining his closed form solution assumed a maximum of two lockups per cycle.
How Might the Thermosphere and Ionosphere React to an Extreme Space Weather Event?
NASA Astrophysics Data System (ADS)
Fuller-Rowell, T. J.; Fedrizzi, M.; Codrescu, M.; Maruyama, N.; Raeder, J.
2015-12-01
If a Carrington-type CME event like that of 1859 hit Earth, how might the thermosphere, ionosphere, and plasmasphere respond? To start with, the response would depend on how the magnetosphere reacts and channels the energy into the upper atmosphere. For now we can assume the magnetospheric convection and auroral precipitation inputs would look similar to the 2003 Halloween storm but stronger and more expanded to mid-latitude, much like what the Weimer empirical model predicts if the solar wind Bz and velocity were -60 nT and 1500 km/s, respectively. For a Halloween-level geomagnetic storm event, the sequence of physical processes in the thermosphere and ionosphere is thought to be well understood. The physics-based coupled models, however, have been designed and somewhat tuned to simulate the response to events of the levels observed in the last two solar cycles. For an extreme solar storm, it is unclear whether the response would be a natural linear extrapolation or whether non-linear processes would begin to dominate. A numerical simulation has been performed with a coupled thermosphere-ionosphere model to quantify the likely response to an extreme space weather event. The simulation predicts the neutral atmosphere would experience horizontal winds of 1500 m/s, vertical winds exceeding 150 m/s, and the "top" of the thermosphere well above 1000 km. Predicting the ionosphere response is somewhat more challenging because there is significant uncertainty in quantifying some of the other driver-response relationships, such as the magnitude and shielding time-scale of the penetration electric field, the possible feedback to the magnetosphere, and the amount of nitric oxide production. Within the limits of uncertainty of the drivers, the magnitude of the response can be quantified, and both linear and non-linear responses are predicted.
Ferrari, Myriam; Pengo, Vittorio; Barolo, Massimiliano; Bezzo, Fabrizio; Padrini, Roberto
2017-06-01
The purpose of this study is to develop a new pharmacokinetic-pharmacodynamic (PK-PD) model to characterise the contribution of (S)- and (R)-warfarin to the anticoagulant effect in patients treated with rac-warfarin. Fifty-seven patients starting warfarin (W) therapy were studied, from the first dose and during chronic treatment at INR stabilization. Plasma concentrations of (S)- and (R)-W and INRs were measured 12, 36 and 60 h after the first dose and at steady state 12-14 h after dosing. Patients were also genotyped for the G>A VKORC1 polymorphism. The PK-PD model assumed a linear relationship between W enantiomer concentration and INR and included a scaling factor k to account for the different potency of (R)-W. Two parallel compartment chains with different transit times (MTT1 and MTT2) were used to model the delay in the W effect. PD parameters were estimated with the maximum likelihood approach. The model satisfactorily described the mean time-course of INR, both after the initial dose and during long-term treatment. (R)-W contributed to the rac-W anticoagulant effect with a potency of about 27% that of (S)-W. This effect was independent of VKORC1 genotype. As expected, the slope of the PK/PD linear correlation increased stepwise from GG to GA and from GA to AA VKORC1 genotype (0.71, 0.90 and 1.49, respectively). Our linear PK-PD model can quantify the partial pharmacodynamic activity of (R)-W in patients contemporaneously exposed to therapeutic (S)-W plasma levels. This concept may be useful in improving the performance of future algorithms aimed at identifying the most appropriate W maintenance dose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Damien C., E-mail: damien.weber@unige.ch; Johanson, Safora; Peguret, Nicolas
2011-10-01
Purpose: To assess the excess relative risk (ERR) of radiation-induced cancers (RIC) in female patients with Hodgkin lymphoma (HL) treated with conformal (3DCRT), intensity modulated (IMRT), or volumetric modulated arc (RA) radiation therapy. Methods and Materials: Plans for 10 early-stage HL female patients were computed for 3DCRT, IMRT, and RA with involved-field RT (IFRT) and involved-node RT (INRT) radiation fields. Organ-at-risk dose-volume histograms were computed and inter-compared for IFRT vs. INRT and 3DCRT vs. IMRT/RA, respectively. The ERR for cancer induction in breasts, lungs, and thyroid was estimated using both linear and nonlinear models. Results: The mean estimated ERRs for breast, lung, and thyroid were significantly lower (p < 0.01) with INRT than with IFRT planning, regardless of the radiation delivery technique used, assuming a linear dose-risk relationship. We found that using the nonlinear model, the mean ERR values were significantly (p < 0.01) increased with IMRT or RA compared to those with 3DCRT planning for the breast, lung, and thyroid, using an IFRT paradigm. After INRT planning, IMRT or RA increased the risk of RIC for lung and thyroid only. Conclusions: In this comparative planning study, using a nonlinear dose-risk model, IMRT or RA increased the estimated risk of RIC for breast, lung, and thyroid for HL female patients. This study also suggests that INRT planning, compared to IFRT planning, may reduce the ERR of RIC when risk is predicted using a linear model. Observing the opposite effect with a nonlinear model, however, questions the validity of these biologically parameterized models.
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects, where both are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jian; Guo, Pan
Using molecular dynamics simulations, we show a fine linear relationship between surface energies and microscopic Lennard-Jones parameters of super-hydrophilic surfaces. The linear slope of the super-hydrophilic surfaces is consistent with the linear slope of the super-hydrophobic, hydrophobic, and hydrophilic surfaces where stable water droplets can stand, indicating that there is a universal linear behavior of the surface energies with the water-surface van der Waals interaction that extends from the super-hydrophobic to super-hydrophilic surfaces. Moreover, we find that the linear relationship exists for various substrate types, and the linear slopes of these different types of substrates are dependent on the surface atom density, i.e., higher surface atom densities correspond to larger linear slopes. These results enrich our understanding of water behavior on solid surfaces, especially the water wetting behaviors on uncharged super-hydrophilic metal surfaces.
Application of balancing methods in modeling the penicillin fermentation.
Heijnen, J J; Roels, J A; Stouthamer, A H
1979-12-01
This paper shows the application of elementary balancing methods in combination with simple kinetic equations in the formulation of an unstructured model for the fed-batch process for the production of penicillin. The rate of substrate uptake is modeled with a Monod-type relationship. The specific penicillin production rate is assumed to be a function of growth rate. Hydrolysis of penicillin to penicilloic acid is assumed to be first order in penicillin. In simulations with the present model it is shown that the model, although assuming a strict relationship between specific growth rate and penicillin productivity, allows for the commonly observed lag phase in the penicillin concentration curve and the apparent separation between growth and production phase (idiophase-trophophase concept). Furthermore it is shown that the feed rate profile during fermentation is of vital importance in the realization of a high production rate throughout the duration of the fermentation. It is emphasized that the method of modeling presented may also prove rewarding for an analysis of fermentation processes other than the penicillin fermentation.
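A minimal sketch of an unstructured fed-batch model in this spirit: Monod-type substrate uptake, a specific production rate that depends on growth rate (here taken to decrease with it, so production is favored in the idiophase), and first-order hydrolysis of penicillin. All parameter values and the particular form of qp(mu) are invented for illustration, not the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented illustrative parameters, not the paper's calibrated values
mu_max, Ks = 0.11, 0.15      # Monod growth: 1/h, g/L
Yxs = 0.47                   # biomass yield on substrate, g/g
qp_max, Kp = 0.004, 0.01     # specific production, favored at low growth
kh = 0.01                    # 1/h, first-order penicillin hydrolysis
Sf, F = 400.0, 0.03          # feed concentration (g/L) and feed rate (L/h)

def rhs(t, y):
    X, S, P, V = y                       # biomass, substrate, product, volume
    mu = mu_max * S / (Ks + S)           # Monod-type substrate-limited growth
    qp = qp_max * Kp / (Kp + mu)         # production rises as growth slows
    D = F / V                            # dilution due to feeding
    return [mu * X - D * X,
            D * (Sf - S) - mu * X / Yxs,
            qp * X - kh * P - D * P,
            F]

sol = solve_ivp(rhs, (0.0, 140.0), [0.1, 40.0, 0.0, 7.0], max_step=1.0)
X, S, P, V = sol.y
print(f"final biomass {X[-1]:.1f} g/L, penicillin {P[-1]:.2f} g/L")
```

Even with a strict qp(mu) coupling, the trajectory shows the familiar lag in penicillin followed by an apparent production phase once the feed limits growth, which is the point the abstract makes about the trophophase-idiophase pattern.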
Correlations between the disintegration of melt and the measured impulses in steam explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Froehlich, G.; Linca, A.; Schindler, M.
To find out correlations in steam explosions (melt-water interactions) between the measured impulses and the disintegration of the melt, experiments were performed in three configurations, i.e., stratified, entrapment and jet experiments. Linear correlations were detected between the impulse and the total surface of the fragments. Theoretical considerations point out that a linear correlation assumes superheating of a water layer of constant thickness around the fragments during the fragmentation process to a constant temperature (here the homogeneous nucleation temperature of water was assumed) and a constant expansion velocity of the steam in the main expansion time. The correlation constant does not depend on melt temperature and trigger pressure, but it does depend on the configuration of the experiment or on the scenario of an accident. Further research is required concerning the correlation constant. For analysing steam explosion accidents the explosivity is introduced. The explosivity is a mass-specific impulse and is linearly correlated with the degree of fragmentation. Knowing the degree of fragmentation and the proper correlation constant, the explosivity can be calculated; combined with the total mass of fragments, this yields the impulse, which can be used to estimate the maximum force.
Statistical properties of the radiation from SASE FEL operating in the linear regime
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-02-01
The paper presents a comprehensive analysis of statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free electron laser operating in the linear mode. The investigation has been performed in a one-dimensional approximation, assuming the electron pulse length to be much larger than the coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied: field correlations, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the photoelectric counting statistics of SASE FEL radiation. It is shown that the radiation from a SASE FEL operating in the linear regime possesses all the features of completely chaotic polarized radiation.
The nonlinear effect of resistive inhomogeneities on van der Pauw measurements
NASA Astrophysics Data System (ADS)
Koon, Daniel W.
2005-03-01
The resistive weighting function [D. W. Koon and C. J. Knickerbocker, Rev. Sci. Instrum. 63, 207 (1992)] quantifies the effect of small local inhomogeneities on van der Pauw resistivity measurements, but assumes such effects to be linear. This talk will describe deviations from linearity for a square van der Pauw geometry, modeled using a 5 x 5 grid network of discrete resistors and introducing both positive and negative perturbations to local resistors, covering nearly two orders of magnitude in -δρ/ρ or -δσ/σ. While there is a relatively modest quadratic nonlinearity for inhomogeneities of decreasing conductivity, the nonlinear term for inhomogeneities of decreasing resistivity is approximately cubic and can exceed the linear term.
Linear instability of plane Couette and Poiseuille flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chefranov, S. G., E-mail: schefranov@mail.ru; Chefranov, A. G., E-mail: Alexander.chefranov@emu.edu.tr
2016-05-15
It is shown that linear instability of plane Couette flow can take place even at finite Reynolds numbers Re > Re_th ≈ 139, which agrees with the experimental value of Re_th ≈ 150 ± 5 [16, 17]. This new result of the linear theory of hydrodynamic stability is obtained by abandoning the traditional assumption of the longitudinal periodicity of disturbances in the flow direction. It is established that previous notions about linear stability of this flow at arbitrarily large Reynolds numbers relied directly upon the assumed separation of spatial variables of the field of disturbances and their longitudinal periodicity in the linear theory. By also abandoning these assumptions for plane Poiseuille flow, a new threshold Reynolds number Re_th ≈ 1035 is obtained, which agrees to within 4% with experiment, in contrast to the 500% discrepancy for the previous estimate of Re_th ≈ 5772 obtained in the framework of the linear theory under the assumption of the "normal" shape of disturbances [2].
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
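The final inversion step rests on a Neumann series for a linear operator close to the identity. A generic sketch of that idea (a toy matrix standing in for the paper's SPECT operator, which is an assumption for illustration only):

```python
import numpy as np

def neumann_solve(A, b, n_terms=60):
    """Solve A x = b via the Neumann series x = sum_k (I - A)^k b,
    which converges when the spectral radius of (I - A) is below 1."""
    x = np.zeros_like(b)
    r = b.copy()
    for _ in range(n_terms):
        x += r
        r -= A @ r              # r_{k+1} = (I - A) r_k
    return x

A = np.array([[1.0, 0.2], [0.1, 0.9]])   # a small perturbation of the identity
b = np.array([1.0, 2.0])
print(neumann_solve(A, b), "vs", np.linalg.solve(A, b))
```

The small-attenuation hypothesis in the abstract plays the same role as the "close to the identity" condition here: it keeps the series convergent.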
The linear - non-linear frontier for the Goldstone Higgs
Gavela, M. B.; Kanshin, K.; Machado, P. A. N.; ...
2016-12-01
The minimal SO(5)/SO(4) sigma model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone boson ancestry. Varying the σ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators.
EFFECT OF PH AND CONCENTRATION ON THE TRANSPORT OF NAPHTHALENE IN SATURATED AQUIFER MEDIA
Sorption is one of the primary mechanisms for retarding the movement of organic contaminants in groundwater. Sorption of hydrophobic compounds such as toluene, naphthalene, and DDT is generally assumed to be linearly proportional to solution phase concentration. In the present re...
Exploring Duopoly Markets with Conjectural Variations
ERIC Educational Resources Information Center
Julien, Ludovic A.; Musy, Olivier; Saïdi, Aurélien W.
2014-01-01
In this article, the authors investigate competitive firm behaviors in a two-firm environment assuming linear cost and demand functions. By introducing conjectural variations, they capture the different market structures as specific configurations of a more general model. Conjectural variations are based on the assumption that each firm believes…
Properties of AT Quartz Resonators on Wedgy Plates,
assuming a small linear thickness variation (wedginess) across the plate. The model predicts that the standing waves corresponding to the different an... wedginess that will lower order anharmonics. The observed consequence of this behavior is that the motional capacitance of the lowest mode (the desired
Career Development of Diverse Populations. ERIC Digest.
ERIC Educational Resources Information Center
Kerka, Sandra
Career development theories and approaches have been criticized for lack of applicability to diverse populations. Traditional career development theories and models assume that: everyone has a free choice among careers; career development is a linear, progressive, rational process; and individualism, autonomy and centrality of work are universal…
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…
Optimal Control Strategies for Constrained Relative Orbits
2007-09-01
the chief. The work assumes the Clohessy-Wiltshire closeness assumption between the deputy and chief is valid; however, elliptical chief orbits are... Appendix G gives a closed-form solution of the linear Clohessy-Wiltshire equations.
The underlying philosophy of Unmix is to let the data speak for itself. Unmix seeks to solve the general mixture problem where the data are assumed to be a linear combination of an unknown number of sources of unknown composition, which contribute an unknown amount to each sample...
Lay obligations in professional relations.
Benjamin, M
1985-02-01
Little has been written recently about the obligations of lay people in professional relationships. Yet the Code of Medical Ethics adopted by the American Medical Association in 1847 included an extensive statement on 'Obligations of patients to their physicians'. After critically examining the philosophical foundations of this statement, I provide an alternative account of lay obligations in professional relationships. Based on a hypothetical social contract and included in a full specification of professional as well as lay obligations, this account requires lay people to honor commitments and disclose relevant information. Ethically, the account assumes that all parties in lay-professional relationships should be given equal consideration and respect in determining rights and obligations. Factually, it assumes that the treatment of many illnesses and injuries requires collaboration and cooperation among lay persons and health professionals, that medical resources and personnel are limited, and that medicine, nursing, and related health professions are, in MacIntyre's sense, practices.
Highways and Population Change
ERIC Educational Resources Information Center
Voss, Paul R.; Chi, Guangqing
2006-01-01
In this paper we return to an issue often discussed in the literature regarding the relationship between highway expansion and population change. Typically it simply is assumed that this relationship is well established and understood. We argue, following a thorough review of the relevant literature, that the notion that highway expansion leads to…
Sibling Kinnections: A Clinical Visitation Program
ERIC Educational Resources Information Center
Pavao, Joyce Maguire; St. John, Melissa; Cannole, Rebecca Ford; Fischer, Tara; Maluccio, Anthony; Peining, Suzanne
2007-01-01
The growing literature on sibling relationships throughout their lifespans is of great importance to those working in the child welfare system, and in adoption services in particular. Sibling bonds are important to all of us, but they are particularly vital to children from disorganized or dysfunctional families. These relationships assume even…
Power-Solidarity Relationship of Teachers with Their Future Colleagues
ERIC Educational Resources Information Center
Acikalin, Isil
2007-01-01
Classroom talk is an example of institutional discourse, based on asymmetrical distribution of communicative rights and obligations between teachers and students. Teachers hold power and solidarity relationships with their students. It has been assumed that, in general, women are more concerned with solidarity while men are more interested in…
Evaluation of force-velocity and power-velocity relationship of arm muscles.
Sreckovic, Sreten; Cuk, Ivan; Djuric, Sasa; Nedeljkovic, Aleksandar; Mirkov, Dragan; Jaric, Slobodan
2015-08-01
A number of recent studies have revealed an approximately linear force-velocity (F-V) and, consequently, a parabolic power-velocity (P-V) relationship of multi-joint tasks. However, the measurement characteristics of their parameters have been neglected, particularly those regarding arm muscles, which could be a problem for using the linear F-V model in both research and routine testing. Therefore, the aims of the present study were to evaluate the strength, shape, reliability, and concurrent validity of the F-V relationship of arm muscles. Twelve healthy participants performed maximum bench press throws against loads ranging from 20 to 70 % of their maximum strength, and linear regression model was applied on the obtained range of F and V data. One-repetition maximum bench press and medicine ball throw tests were also conducted. The observed individual F-V relationships were exceptionally strong (r = 0.96-0.99; all P < 0.05) and fairly linear, although it remains unresolved whether a polynomial fit could provide even stronger relationships. The reliability of parameters obtained from the linear F-V regressions proved to be mainly high (ICC > 0.80), while their concurrent validity regarding directly measured F, P, and V ranged from high (for maximum F) to medium-to-low (for maximum P and V). The findings add to the evidence that the linear F-V and, consequently, parabolic P-V models could be used to study the mechanical properties of muscular systems, as well as to design a relatively simple, reliable, and ecologically valid routine test of the muscle ability of force, power, and velocity production.
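A minimal sketch of the linear F-V fit and the parabolic P-V curve it implies, using invented load-velocity numbers rather than the study's measurements:

```python
import numpy as np

# Invented bench-press-throw data: velocity (m/s) falls as force (N) rises
V = np.array([2.4, 2.0, 1.6, 1.3, 1.0, 0.7])
F = np.array([180.0, 260.0, 340.0, 410.0, 480.0, 560.0])

slope, F0 = np.polyfit(V, F, 1)      # linear model F = F0 - a*V
a = -slope                           # slope is negative, so a > 0
V0 = F0 / a                          # velocity intercept (F = 0)
Pmax = F0 * V0 / 4.0                 # peak of P(V) = V * (F0 - a*V)
print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, Pmax = {Pmax:.0f} W")
```

Since P(V) = V(F0 - aV) peaks at V0/2 with Pmax = F0*V0/4, a reliable linear F-V fit delivers the power parameters for free, which is what makes the linear model attractive for routine testing.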
Scaling effect of fraction of vegetation cover retrieved by algorithms based on linear mixture model
NASA Astrophysics Data System (ADS)
Obata, Kenta; Miura, Munenori; Yoshioka, Hiroki
2010-08-01
Differences in spatial resolution among sensors have been a source of error in satellite data products, known as the scaling effect. This study investigates the mechanism of the scaling effect on the fraction of vegetation cover (FVC) retrieved by a linear mixture model which employs NDVI as one of its constraints. The scaling effect is induced by differences in texture and by differences between the true endmember spectra and the endmember spectra assumed during retrievals. The mechanism of the scaling effect was analyzed by focusing on the monotonic behavior of spatially averaged FVC as a function of spatial resolution. The number of endmembers is limited to two so that the investigation can proceed analytically. Although the spatially averaged NDVI varies monotonically with spatial resolution, the corresponding FVC values do not always vary monotonically. The conditions under which the averaged FVC varies monotonically for a given sequence of spatial resolutions were derived analytically. The increasing or decreasing trend of the monotonic behavior can be predicted from the true and assumed endmember spectra of the vegetation and non-vegetation classes, regardless of the distribution of the vegetation class within a fixed area. The results imply that the scaling effect on FVC is more complicated than that on NDVI, since, unlike NDVI, FVC becomes non-monotonic under certain conditions determined by the true and assumed endmember spectra.
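A two-endmember sketch of the retrieval and the scaling effect it produces (endmember reflectances invented): reflectances mix linearly with FVC, but NDVI is a nonlinear ratio, so retrieving before averaging (fine resolution) and averaging before retrieving (coarse resolution) disagree.

```python
import numpy as np

red_v, nir_v = 0.05, 0.45      # vegetation endmember reflectance (invented)
red_s, nir_s = 0.20, 0.25      # soil endmember reflectance (invented)

def ndvi(red, nir):
    return (nir - red) / (nir + red)

rng = np.random.default_rng(2)
f = rng.uniform(0, 1, size=10000)          # true fine-scale FVC
red = f * red_v + (1 - f) * red_s          # linear mixture of reflectances
nir = f * nir_v + (1 - f) * nir_s

nv, ns = ndvi(red_v, nir_v), ndvi(red_s, nir_s)
fvc_fine = (ndvi(red, nir) - ns) / (nv - ns)                   # retrieve, then average
fvc_coarse = (ndvi(red.mean(), nir.mean()) - ns) / (nv - ns)   # average, then retrieve
print(f"true mean FVC {f.mean():.3f}, fine {fvc_fine.mean():.3f}, "
      f"coarse {fvc_coarse:.3f}")
```

The gap between the fine and coarse results is the resolution dependence the abstract analyzes; it vanishes only when the mixing is linear in the retrieved quantity itself.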
The first ANDES elements: 9-DOF plate bending triangles
NASA Technical Reports Server (NTRS)
Militello, Carmelo; Felippa, Carlos A.
1991-01-01
New elements are derived to validate and assess the assumed natural deviatoric strain (ANDES) formulation. This is a brand new variant of the assumed natural strain (ANS) formulation of finite elements, which has recently attracted attention as an effective method for constructing high-performance elements for linear and nonlinear analysis. The ANDES formulation is based on an extended parametrized variational principle developed in recent publications. The key concept is that only the deviatoric part of the strains is assumed over the element, whereas the mean strain part is discarded in favor of a constant stress assumption. Unlike conventional ANS elements, ANDES elements satisfy the individual element test (a stringent form of the patch test) a priori while retaining the favorable distortion-insensitivity properties of ANS elements. The first application of this formulation is the development of several Kirchhoff plate bending triangular elements with the standard nine degrees of freedom. Linear curvature variations are sampled along the three sides with the corners as gage reading points. These sample values are interpolated over the triangle using three schemes. Two schemes merge back to conventional ANS elements, one being identical to the Discrete Kirchhoff Triangle (DKT), whereas the third one produces two new ANDES elements. Numerical experiments indicate that one of the ANDES elements is relatively insensitive to distortion compared to previously derived high-performance plate-bending elements, while retaining accuracy for nondistorted elements.
Free torsional vibrations of tapered cantilever I-beams
NASA Astrophysics Data System (ADS)
Rao, C. Kameswara; Mirza, S.
1988-08-01
Torsional vibration characteristics of linearly tapered cantilever I-beams have been studied by using the Galerkin finite element method. A third degree polynomial is assumed for the angle of twist. The analysis presented is valid for long beams and includes the effect of warping. The individual as well as combined effects of linear tapers in the width of the flanges and the depth of the web on the torsional vibration of cantilever I-beams are investigated. Numerical results generated for various values of taper ratios are presented in graphical form.
A linear model of population dynamics
NASA Astrophysics Data System (ADS)
Lushnikov, A. A.; Kagan, A. I.
2016-08-01
The Malthus process of population growth is reformulated in terms of the probability w(n,t) to find exactly n individuals at time t, assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed either in terms of Laguerre polynomials or in terms of a modified Bessel function. The latter expression allows for considerable simplification of the asymptotic analysis of w(n,t).
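A quick way to see the deviation from the Poisson distribution is to simulate the linear birth-death process directly (a Gillespie-type sketch with invented rates) and compare the variance with the mean:

```python
import numpy as np

rng = np.random.default_rng(3)

def gillespie_birth_death(n0, lam, mu, t_end):
    """One trajectory of a linear birth-death process (rates lam*n, mu*n)."""
    n, t = n0, 0.0
    while n > 0:                        # n = 0 is absorbing
        t += rng.exponential(1.0 / ((lam + mu) * n))
        if t >= t_end:
            break
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n

samples = np.array([gillespie_birth_death(20, 0.3, 0.2, 5.0)
                    for _ in range(5000)])
print(f"mean {samples.mean():.1f}  var {samples.var():.1f}  "
      f"(a Poisson law would give var = mean)")
```

The empirical variance comes out several times larger than the mean, matching the over-dispersion that the exact solution expresses through the Bessel-function form.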
Dynamics of thin-shell wormholes with different cosmological models
NASA Astrophysics Data System (ADS)
Sharif, Muhammad; Mumtaz, Saadia
This work investigates the stability of thin-shell wormholes in Einstein-Hoffmann-Born-Infeld electrodynamics. We also study the attractive and repulsive characteristics of these configurations. A general equation of state is considered, in the form of a linear perturbation, to explore the stability of the respective wormhole solutions. We assume Chaplygin, linear and logarithmic gas models to study exotic matter at the thin shell and evaluate stability regions for different values of the involved parameters. It is concluded that the Hoffmann-Born-Infeld parameter and electric charge enhance the stability regions.
Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B
2015-09-18
There has been a recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty, or width, to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves.
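A toy illustration of the under-reporting effect (all numbers invented, and a plain logistic fit standing in for the paper's Bayesian machinery): relabeling half of the true concussions as non-injuries shifts the naively fitted risk curve to the right, so correcting for under-reporting lowers the acceleration associated with a given risk, consistent with the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Rotational acceleration in krad/s^2 (invented scale) and a "true" risk curve
x = rng.uniform(0.5, 8.0, size=5000)
b0, b1 = -6.0, 0.8
concussed = rng.random(x.size) < 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

# Under-reporting: half of the true concussions are recorded as no-injury
reported = concussed & (rng.random(x.size) < 0.5)

for label, y in [("fully reported", concussed), ("50% under-reported", reported)]:
    m = LogisticRegression(C=1e6).fit(x[:, None], y)
    x50 = -m.intercept_[0] / m.coef_[0, 0]   # acceleration at 50% risk
    print(f"{label}: 50%-risk acceleration ~ {x50:.1f} krad/s^2")
```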
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
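A toy simulation of the shared-random-parameter idea (all distributions and coefficients invented): one latent subject effect drives both the binary primary response and the count of missed visits, so missingness is informative about the outcome.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 2000
b = rng.standard_normal(n)                     # shared random effect
p_resp = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * b)))  # GLM for primary response
y = rng.random(n) < p_resp
missed = rng.poisson(np.exp(0.2 - 0.8 * b))    # more misses when b is low

# Subjects who miss more visits also respond less often:
for k in (0, 1, 2):
    sel = missed == k
    print(f"missed={k}: n={sel.sum():4d}, mean response={y[sel].mean():.2f}")
```

The downward trend in mean response across missed-visit counts is exactly the dependence that conditioning on the missingness data is meant to absorb in the approximate model.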
Active distribution network planning considering linearized system loss
NASA Astrophysics Data System (ADS)
Li, Xiao; Wang, Mingqiang; Xu, Hao
2018-02-01
In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of DGs and the topology of the network are fixed. The proposed model optimizes the capacities of DGs and the optimal distribution line capacities simultaneously through a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, network loss is analyzed explicitly: for simplicity, it is approximated as a quadratic function of the voltage phase-angle difference and then piecewise linearized. A piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness, the proposed distribution network planning model with this linearization technique is tested on the IEEE 33-bus distribution network system.
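A minimal sketch of the piecewise linearization step (loss coefficient and angle range invented): approximate the quadratic loss term by chords between breakpoints and watch the worst-case error fall as segments are added.

```python
import numpy as np

g, d_max = 5.0, 0.3            # invented loss coefficient and angle range

def pw_loss(breaks):
    """Chordal piecewise-linear approximation of g*d^2 on [0, d_max]."""
    return lambda d: np.interp(np.abs(d), breaks, g * breaks**2)

d = np.linspace(-d_max, d_max, 2001)
exact = g * d**2
for K in (4, 8, 16):
    brk = np.linspace(0.0, d_max, K + 1)
    err = np.max(pw_loss(brk)(d) - exact)   # chords overestimate a convex fn
    print(f"{K:2d} segments: max approximation error = {err:.5f}")
```

For a pure quadratic the curvature is constant, so equal segments minimize the worst-case chord error; segment lengths that differ, as proposed in the paper, pay off when accuracy matters more near the expected operating point than at the extremes.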
Elastic robot control - Nonlinear inversion and linear stabilization
NASA Technical Reports Server (NTRS)
Singh, S. N.; Schy, A. A.
1986-01-01
An approach to the control of elastic robot systems for space applications using inversion, servocompensation, and feedback stabilization is presented. For simplicity, a robot arm (PUMA type) with three rotational joints is considered. The third link is assumed to be elastic. Using an inversion algorithm, a nonlinear decoupling control law u(d) is derived such that in the closed-loop system independent control of joint angles by the three joint torquers is accomplished. For the stabilization of elastic oscillations, a linear feedback torquer control law u(s) is obtained applying linear quadratic optimization to the linearized arm model augmented with a servocompensator about the terminal state. Simulation results show that in spite of uncertainties in the payload and vehicle angular velocity, good joint angle control and damping of elastic oscillations are obtained with the torquer control law u = u(d) + u(s).
`Un-Darkening' the Cosmos: New laws of physics for an expanding universe
NASA Astrophysics Data System (ADS)
George, William
2017-11-01
Dark matter is believed to exist because Newton's laws are inconsistent with the visible matter in galaxies, and dark energy is invoked to explain the expansion of the universe. Earlier work (also available from www.turbulence-online.com) suggested that the equations themselves might be in error because they implicitly assume that time is measured in linear increments. This presentation couples the possible non-linearity of time with an expanding universe. Maxwell's equations for an expanding universe with constant speed of light are shown to be invariant only if time itself is non-linear. Both linear and exponential expansion rates are considered. A linearly expanding universe corresponds to logarithmic time, while exponential expansion corresponds to exponentially varying time. Revised Newton's laws using either lead to different definitions of mass and kinetic energy, both of which appear time-dependent if expressed in linear time, and which offer the possibility of explaining the astronomical observations without either dark matter or dark energy. We would never have noticed the differences on earth, since the leading term in both expansions is linear in δt/t0, where t0 is the current age.
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
Strategic by Design: Iterative Approaches to Educational Planning
ERIC Educational Resources Information Center
Chance, Shannon
2010-01-01
Linear planning and decision-making models assume a level of predictability that is uncommon today. Such models inadequately address the complex variables found in higher education. When academic organizations adopt paired-down business strategies, they restrict their own vision. They fail to harness emerging opportunities or learn from their own…
Spacecraft Debris Avoidance Using Positively Invariant Constraint Admissible Sets (Postprint)
2013-08-14
linear time-invariant Clohessy-Wiltshire-Hill (CWH) equations (see Ref. 13). A nominal circular orbit is assumed in this work.
Predictor Combination in Binary Decision-Making Situations
ERIC Educational Resources Information Center
McGrath, Robert E.
2008-01-01
Professional psychologists are often confronted with the task of making binary decisions about individuals, such as predictions about future behavior or employee selection. Test users familiar with linear models and Bayes's theorem are likely to assume that the accuracy of decisions is consistently improved by combination of outcomes across valid…
USDA-ARS?s Scientific Manuscript database
Abstract: Despite the fact that permafrost soils contain up to half of the carbon (C) in terrestrial pools, we have a poor understanding of the controls on decomposition in thawed permafrost. Global climate models assume that decomposition increases linearly with temperature, yet decomposition in th...
Threshold Hypothesis: Fact or Artifact?
ERIC Educational Resources Information Center
Karwowski, Maciej; Gralewski, Jacek
2013-01-01
The threshold hypothesis (TH) assumes the existence of complex relations between creative abilities and intelligence: linear associations below 120 points of IQ and weaker or lack of associations above the threshold. However, diverse results have been obtained over the last six decades--some confirmed the hypothesis and some rejected it. In this…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sawant, Amit
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
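A toy sketch of the SR step on synthetic stand-in data (sklearn's Lasso is used as the sparse solver here, which is an assumption for illustration, not necessarily the authors' optimizer): a new cloud, flattened to a vector after correspondence, is approximated as a sparse combination of training clouds.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_pts, n_train = 3000, 40
train = rng.standard_normal((3 * n_pts, n_train))   # columns = training clouds

# A target that truly is a combination of three training clouds, plus noise
w_true = np.zeros(n_train)
w_true[[3, 17, 28]] = [0.5, 0.3, 0.2]
target = train @ w_true + 0.01 * rng.standard_normal(3 * n_pts)

w = Lasso(alpha=0.03, fit_intercept=False).fit(train, target).coef_
rmse = np.sqrt(np.mean((train @ w - target) ** 2))
print("active training clouds:", np.flatnonzero(w), f"RMSE: {rmse:.3f}")
```

The sparsity penalty selects the few training clouds that actually explain the target, which is what keeps the reconstruction both fast and robust to redundant training data.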
Threat Appeals: The Fear-Persuasion Relationship is Linear and Curvilinear.
Dillard, James Price; Li, Ruobing; Huang, Yan
2017-11-01
Drive theory may be seen as the first scientific theory of health and risk communication. However, its prediction of a curvilinear association between fear and persuasion is generally held to be incorrect. A close rereading of Hovland et al. reveals that within- and between-persons processes were conflated. Using a message that advocated obtaining a screening for colonoscopy, this study (N = 259) tested both forms of the inverted-U hypothesis. In the between-persons data, analyses revealed a linear effect that was consistent with earlier investigations. However, the data showed an inverted-U relationship in within-persons data. Hence, the relationship between fear and persuasion is linear or curvilinear depending on the level of analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar
2010-10-26
Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney drives Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic effect of Zn on the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall patterns of Pb accumulation in various organs as influenced by Zn were the same. This is mainly due to the fact that canonical correlation is a special type of canonical correspondence analysis in which a linear relationship between the two groups of variables is assumed instead of a Gaussian one.
NASA Astrophysics Data System (ADS)
Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis
2010-10-01
Pb pollution from automobile exhaust around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon that is influenced by agonistic and antagonistic interactions with several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by applying both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood, and milk, whereas Zn was measured from all these systems except bone, blood, and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn, and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver, and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood, indicating an antagonistic activity of Zn toward Pb accumulation. Although there were some contradictions between the observations obtained from the two statistical methods, the overall patterns of Pb accumulation in various organs as influenced by Zn were the same. This is mainly because canonical correlation is a special case of canonical correspondence analysis in which a linear, rather than Gaussian, relationship is assumed between the two groups of variables.
NASA Technical Reports Server (NTRS)
Griggs, M.; Ludwig, C. B.; Malkmus, W. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Results relating the radiance over water surfaces to the atmospheric aerosol content have been obtained. They indicate that MSS channels 4, 5, and 6, centered at 0.55, 0.65, and 0.75 microns, have comparable sensitivity, and that the aerosol content can be determined to within ±10% with the assumed measurement errors of the MSS. The fourth channel, MSS 7, is not useful for aerosol determination because the water radiance values from this channel are generally less than the instrument noise. The accuracy of the aerosol content measurement could be increased by using an instrument specifically designed for this purpose. This radiance-aerosol content relationship could provide a basis for monitoring the atmospheric aerosol content globally, allowing a baseline value of aerosols to be established. The contrast-aerosol content investigation shows useful linear relationships in MSS channels 4 and 5, allowing the aerosol content to be determined to within ±10%. MSS 7 is not useful due to the low accuracy of the water radiance, and MSS 6 is found to be too insensitive. These results rely on several assumptions due to the lack of ground truth data, but they do serve to indicate which channels are most useful.
Overman, Allen R.; Scholtz, Richard V.
2011-01-01
The expanded growth model is developed to describe the accumulation of plant biomass (Mg ha−1) and mineral elements (kg ha−1) with calendar time (wk). Accumulation of plant biomass with calendar time occurs as a result of photosynthesis in green land-based plants. A corresponding accumulation of mineral elements such as nitrogen, phosphorus, and potassium occurs from the soil through plant roots. In this analysis, the expanded growth model is tested against high-quality published data on corn (Zea mays L.) growth. Data from a field study in South Carolina were used to evaluate the application of the model; the planting date of April 2 in the field study maximized the capture of solar energy for biomass production. The growth model predicts a simple linear relationship between biomass yield and the growth quantifier, which is confirmed by the data. The growth quantifier incorporates the unit processes of distribution of solar energy, which drives biomass accumulation by photosynthesis, partitioning of biomass between light-gathering and structural components of the plants, and an aging function. A hyperbolic relationship between plant nutrient uptake and biomass yield is assumed and is confirmed for the mineral elements nitrogen (N), phosphorus (P), and potassium (K). It is concluded that the rate-limiting process in the system is biomass accumulation by photosynthesis and that nutrient accumulation occurs in virtual equilibrium with biomass accumulation. PMID:22194842
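The assumed hyperbolic uptake-yield relationship can be sketched with a standard nonlinear least-squares fit; the functional form Nu = Num·Y/(Ky + Y) follows the description above, but the data points and parameter names below are synthetic placeholders, not the South Carolina measurements.

```python
# Fit of a hyperbolic nutrient uptake (Nu) vs. biomass yield (Y) relation.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(Y, Num, Ky):
    # Num: asymptotic uptake (kg/ha); Ky: yield at half-maximal uptake (Mg/ha)
    return Num * Y / (Ky + Y)

Y = np.array([2.0, 5.0, 9.0, 14.0, 18.0, 21.0])          # biomass yield, Mg/ha
Nu = np.array([40.0, 85.0, 120.0, 150.0, 165.0, 172.0])  # N uptake, kg/ha

(Num, Ky), _ = curve_fit(hyperbolic, Y, Nu, p0=(200.0, 10.0))
print(f"Num = {Num:.1f} kg/ha (asymptotic uptake), Ky = {Ky:.1f} Mg/ha")
```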
NASA Astrophysics Data System (ADS)
Mu, G. Y.; Mi, X. Z.; Wang, F.
2018-01-01
High-temperature low-cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high-temperature low-cycle fatigue life prediction model for the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double-logarithmic coordinates, whereas the Manson-Coffin method assumes it is linear; the method is therefore bound to incur some prediction error. To address this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of both titanium alloys can be predicted accurately and effectively by either method, with prediction accuracy within a ±1.83 scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving life predictions with a smaller standard deviation and a narrower scatter band. Both methods predict life better for the TC4 alloy than for the TC11 alloy.
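A minimal sketch of the Manson-Coffin fit contrasts it with an exponential-type alternative. The Manson-Coffin relation Δε_p/2 = ε_f'(2N_f)^c is a straight line in log-log coordinates; the exponential form below is one plausible shape, since the paper does not give its exact equation, and all data values are illustrative, not the TC4/TC11 measurements.

```python
# Manson-Coffin fit (linear in log-log space) vs. an assumed exponential model.
import numpy as np
from scipy.optimize import curve_fit

two_Nf = np.array([2e2, 1e3, 5e3, 2e4, 1e5])            # reversals to failure
eps_p = np.array([8e-3, 4.2e-3, 2.4e-3, 1.5e-3, 9e-4])  # plastic strain amplitude

# Manson-Coffin: linear regression in log-log coordinates
c, log_eps_f = np.polyfit(np.log10(two_Nf), np.log10(eps_p), 1)
print(f"fatigue ductility exponent c = {c:.3f}, eps_f' = {10**log_eps_f:.4f}")

# Hypothetical exponential-type model: eps_p = a*exp(b*log10(2Nf)) + d
def exp_model(x, a, b, d):
    return a * np.exp(b * np.log10(x)) + d

params, _ = curve_fit(exp_model, two_Nf, eps_p, p0=(0.05, -1.0, 0.0), maxfev=10000)
print("exponential-model parameters:", params)
```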
Quantifying Square Membrane Wrinkle Behavior Using MITC Shell Elements
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.
2004-01-01
For future membrane-based structures, quantified predictions of membrane wrinkling behavior in terms of amplitude, angle, and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made while using finite elements. Specifically, this work demonstrates that critical assumptions include: effects of gravity, assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 square meter membrane is treated as a structural material with non-negligible bending stiffness. Mixed Interpolation of Tensorial Components (MITC) shell elements are used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thicknesses in the range of 50 to 1000 microns, which also depend on the presence of an applied gravity field. However, the relationships between overall strain energy density for cases with differing initial conditions are found to be independent of the assumed initial conditions. In addition, analysis results indicate that the relationship between amplitude scale (W/t) and structural scale (L/t) is linear in the presence of a gravity field.
Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.
Shay, Blake; Weber, Robert J
2015-11-01
Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals.
Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders
Shay, Blake; Weber, Robert J.
2015-01-01
Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals. PMID:27621512
A Physics Based Vehicle Terrain Interaction Model for Soft Soil off-Road Vehicle Simulations
2012-01-01
assumed terrain deformation, use of empirical relationships for the deformation, or finite/discrete element approaches for the terrain. A real-time...vertical columns of soil, and the deformation of each is modeled using visco-elasto-plastic compressibility relationships that relate subsoil pressures to...produced by tractive and turning forces will also be incorporated into the model. Both the vertical and horizontal force/displacement relationships
NASA Astrophysics Data System (ADS)
Zhang, Y.; Guanter, L.; Berry, J. A.; Tol, C. V. D.
2016-12-01
Solar-induced chlorophyll fluorescence (SIF) is a novel optical tool for assessing terrestrial photosynthesis (GPP). Recent work has shown a strong link between GPP and satellite retrievals of SIF at broad scales. However, critical gaps remain between short-term, small-scale mechanistic understanding and seasonal global observations. In this presentation, we provide a model-based analysis of the relationship between SIF and GPP across scales for diverse vegetation types and a range of meteorological conditions, with the ultimate focus on reproducing the environmental conditions of remote sensing measurements. The coupled fluorescence-photosynthesis model SCOPE is used to simulate GPP and SIF at both the leaf and canopy levels for 13 flux sites. Analyses were conducted to investigate the effects of temporal scaling, canopy structure, overpass time, and spectral domain on the relationship between SIF and GPP. Simulated SIF is highly non-linearly related to GPP at the leaf level and instantaneous time scale, and the relationship tends to linearize when scaled to the canopy level and to daily or seasonal time scales. These relationships are consistent across a wide range of vegetation types. The relationship between SIF and GPP is primarily driven by absorbed photosynthetically active radiation (APAR), especially at the seasonal scale, although photosynthetic efficiency also contributes to strengthening the link. The linearization of the relationship from leaf to canopy and with averaging over time occurs because the overall conditions of the canopy fall within the range of the linear responses of GPP and SIF to light and photosynthetic capacity. Our results further show that the top-of-canopy relationships between simulated SIF and GPP have similar linearity regardless of whether the morning or midday satellite overpass time is used. These findings are confirmed by field measurements. In addition, simulated red SIF at 685 nm has a similar relationship with GPP at the canopy level as far-red SIF at 740 nm.
Competition Experiments as a Means of Evaluating Linear Free Energy Relationships
ERIC Educational Resources Information Center
Mullins, Richard J.; Vedernikov, Andrei; Viswanathan, Rajesh
2004-01-01
The use of competition experiments as a means of evaluating linear free energy relationships in the undergraduate teaching laboratory is reported. Competition experiments proved to be a reliable method for the construction of Hammett plots with good correlation, providing great flexibility with regard to the compounds and reactions that…
Looking for Connections between Linear and Exponential Functions
ERIC Educational Resources Information Center
Lo, Jane-Jane; Kratky, James L.
2012-01-01
Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…
Attachment and Effortful Control: Relationships With Maladjustment in Early Adolescence
ERIC Educational Resources Information Center
Heylen, Joke; Vasey, Michael W.; Dujardin, Adinda; Vandevivere, Eva; Braet, Caroline; De Raedt, Rudi; Bosmans, Guy
2017-01-01
Based on former research, it can be assumed that attachment relationships provide a context in which children develop both the effortful control (EC) capacity and the repertoire of responses to regulate distress. Both are important to understand children's (mal)adjustment. While the latter assumption has been supported in several studies, less is…
USDA-ARS?s Scientific Manuscript database
Many irrigation scheduling methods utilized in commercial production settings rely on soil water sensors that are normally purchased as off-the-shelf technology or through contracted services that install and monitor readings throughout the season. These systems often assume a direct relationship be...
Peer Helping Relationships in Urban Schools. ERIC Digest.
ERIC Educational Resources Information Center
Webb, Michael
Research has shown that students and teachers can benefit from structured in-school helping relationships in which peers assume formal roles as tutors. For the student in need of academic help, peer tutoring programs provide an opportunity to learn in a more nonthreatening environment than the classroom. Immediate feedback and clarification of…
Social Relationships and Delinquency: Revisiting Parent and Peer Influence during Adolescence
ERIC Educational Resources Information Center
Brauer, Jonathan R.; De Coster, Stacy
2015-01-01
Scholars interested in delinquency have focused much attention on the influence of parent and peer relationships. Prior research has assumed that parents control delinquency because they value convention, whereas peers promote delinquency because they value and model nonconvention. We argue that it is important to assess the normative and…
The Influence of Family Relationships on Creativity in the Workplace
ERIC Educational Resources Information Center
Szopinski, Józef; Szopinski, Tomasz
2013-01-01
The article is rooted in the thesis that good family relationships foster creative behaviour in those responsible for the management of an organization. An underlying assumption of the study is that creativity is vital in any leadership role or managerial position requiring interaction with groups of people. Furthermore, it is assumed that…
Porta, A; Gasperi, C; Nollo, G; Lucini, D; Pizzinelli, P; Antolini, R; Pagani, M
2006-04-01
Global linear analysis has traditionally been performed to verify the relationship between pulse transit time (PTT) and systolic arterial pressure (SAP) at the level of their spontaneous beat-to-beat variabilities: PTT and SAP have been plotted in the (PTT, SAP) plane and a significant linear correlation has been found. However, this relationship is weak and cannot be found in some individuals. This prevents the use of the SAP-PTT relationship to derive arterial pressure changes from PTT measures on an individual basis. We propose a local linear approach to study the SAP-PTT relationship. This approach is based on the definition of short SAP-PTT sequences characterized by SAP increase (decrease) and PTT decrease (increase), and on their search in the SAP and PTT beat-to-beat series. This local approach was applied to PTT and SAP series derived from 13 healthy humans during incremental supine dynamic exercise (at 10, 20, and 30% of the nominal individual maximum effort) and compared to the global approach. While the global approach failed in some subjects, the local analysis allowed the extraction of the gain of the SAP-PTT relationship in all subjects, both at rest and during exercise. When both local and global analyses were successful, the local SAP-PTT gain was more negative than the global one, a likely result of noise reduction.
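The local approach can be sketched as a scan for short sequences with opposite monotonic trends, fitting a line to each; the sequence length, thresholds, and synthetic series below are assumptions for illustration, not the authors' exact procedure.

```python
# Local linear SAP-PTT gain: find short beat-to-beat sequences in which SAP
# rises while PTT falls (or vice versa), fit a line to each, average slopes.
import numpy as np

def local_gains(sap, ptt, seq_len=4):
    gains = []
    for i in range(len(sap) - seq_len + 1):
        s, p = sap[i:i+seq_len], ptt[i:i+seq_len]
        ds, dp = np.diff(s), np.diff(p)
        # opposite monotonic trends: SAP up & PTT down, or SAP down & PTT up
        if (np.all(ds > 0) and np.all(dp < 0)) or (np.all(ds < 0) and np.all(dp > 0)):
            gains.append(np.polyfit(s, p, 1)[0])   # local gain, ms/mmHg
    return np.array(gains)

rng = np.random.default_rng(1)
sap = 120 + np.cumsum(rng.normal(0, 1.5, 300))          # synthetic SAP (mmHg)
ptt = 250 - 0.8 * (sap - 120) + rng.normal(0, 0.5, 300) # PTT inversely related (ms)
g = local_gains(sap, ptt)
print(f"{g.size} local sequences found, mean gain = {g.mean():.2f} ms/mmHg")
```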
Laxy, Michael; Stark, Renée; Peters, Annette; Hauner, Hans; Holle, Rolf; Teuner, Christina M
2017-08-30
This study aims to analyse the non-linear relationship between Body Mass Index (BMI) and direct health care costs, and to quantify the resulting cost fraction attributable to obesity in Germany. Five cross-sectional surveys of cohort studies in southern Germany were pooled, resulting in data on 6757 individuals (aged 31-96 years). Self-reported information on health care utilisation was used to estimate direct health care costs for the year 2011. The relationship between measured BMI and annual costs was analysed using generalised additive models, and the cost fraction attributable to obesity was calculated. We found a non-linear association between BMI and health care costs, with a continuously increasing slope for increasing BMI and no clear threshold. Taking the non-linear BMI-cost relationship into account, a shift in the BMI distribution such that the BMI of each individual is lowered by one point is associated with a 2.1% reduction in mean direct costs in the population. If obesity were eliminated, and the BMI of all obese individuals lowered to 29.9 kg/m², mean direct costs would be reduced by 4.0% in the population. The results show a non-linear relationship between BMI and health care costs, with very high costs for a few individuals with high BMI. This indicates that population-based interventions in combination with selective measures for very obese individuals might be the preferred strategy.
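The shape of such a smooth, non-linear BMI-cost curve and the effect of a one-point BMI shift can be sketched as below. The paper uses generalised additive models; here LOWESS is used as a simple stand-in smoother, and the cost-generating process is simulated, so all numbers are placeholders.

```python
# Smooth BMI-cost curve estimation and a one-point BMI-shift scenario.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
bmi = rng.uniform(18, 45, 2000)
costs = 800 + 12 * np.maximum(bmi - 25, 0) ** 2 + rng.gamma(2.0, 300.0, bmi.size)

smooth = lowess(costs, bmi, frac=0.3)        # columns: sorted BMI, fitted cost
# mean cost change if every individual's BMI were lowered by one point
shifted = np.interp(bmi - 1, smooth[:, 0], smooth[:, 1])
base = np.interp(bmi, smooth[:, 0], smooth[:, 1])
print(f"cost reduction from a one-point BMI shift: "
      f"{100 * (1 - shifted.mean() / base.mean()):.1f}%")
```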
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachowicz, K., E-mail: keith.wachowicz@albertaheal
2016-08-15
Purpose: This work examines the subject of contrast-to-noise ratio (CNR), specifically between tumor and tissue background, and its dependence on the MRI field strength, B0. This examination is motivated by the recent interest and developments in MRI/radiotherapy hybrids, where real-time imaging can be used to guide treatment beams. The ability to distinguish a tumor from background tissue is of primary importance in this field, and this work seeks to elucidate the complex relationship between CNR and B0 that is too often assumed to be purely linear. Methods: Experimentally based models of B0-dependent relaxation for various tumor and normal tissues from the literature were used in conjunction with signal equations for MR sequences suitable for rapid real-time imaging to develop field-dependent predictions of CNR. These CNR models were developed for liver, lung, breast, glioma, and kidney tumors for spoiled gradient-echo, balanced steady-state free precession (bSSFP), and single-shot half-Fourier fast spin echo sequences. Results: Due to the pattern in which the relaxation properties of tissues vary with B0 field (specifically the T1 time), CNR was always improved at lower fields relative to a linear dependency. Further, for some tumor sites, the CNR at lower fields was comparable to, or sometimes higher than, that at higher fields (i.e., bSSFP CNR for glioma, kidney, and liver tumors). Conclusions: In terms of CNR, lower B0 fields have been shown to perform as well as or better than higher fields for some tumor sites due to superior T1 contrast. In other sites this effect was less pronounced, reversing the CNR advantage. This complex relationship between CNR and B0 reveals both low and high magnetic fields as viable options for tumor tracking in MRI/radiotherapy hybrids.
Bishai, David; Opuni, Marjorie
2009-01-01
Background: Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods: We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox transformed models. Results: Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is best modelled as neither a logarithmic nor a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion: The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
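The λ-selection step can be sketched with SciPy's Box-Cox utilities. This is a simplified marginal version (the paper embeds the transform in a time-trend regression), and the IMR series below is synthetic, so the numbers are placeholders.

```python
# Estimate the best-fitting Box-Cox lambda by maximum likelihood and test it
# against the logarithmic (lambda=0) and linear (lambda=1) special cases.
import numpy as np
from scipy import stats

years = np.arange(1900, 2000)
imr = 150 * np.exp(-0.03 * (years - 1900)) \
      + np.random.default_rng(3).normal(0, 1.5, years.size)

transformed, lam = stats.boxcox(imr)          # ML estimate of lambda
for lam0, label in [(0.0, "logarithmic"), (1.0, "linear")]:
    lr = 2 * (stats.boxcox_llf(lam, imr) - stats.boxcox_llf(lam0, imr))
    p = stats.chi2.sf(lr, df=1)               # likelihood-ratio test, 1 df
    print(f"H0: lambda={lam0} ({label}): LR={lr:.1f}, p={p:.3g}")
print(f"best-fitting lambda = {lam:.2f}")
```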
NASA Astrophysics Data System (ADS)
Campos Braga, Ramon; Rosenfeld, Daniel; Weigel, Ralf; Jurkat, Tina; Andreae, Meinrat O.; Wendisch, Manfred; Pöschl, Ulrich; Voigt, Christiane; Mahnke, Christoph; Borrmann, Stephan; Albrecht, Rachel I.; Molleker, Sergej; Vila, Daniel A.; Machado, Luiz A. T.; Grulich, Lucas
2017-12-01
We have investigated how aerosols affect the height above cloud base of rain and ice hydrometeor initiation and the subsequent vertical evolution of cloud droplet size and number concentrations in growing convective cumulus. For this purpose we used in situ data of hydrometeor size distributions measured with instruments mounted on the HALO aircraft during the ACRIDICON-CHUVA campaign over the Amazon during September 2014. The results show that the height of rain initiation by collision and coalescence processes (Dr, in meters above cloud base) is linearly correlated with the number concentration of droplets (Nd in cm⁻³) nucleated at cloud base (Dr ≈ 5 · Nd). Additional cloud processes associated with Dr, such as GCCN and mixing with ambient air, produce deviations of ~21% around the linear relationship, but they do not mask the clear relationship between Dr and Nd, which has also been found in other regions around the globe (e.g., Israel and India). When Nd exceeded values of about 1000 cm⁻³, Dr became greater than 5000 m, and the first observed precipitation particles were ice hydrometeors; therefore, no liquid water raindrops were observed within growing convective cumulus under polluted conditions. Furthermore, the formation of ice particles also took place at higher altitudes in the clouds under polluted conditions, because the resulting smaller cloud droplets froze at colder temperatures than the larger drops in the unpolluted cases. The measured vertical profiles of droplet effective radius (re) were close to those estimated by assuming adiabatic conditions (rea), supporting the hypothesis that entrainment and mixing of air into convective clouds is nearly inhomogeneous. Additional CCN activation on aerosol particles from biomass burning and air pollution reduced re below rea, which further inhibited the formation of raindrops and ice particles and resulted in even higher altitudes for rain and ice initiation.
Saito, Masatoshi
2015-07-01
For accurate tissue inhomogeneity correction in radiotherapy treatment planning, the author previously proposed a simple conversion of the energy-subtracted computed tomography (CT) number to electron density (ΔHU-ρe conversion), which provides a single linear relationship between ΔHU and ρe over a wide ρe range. The purpose of the present study was to reveal the relation between the ΔHU image for ρe calibration and a virtual monochromatic CT image by performing numerical analyses based on basis material decomposition in dual-energy CT. The author determined the weighting factor, α0, of the ΔHU-ρe conversion through numerical analyses of the International Commission on Radiation Units and Measurements Report 46 human body tissues, using their attenuation coefficients and given ρe values. Another weighting factor, α(E), for synthesizing a virtual monochromatic CT image from high- and low-kV CT images, was also calculated in the energy range 0.03 < E < 5 MeV, assuming that cortical bone and water were the basis materials. The mass attenuation coefficients for these materials were obtained using the XCOM photon cross sections database. The effective x-ray energies used to calculate the attenuation were chosen to imitate a dual-source CT scanner operated at 80-140 and 100-140 kV/Sn. The determined α0 values were 0.455 for 80-140 kV/Sn and 0.743 for 100-140 kV/Sn. These values coincided almost perfectly with the respective maxima of the calculated α(E) curves, located at approximately 1 MeV, where the photon-matter interaction in human body tissues is exclusively incoherent (Compton) scattering. The ΔHU image can therefore be regarded substantially as a CT image acquired with monoenergetic 1-MeV photons, which provides a linear relationship between CT numbers and electron densities.
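A minimal sketch of the conversion chain follows: a weighted subtraction of low- and high-kV CT numbers, then a single linear calibration to electron density. The weighting factor 0.455 is the paper's value for the 80-140 kV/Sn pair, but the subtraction form and the calibration slope/intercept below are assumptions for illustration, not the paper's exact equations.

```python
# Energy-subtracted CT number (ΔHU) to relative electron density sketch.
import numpy as np

def delta_hu(hu_high, hu_low, alpha0):
    # one common form of the weighted subtraction of dual-energy CT numbers
    return (1.0 + alpha0) * hu_high - alpha0 * hu_low

def electron_density(dhu, a=1.0, b=1.0):
    # linear ΔHU-rho_e conversion; a and b would come from phantom calibration
    return a * dhu / 1000.0 + b

hu_low = np.array([-80.0, 40.0, 900.0])    # 80 kV CT numbers (illustrative)
hu_high = np.array([-60.0, 35.0, 700.0])   # 140 kV/Sn CT numbers (illustrative)
rho_e = electron_density(delta_hu(hu_high, hu_low, alpha0=0.455))
print("relative electron densities:", np.round(rho_e, 3))
```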
Particle orbits in two-dimensional equilibrium models for the magnetotail
NASA Technical Reports Server (NTRS)
Karimabadi, H.; Pritchett, P. L.; Coroniti, F. V.
1990-01-01
Assuming that there exists an equilibrium state for the magnetotail, particle orbits are investigated in two-dimensional kinetic equilibrium models of the magnetotail. Particle orbits in the equilibrium field are compared with those calculated earlier with one-dimensional models, in which the main component of the magnetic field (Bx) was approximated as either a hyperbolic tangent or a linear function of z, with the normal field (Bz) assumed constant. It was found that the particle orbits calculated with the two types of models differ significantly, mainly due to the neglect of the variation of Bx with x in the one-dimensional fields.
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation of post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and approximable by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher Order Statistics.
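A simplified sketch of kurtosis-driven separation for the linear mixing stage only follows (the paper additionally estimates polynomial post-nonlinearities, which this sketch omits). After whitening, each unmixing vector is updated with the classic kurtosis fixed-point rule and deflation; the mixing matrix and sources are invented for illustration.

```python
# Kurtosis-based blind source separation sketch (linear stage, deflation).
import numpy as np

rng = np.random.default_rng(4)
n = 20000
S = np.vstack([np.sign(rng.normal(size=n)),          # sub-Gaussian source
               rng.laplace(size=n)])                 # super-Gaussian source
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S           # unknown linear mixture

# whiten the observations
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E / np.sqrt(d)).T @ Xc                          # whitened data

W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2); w /= np.linalg.norm(w)
    for _ in range(100):
        w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w   # kurtosis fixed point
        w_new -= W[:i].T @ (W[:i] @ w_new)                # deflate against found rows
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-9:
            break
        w = w_new
    W[i] = w_new

recovered = W @ Z
print("kurtosis of recovered sources:",
      np.round((recovered ** 4).mean(axis=1) - 3, 2))
```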
Identification and control of structures in space
NASA Technical Reports Server (NTRS)
Meirovitch, L.; Quinn, R. D.; Norris, M. A.
1984-01-01
The derivation of the equations of motion for the Spacecraft Control Laboratory Experiment (SCOLE) is reported, and the equations of motion of a similar structure orbiting the Earth are also derived. The structure is assumed to undergo large rigid-body maneuvers and small elastic deformations. A perturbation approach is proposed whereby the quantities defining the rigid-body maneuver are assumed to be relatively large, with the elastic deformations and deviations from the rigid-body maneuver being relatively small. The perturbation equations have the form of linear equations with time-dependent coefficients. An active control technique can then be formulated to permit maneuvering of the spacecraft while simultaneously suppressing the elastic vibrations.
Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.G.
2011-07-01
A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past remains valid in the case of a singular covariance matrix of prior parameters, and it provides a unique solution. Statements in the literature to the effect that the problem is ill-posed are wrong, and no regularization of the problem is required. This has been proved by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. No modification is needed to the adjustment formulae that have been used in the past. (author)
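The point can be illustrated numerically: the standard adjustment update x = x0 + CGᵀ(GCGᵀ + V)⁻¹(y − Gx0) never inverts the prior covariance C, so it goes through unchanged when C is singular. The matrices below are invented placeholders.

```python
# Least-squares adjustment with a rank-deficient prior covariance matrix.
import numpy as np

x0 = np.array([1.0, 2.0, 3.0])                 # prior parameters
C = np.array([[1.0, 1.0, 0.0],                 # singular prior covariance
              [1.0, 1.0, 0.0],                 # (rank 2: first two fully correlated)
              [0.0, 0.0, 0.5]])
G = np.array([[1.0, 0.0, 1.0],                 # sensitivity of responses
              [0.0, 1.0, 1.0]])
V = 0.1 * np.eye(2)                            # response covariance
y = np.array([4.5, 5.2])                       # measured responses

K = C @ G.T @ np.linalg.inv(G @ C @ G.T + V)   # gain; only (GCG'+V) is inverted
x = x0 + K @ (y - G @ x0)
C_post = C - K @ G @ C                         # updated parameter covariance
print("adjusted parameters:", np.round(x, 3))
print("posterior covariance rank:", np.linalg.matrix_rank(C_post))
```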
A chaotic view of behavior change: a quantum leap for health promotion.
Resnicow, Ken; Vaughan, Roger
2006-09-12
The study of health behavior change, including nutrition and physical activity behaviors, has been rooted in a cognitive-rational paradigm. Change is conceptualized as a linear, deterministic process in which individuals weigh pros and cons, and change occurs at the point at which the benefits outweigh the costs. Consistent with this paradigm, the associated statistical models have almost exclusively assumed a linear relationship between psychosocial predictors and behavior. Such a perspective, however, fails to account for non-linear, quantum influences on human thought and action. Consider why, after years of false starts and failed attempts, a person succeeds at increasing their physical activity, eating more healthily, or losing weight; or why, after years of success, a person relapses. This paper discusses a competing view of health behavior change that was presented at the 2006 annual ISBNPA meeting in Boston. Rather than viewing behavior change from a linear perspective, it can be viewed as a quantum event that can be understood through the lens of Chaos Theory and Complex Dynamic Systems. Key principles of Chaos Theory and Complex Dynamic Systems relevant to understanding health behavior change include: 1) chaotic systems can be mathematically modeled but are nearly impossible to predict; 2) chaotic systems are sensitive to initial conditions; 3) complex systems involve multiple component parts that interact in a nonlinear fashion; and 4) the results of complex systems are often greater than the sum of their parts. Accordingly, small changes in knowledge, attitude, efficacy, etc., may dramatically alter motivation and behavioral outcomes, and the interaction of such variables can yield almost infinite potential patterns of motivation and behavior change. In the linear paradigm, unaccounted-for variance is generally relegated to the catch-all "error" term, when in fact such "error" may represent the chaotic component of the process. The linear and chaotic paradigms are, however, not mutually exclusive, as behavior change may include both chaotic and cognitive processes. Studies of addiction suggest that many decisions to change are quantum rather than planned events; motivation arrives as opposed to being planned. Moreover, changes made through quantum processes appear more enduring than those that involve more rational, planned processes. How such processes may apply to nutrition and physical activity behavior and related interventions merits examination.
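Principle (2) above, sensitivity to initial conditions, has a standard toy illustration: two logistic-map trajectories starting a billionth apart diverge completely within a few dozen iterations. This is a generic chaos demo, not a behavioral model.

```python
# Logistic-map demonstration of sensitive dependence on initial conditions.
r = 3.99                       # parameter in the chaotic regime
x, y = 0.300000000, 0.300000001
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 20 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
```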
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.
2003-01-01
An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.
Sorting by Cuts, Joins, and Whole Chromosome Duplications.
Zeira, Ron; Shamir, Ron
2017-02-01
Genome rearrangement problems have been extensively studied due to their importance in biology. Most studied models assume a single copy per gene. However, in reality, duplicated genes are common, most notably in cancer. In this study, we take a step toward handling duplicated genes by considering a model that allows the atomic operations of cut, join, and whole chromosome duplication. Given two linear genomes, S with one copy per gene and T with two copies per gene, we give a linear time algorithm for computing a shortest sequence of operations transforming S into T such that all intermediate genomes are linear. We also show that computing an optimal sequence with the fewest duplications is NP-hard.
NASA Astrophysics Data System (ADS)
Fatiha, M.; Rahmat, A.; Solihat, R.
2017-09-01
Concepts in biology are often presented through diagrams, which help students understand the material. One way of probing students' understanding of a diagram is through the causal relationships they construct, expressed in the form of a propositional network representation. This study describes the patterns of propositional network representation that students produce when confronted with a conventional diagram. This descriptive study involved 32 students at a senior high school in Bandung. The data were acquired through a worksheet built around the diagram and developed according to information-processing standards. The results revealed three patterns of propositional network representation: linear relationships, simple reciprocal relationships, and complex reciprocal relationships. The dominant pattern was the linear form, in which information components in the diagram are simply connected, produced by 59.4% of students; 28.1% of students produced reciprocal relationships of medium complexity, and only 3.1% produced complex reciprocal relationships, while the remaining 9.4% failed to connect the information components. Based on these results, most students were only able to connect the information components of a conventional diagram in linear form, and few constructed reciprocal relationships between them.
NASA Astrophysics Data System (ADS)
Aguirre, E. E.; Karchewski, B.
2017-12-01
DC resistivity surveying is a geophysical method that quantifies the electrical properties of the subsurface of the earth by applying a source current between two electrodes and measuring potential differences between electrodes at known distances from the source. Analytical solutions for a homogeneous half-space and simple subsurface models are well known, and the former is used to define the concept of apparent resistivity. However, in situ properties are heterogeneous, meaning that simple analytical models are only an approximation, and ignoring such heterogeneity can lead to misinterpretation of survey results, costing time and money. The present study examines the extent to which random variations in electrical properties (i.e., electrical conductivity) affect potential difference readings, and therefore apparent resistivities, relative to an assumed homogeneous subsurface model. We simulate the DC resistivity survey using a finite difference (FD) approximation of an appropriate simplification of Maxwell's equations, implemented in Matlab. Electrical resistivity values at each node in the simulation were defined as random variables with a given mean and variance and were assumed to follow a log-normal distribution. The Monte Carlo analysis for a given variance of electrical resistivity was performed until the mean and variance of the potential difference measured at the surface converged. Finally, we used the simulation results to examine the relationship between variance in resistivity and variation in surface potential difference (or apparent resistivity) relative to a homogeneous half-space model. For relatively low values of standard deviation in the material properties (<10% of the mean), we observed a linear correlation between the variance of resistivity and the variance in apparent resistivity.
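The Monte Carlo loop can be sketched as below. The finite-difference forward solver is replaced by a trivial stand-in (an average over the field), since the full FD solution of the potential equation is beyond this snippet; the field size, heterogeneity level, and convergence tolerance are assumptions.

```python
# Monte Carlo over log-normal resistivity fields until the output variance
# stabilizes; apparent_resistivity() is a stand-in for the FD forward model.
import numpy as np

def apparent_resistivity(field):
    return field.mean()                        # placeholder forward model

rng = np.random.default_rng(5)
mean_rho, rel_std = 100.0, 0.10                # 10% heterogeneity
mu = np.log(mean_rho) - 0.5 * np.log(1 + rel_std**2)   # log-normal parameters
sigma = np.sqrt(np.log(1 + rel_std**2))

estimates, batch, prev_var = [], 500, None
while True:                                    # iterate until variance converges
    fields = rng.lognormal(mu, sigma, size=(batch, 20, 20))
    estimates.extend(apparent_resistivity(f) for f in fields)
    var = np.var(estimates)
    if prev_var is not None and abs(var - prev_var) < 1e-2 * var:
        break
    prev_var = var

print(f"{len(estimates)} realizations: mean = {np.mean(estimates):.2f}, "
      f"std = {np.std(estimates):.3f}")
```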
The atmospheric lifetime of black carbon
NASA Astrophysics Data System (ADS)
Cape, J. N.; Coyle, M.; Dumitrean, P.
2012-11-01
Black carbon (BC) in the atmosphere contributes to the human health effects of particulate matter and to radiative forcing of climate. The lifetime of BC, particularly for the smaller particle sizes (PM2.5) which can be transported over long distances, is therefore an important factor in determining the range of such effects and the spatial footprint of emission controls. Theory and models suggest that the typical lifetime of BC is around one week. The frequency distributions of measurements of a range of hydrocarbons at a remote rural site in southern Scotland (Auchencorth Moss) between 2007 and 2010 have been used to quantify the relationship between atmospheric lifetime and the geometric standard deviation of observed concentration. The analysis relies on an assumed common major emission source for hydrocarbons and BC, namely diesel-engined vehicles. The logarithm of the standard deviation of the log-transformed concentration data is linearly related to hydrocarbon lifetime, and the same statistic for BC can be used to assess the lifetime of BC relative to the hydrocarbons. Annual average data show BC lifetimes in the range of 4-12 days, for an assumed OH concentration of 7 × 10⁵ cm⁻³. At this site there is little difference in BC lifetime between winter and summer, despite a 3-fold difference in relative hydrocarbon lifetimes. This observation confirms the role of wet deposition as an important removal process for BC, as there is no difference in precipitation between winter and summer at this site. BC lifetime was significantly greater in 2010, which had 23% less rainfall than the preceding three years.
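The lifetime estimate can be sketched as a calibration-and-inversion step: for each tracer, compute the standard deviation of the log-transformed concentrations; its logarithm is taken to vary linearly with lifetime across hydrocarbons sharing the traffic source, and the fitted line is inverted at the BC value. All numbers below are made-up placeholders, not the Auchencorth Moss data.

```python
# Infer BC lifetime from the variability statistic of co-emitted hydrocarbons.
import numpy as np

lifetime = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # hydrocarbon lifetimes, days
ln_sigma = np.array([0.55, 0.47, 0.38, 0.22, -0.05]) # ln(std of ln-concentration)

slope, intercept = np.polyfit(lifetime, ln_sigma, 1) # calibration line
ln_sigma_bc = 0.10                                   # observed value for BC
bc_lifetime = (ln_sigma_bc - intercept) / slope      # invert the line
print(f"inferred BC lifetime ≈ {bc_lifetime:.1f} days")
```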
Tang, Hong; Ruan, Chengjie; Qiu, Tianshuang; Park, Yongwan; Xiao, Shouzhong
2013-08-01
The relationship between the amplitude of the first heart sound (S1) and the rising rate of left ventricular pressure (LVP) reported in previous studies has not been consistent: some researchers believed the relationship was positively linear, while others stated it was only positively correlated. To further investigate this relationship, this study simultaneously sampled the external phonocardiogram, electrocardiogram, and intracardiac pressure in the left ventricle in three anesthetized dogs, while invoking wide hemodynamic changes using various doses of epinephrine. The relationship between the maximum amplitude of S1 and the maximum rising rate of LVP, and the relationship between the amplitude of dominant peaks/valleys and the corresponding rising rate of LVP, were examined using linear, quadratic, cubic, and exponential models. The results showed that the relationships are best fit by nonlinear exponential models.
Determination of stresses in RC eccentrically compressed members using optimization methods
NASA Astrophysics Data System (ADS)
Lechman, Marek; Stachurski, Andrzej
2018-01-01
The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in rectangular cross-sections are derived by integrating the equilibrium equations of the cross-sections, taking into account the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For the reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the derived set of equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments showed the existence of many points satisfying the equations to very good accuracy; therefore, some techniques from global optimization were included: starting fmincon from many points and clustering the solutions. The model is verified on a set of data encountered in engineering practice.
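The multistart-plus-clustering strategy can be sketched with SciPy's bounded minimizer as an analogue of Matlab's fmincon. The objective below is a toy stand-in for the cross-section equilibrium residuals, and the rounding-based clustering is a crude assumption standing in for a proper clusterization step.

```python
# Multistart bounded minimization with crude clustering of the solutions.
import numpy as np
from scipy.optimize import minimize

def residuals(z):                      # toy multimodal objective on [0, 1]^2
    x, y = z
    return np.sin(5 * x) ** 2 + (y - 0.3) ** 2 + 0.1 * x

bounds = [(0.0, 1.0), (0.0, 1.0)]
rng = np.random.default_rng(6)
solutions = []
for _ in range(50):                    # multistart loop
    x0 = rng.uniform(0, 1, size=2)
    res = minimize(residuals, x0, method="L-BFGS-B", bounds=bounds)
    if res.success:
        solutions.append(np.round(res.x, 3))

clusters = np.unique(np.array(solutions), axis=0)   # group coincident optima
print(f"{len(clusters)} distinct solution clusters found:")
print(clusters)
```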
Dependence of tropical cyclone development on coriolis parameter: A theoretical model
NASA Astrophysics Data System (ADS)
Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda
2018-03-01
A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.
Functional mixture regression.
Yao, Fang; Fu, Yuejiao; Lee, Thomas C M
2011-04-01
In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach named as functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR along with simulation experiments illustrating its empirical properties are presented in the supplementary material available at Biostatistics online. Corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.
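The FMR idea can be sketched in two steps: project the functional predictor onto a few principal component scores, then fit a mixture of linear regressions by EM. The FPCA step is abbreviated to a plain SVD of densely observed curves, and the two-component setup and all data are simulated assumptions, so details differ from the paper's implementation.

```python
# Functional mixture regression sketch: PCA scores + EM for a 2-component
# mixture of Gaussian linear regressions.
import numpy as np

rng = np.random.default_rng(7)
n, p = 400, 50
curves = rng.normal(size=(n, p)) @ np.linalg.qr(rng.normal(size=(p, p)))[0]
# "FPCA": first two principal component scores of the centered curves
U, s, Vt = np.linalg.svd(curves - curves.mean(0), full_matrices=False)
X = np.column_stack([np.ones(n), U[:, :2] * s[:2]])

group = rng.integers(0, 2, n)                       # latent group labels
beta_true = np.array([[1.0, 2.0, -1.0], [-1.0, -2.0, 1.0]])
y = np.einsum("ij,ij->i", X, beta_true[group]) + 0.3 * rng.normal(size=n)

pi = np.full(2, 0.5); beta = rng.normal(size=(2, 3)); sig2 = np.ones(2)
for _ in range(200):                                # EM iterations
    resid = y[:, None] - X @ beta.T                 # E-step: responsibilities
    dens = pi * np.exp(-0.5 * resid**2 / sig2) / np.sqrt(2 * np.pi * sig2)
    r = dens / dens.sum(1, keepdims=True)
    for j in range(2):                              # M-step: weighted least squares
        w = r[:, j]
        beta[j] = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ y)
        sig2[j] = (w * (y - X @ beta[j])**2).sum() / w.sum()
    pi = r.mean(0)

print("mixing proportions:", np.round(pi, 2))
print("component coefficients:\n", np.round(beta, 2))
```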
Post-seismic relaxation theory on laterally heterogeneous viscoelastic model
Pollitz, F.F.
2003-01-01
Investigation was carried out into the problem of relaxation of a laterally heterogeneous viscoelastic Earth following an impulsive moment release event. The formal solution utilizes a semi-analytic solution for post-seismic deformation on a laterally homogeneous Earth constructed from viscoelastic normal modes, followed by application of mode coupling theory to derive the response on the aspherical Earth. The solution is constructed in the Laplace transform domain using the correspondence principle and is valid for any linear constitutive relationship between stress and strain. The specific implementation described in this paper is a semi-analytic discretization method which assumes isotropic elastic structure and a Maxwell constitutive relation. It accounts for viscoelastic-gravitational coupling under lateral variations in elastic parameters and viscosity. For a given viscoelastic structure and minimum wavelength scale, the computational effort involved with the numerical algorithm is proportional to the volume of the laterally heterogeneous region. Examples are presented of the calculation of post-seismic relaxation with a shallow, laterally heterogeneous volume following synthetic impulsive seismic events, and they illustrate the potentially large effect of regional 3-D heterogeneities on regional deformation patterns.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can potentially be created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featuring a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance, as assessed by concordance index, and in identifying the preset influential variables in high-dimensional data. The prediction performance of ML-based methods is also less sensitive to data size and censoring rates than that of the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
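The comparison can be sketched by simulating time-to-event data with a nonlinear hazard predictor and scoring Cox regression against a random survival forest by concordance index. This assumes the scikit-survival package; the simulation design, sample sizes, and censoring mechanism are invented, not the paper's protocol.

```python
# Cox model vs. random survival forest on simulated nonlinear-hazard data.
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.util import Surv
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(8)
n = 600
X = rng.normal(size=(n, 5))
risk = 0.8 * X[:, 0] ** 2 + np.where(X[:, 1] > 0, 1.0, 0.0)  # nonlinear hazard
t = rng.exponential(np.exp(-risk))                           # event times
c = rng.exponential(np.exp(-risk).mean(), size=n)            # censoring times
time, event = np.minimum(t, c), t <= c
y = Surv.from_arrays(event=event, time=time)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in [CoxPHSurvivalAnalysis(),
              RandomSurvivalForest(n_estimators=200, random_state=0)]:
    model.fit(X_tr, y_tr)
    cidx = concordance_index_censored(y_te["event"], y_te["time"],
                                      model.predict(X_te))[0]
    print(f"{type(model).__name__}: c-index = {cidx:.3f}")
```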
Martian crater counts on Elysium Mons
NASA Technical Reports Server (NTRS)
Mcbride, Kathleen; Barlow, Nadine G.
1990-01-01
Without returned samples from the Martian surface, relative age chronologies and stratigraphic relationships provide the best information for determining the ages of geomorphic features and surface regions. Crater-size frequency distributions of six recently mapped geological units of Elysium Mons were measured to establish their relative ages. Most of the craters on Elysium Mons and the adjacent plains units are between 500 and 1000 meters in diameter. However, only craters 1 km in diameter or larger were used because of inadequate spatial resolution of some of the Viking images and to reduce probability of counting secondary craters. The six geologic units include all of the Elysium Mons construct and a portion of the plains units west of the volcano. The surface area of the units studied is approximately 128,000 sq km. Four of the geologic units were used to create crater distribution curves. There are no craters larger than 1 km within the Elysium Mons caldera. Craters that lacked raised rims, were irregularly shaped, or were arranged in a linear pattern were assumed to be endogenic in origin and not counted. A crater frequency distribution analysis is presented.
Tree structure and cavity microclimate: implications for bats and birds.
Clement, Matthew J; Castleberry, Steven B
2013-05-01
It is widely assumed that tree cavity structure and microclimate affect cavity selection and use in cavity-dwelling bats and birds. Despite the interest in tree structure and microclimate, the relationship between the two has rarely been quantified. Currently available data often come from artificial structures that may not accurately represent conditions in natural cavities. We collected data on tree cavity structure and microclimate from 45 trees in five cypress-gum swamps in the Coastal Plain of Georgia in the United States in 2008. We used hierarchical linear models to predict cavity microclimate from tree structure and ambient temperature and humidity, and used Akaike's information criterion to select the most parsimonious models. We found large differences in microclimate among trees, but tree structure variables explained <28% of the variation, while ambient conditions explained >80% of the variation common to all trees. We argue that the determinants of microclimate are complex and multidimensional, and therefore cavity microclimate cannot be deduced easily from simple tree structures. Furthermore, we found that daily fluctuations in ambient conditions strongly affect microclimate, indicating that greater weather fluctuations will cause greater differences among tree cavities.
García Einschlag, Fernando S; Carlos, Luciano; Capparelli, Alberto L
2003-10-01
The rate constants for hydroxyl radical reaction toward a set of nitroaromatic substrates, kS, have been measured at 25 °C using competition experiments in the UV/H2O2 process. For a given pair of substrates S1 and S2, the relative reactivity β (defined as kS1/kS2) was calculated from the slope of the corresponding double logarithmic plot, i.e., of ln[S1] vs. ln[S2]. This method is more accurate and remains linear for larger conversions than plots of ln[S1] and ln[S2] against time. The measured rate constants ranged from 0.33 × 10⁹ to 8.6 × 10⁹ M⁻¹ s⁻¹. A quantitative structure-reactivity relationship was found using the Hammett equation. Assuming σ values to be additive, a value of -0.60 was obtained for the reaction constant ρ. This value agrees with the high reactivity and electrophilic nature of the HO* radical.
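The competition method works because when two substrates decay under the same (unknown, fluctuating) HO* exposure, ln[S1] vs. ln[S2] is a straight line with slope kS1/kS2, independent of the radical concentration profile. The simulated first-order decays below illustrate this; the rate constants and exposure profile are invented.

```python
# Competition kinetics: slope of ln[S1] vs. ln[S2] recovers kS1/kS2.
import numpy as np

rng = np.random.default_rng(9)
k1, k2 = 5.0e9, 2.0e9                      # rate constants, M^-1 s^-1
# arbitrary fluctuating HO* exposure; the ratio method does not depend on it
oh_exposure = np.cumsum(rng.uniform(0, 2e-12, 200))   # integral of [HO*] dt
s1 = 1e-4 * np.exp(-k1 * oh_exposure)      # first-order decay of substrate 1
s2 = 1e-4 * np.exp(-k2 * oh_exposure)      # first-order decay of substrate 2

beta = np.polyfit(np.log(s2), np.log(s1), 1)[0]
print(f"slope = {beta:.3f}  (expected kS1/kS2 = {k1/k2:.3f})")
```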
Perceptions of sexual partner safety.
Masaro, C L; Dahinten, V S; Johnson, J; Ogilvie, G; Patrick, D M
2008-06-01
Many individuals select sexual partners based on assumed partner STI/HIV safety, yet few studies have investigated how these assumptions are formed. The objective of this research was to determine the extent to which partner safety beliefs were used to evaluate partner safety, and whether these beliefs influenced perceptions of personal STI/HIV risk. Participants (n = 317) recruited from an STI clinic completed a structured self-report questionnaire. A Partner Safety Beliefs Scale (PSBS) was developed to determine the factors that most influenced perceived partner safety. Exploratory factor analysis showed that a single factor accounted for 46% of the variance in the PSBS, with an internal consistency of 0.92. Linear regression was used to determine factors predictive of perceived personal STI/HIV risk. Participants endorsed statements indicating that knowing or trusting a sexual partner influences their beliefs about their partner's safety. Linear regression analysis indicated that education, income, number of sexual partners, and PSBS scores were significant predictors of perceived personal STI/HIV risk. The results of this study indicate that many individuals rely on partner attributes and relationship characteristics when assessing the STI/HIV status of a sexual partner, and that this reliance is associated with a decreased perception of personal STI/HIV risk. Prevention campaigns need to acknowledge that people are likely to evaluate sexual partners whom they know and trust as safe. Dispelling erroneous beliefs about the ability to select safe partners is needed to promote safer sexual behavior.
Spatial patterns of agricultural expansion determine impacts on biodiversity and carbon storage.
Chaplin-Kramer, Rebecca; Sharp, Richard P; Mandle, Lisa; Sim, Sarah; Johnson, Justin; Butnar, Isabela; Milà I Canals, Llorenç; Eichelberger, Bradley A; Ramler, Ivan; Mueller, Carina; McLachlan, Nikolaus; Yousefi, Anahita; King, Henry; Kareiva, Peter M
2015-06-16
The agricultural expansion and intensification required to meet growing food and agri-based product demand present important challenges to future levels and management of biodiversity and ecosystem services. Influential actors such as corporations, governments, and multilateral organizations have made commitments to meeting future agricultural demand sustainably and preserving critical ecosystems. Current approaches to predicting the impacts of agricultural expansion involve calculation of total land conversion and assessment of the impacts on biodiversity or ecosystem services on a per-area basis, generally assuming a linear relationship between impact and land area. However, the impacts of continuing land development are often not linear and can vary considerably with spatial configuration. We demonstrate what could be gained by spatially explicit analysis of agricultural expansion at a large scale compared with the simple measure of total area converted, with a focus on the impacts on biodiversity and carbon storage. Using simple modeling approaches for two regions of Brazil, we find that for the same amount of land conversion, the declines in biodiversity and carbon storage can vary two- to fourfold depending on the spatial pattern of conversion. Impacts increase most rapidly in the earliest stages of agricultural expansion and are more pronounced in scenarios where conversion occurs in forest interiors compared with expansion into forests from their edges. This study reveals the importance of spatially explicit information in the assessment of land-use change impacts and for future land management and conservation.
NASA Astrophysics Data System (ADS)
Wang, Pin; Zhao, Han; You, Fangxin; Zhou, Hailong; Goggins, William B.
2017-08-01
Hand, foot, and mouth disease (HFMD) is an enterovirus-induced infectious disease, mainly affecting children under 5 years old. Outbreaks of HFMD in recent years indicate that the disease interacts with both weather and season. This study aimed to investigate the seasonal association between HFMD and weather variation in Chongqing, China. Generalized additive models and distributed lag non-linear models with a maximum lag of 14 days, assuming a negative binomial distribution to account for overdispersion, were constructed to model the association between reported HFMD cases from 2009 to 2014 and daily mean temperature, relative humidity, total rainfall, and sunshine duration, adjusting for trend, season, and day of the week. Year-round temperature and relative humidity, rainfall in summer, and sunshine duration in winter were all significantly associated with HFMD. An inverted-U relationship was found between mean temperature and HFMD above 19 °C in summer, with maximum morbidity at 27 °C, while the risk increased linearly with temperature in winter. A hockey-stick association was found for relative humidity in summer, with increasing risk above 60%. Heavy rainfall, relative to no rain, was associated with reduced HFMD risk in summer, and 2 h of sunshine could decrease the risk by 21% in winter. The present study showed that meteorological variables were differentially associated with HFMD incidence in the two seasons. Short-term weather variation surveillance and forecasting could be employed as an early indicator of potential HFMD outbreaks.
Dovlo, Edem; Lashkari, Bahman; Choi, Sung Soo Sean; Mandelis, Andreas; Shi, Wei; Liu, Fei-Fei
2017-09-01
Conventional linear spectroscopy in multispectral photoacoustic imaging assumes a linear relationship between the absorbed optical energy and the absorption spectrum of the chromophore at a given location. Overcoming this limitation is crucial for obtaining accurate, spatially resolved quantitative functional information from known chromophore-specific spectral characteristics. This study introduces a non-invasive, phase-filtered differential photoacoustic technique, wavelength-modulated differential photoacoustic radar (WM-DPAR) imaging, that addresses this issue by eliminating the effect of the unknown wavelength-dependent fluence. It employs two laser wavelengths modulated out of phase to significantly suppress background absorption while amplifying the difference between the two photoacoustic signals. Because minute changes in total hemoglobin concentration and hemoglobin oxygenation become detectable, this facilitates pre-malignant tumor identification and hypoxia monitoring. The system can be tuned for specific applications, such as cancer screening and SO2 quantification, by regulating the amplitude ratio and phase shift of the two signals. WM-DPAR imaging of a head and neck carcinoma tumor grown in the thigh of a nude rat demonstrates functional photoacoustic imaging of small animals in vivo; the photoacoustic appearance of the tumor in relation to tumor vascularity is investigated by immunohistochemistry. Phase-filtered WM-DPAR imaging is also used to map oxygenation levels within the tumor, maximizing the fidelity of quantitative SO2 imaging of tissues.
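To make the out-of-phase suppression idea concrete, here is a hedged toy calculation (a sketch, not the authors' WM-DPAR processing chain): two sinusoidal sources driven out of phase cancel the shared background absorption, leaving a residual proportional to the difference in chromophore absorption at the two wavelengths. The wavelengths, absorption values, and amplitude ratio are illustrative assumptions.

import numpy as np

f = 1.0e6                           # modulation frequency in Hz (illustrative)
t = np.linspace(0.0, 10e-6, 10000)  # 10 microseconds of signal
mu_a_680, mu_a_808 = 4.0, 3.8       # chromophore absorption at two wavelengths, 1/cm (made up)
mu_bg = 2.0                         # background absorption, assumed equal at both wavelengths

R = 1.0  # amplitude ratio; tuning R and the phase shift selects what is suppressed
s1 = (mu_a_680 + mu_bg) * np.sin(2 * np.pi * f * t)
s2 = R * (mu_a_808 + mu_bg) * np.sin(2 * np.pi * f * t + np.pi)  # driven out of phase

diff = s1 + s2  # background contributions cancel; residual tracks the absorption difference
print(np.ptp(diff) / 2)  # approximately |mu_a_680 - mu_a_808| = 0.2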
Spatial patterns of agricultural expansion determine impacts on biodiversity and carbon storage
Chaplin-Kramer, Rebecca; Sharp, Richard P.; Mandle, Lisa; Sim, Sarah; Johnson, Justin; Butnar, Isabela; Milà i Canals, Llorenç; Eichelberger, Bradley A.; Ramler, Ivan; Mueller, Carina; McLachlan, Nikolaus; Yousefi, Anahita; King, Henry; Kareiva, Peter M.
2015-01-01
The agricultural expansion and intensification required to meet growing food and agri-based product demand present important challenges to future levels and management of biodiversity and ecosystem services. Influential actors such as corporations, governments, and multilateral organizations have made commitments to meeting future agricultural demand sustainably and preserving critical ecosystems. Current approaches to predicting the impacts of agricultural expansion involve calculation of total land conversion and assessment of the impacts on biodiversity or ecosystem services on a per-area basis, generally assuming a linear relationship between impact and land area. However, the impacts of continuing land development are often not linear and can vary considerably with spatial configuration. We demonstrate what could be gained by spatially explicit analysis of agricultural expansion at a large scale compared with the simple measure of total area converted, with a focus on the impacts on biodiversity and carbon storage. Using simple modeling approaches for two regions of Brazil, we find that for the same amount of land conversion, the declines in biodiversity and carbon storage can vary two- to fourfold depending on the spatial pattern of conversion. Impacts increase most rapidly in the earliest stages of agricultural expansion and are more pronounced in scenarios where conversion occurs in forest interiors compared with expansion into forests from their edges. This study reveals the importance of spatially explicit information in the assessment of land-use change impacts and for future land management and conservation. PMID:26082547
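A hedged toy illustration of why spatial configuration matters, under assumptions not taken from the paper: removing the same total area from the forest edge versus scattering it through the interior leaves very different amounts of core habitat, a simple fragmentation proxy.

import numpy as np
from scipy.ndimage import binary_erosion

rng = np.random.default_rng(1)
forest = np.ones((100, 100), dtype=bool)  # toy landscape, all forest
n_convert = 2000                          # pixels converted in both scenarios

# Scenario 1: edge-in conversion (clear the first 20 columns).
edge = forest.copy()
edge[:, :20] = False

# Scenario 2: scattered conversion in the forest interior.
interior = forest.copy()
idx = rng.choice(forest.size, size=n_convert, replace=False)
interior.flat[idx] = False

def core_area(f):
    # Core habitat: forest pixels whose eight neighbours are all forest.
    return int(binary_erosion(f, structure=np.ones((3, 3))).sum())

print("equal area converted:", (~edge).sum() == (~interior).sum())
print("core habitat, edge-in   :", core_area(edge))
print("core habitat, scattered :", core_area(interior))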
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used being bias, mean square error, and the linear correlation coefficient. These metrics assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be derived directly from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that specifying a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, whereas the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
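As a hedged numerical check of this argument (a sketch, not the authors' derivation): with a linear, additive, Gaussian error model y = a + b*x + e, the sample bias, mean square error, and correlation coefficient match closed forms written purely in terms of the error-model parameters. The parameter values below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
a, b, sigma_e = 0.5, 0.9, 1.2     # error-model parameters (made up)
mu_x, sigma_x = 10.0, 3.0         # distribution of the reference values

x = rng.normal(mu_x, sigma_x, 1_000_000)          # reference ("truth")
y = a + b * x + rng.normal(0.0, sigma_e, x.size)  # measurement with linear additive error

# Sample metrics versus the closed forms implied by the error model.
print("bias:", (y - x).mean(), "vs", a + (b - 1) * mu_x)
print("MSE :", ((y - x) ** 2).mean(), "vs",
      (a + (b - 1) * mu_x) ** 2 + (b - 1) ** 2 * sigma_x ** 2 + sigma_e ** 2)
print("corr:", np.corrcoef(x, y)[0, 1], "vs",
      b * sigma_x / np.hypot(b * sigma_x, sigma_e))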
Oxygen Vacancy Linear Clustering in a Perovskite Oxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eom, Kitae; Choi, Euiyoung; Choi, Minsu; ...
2017-07-14
Although oxygen vacancies play a decisive role in oxide materials, they have been implicitly assumed to be isolated, and understanding oxides that may contain oxygen vacancies remains elusive within this isolated-vacancy picture. We report the presence of oxygen vacancy linear clusters oriented along a specific crystallographic direction in SrTiO3, a representative perovskite oxide. The presence of the linear clusters and the associated electron localization was revealed by an electronic structure showing an increase in the Ti2+ valence state, or the corresponding Ti 3d2 electronic configuration, together with divacancy cluster model analysis and transport measurements. The orientation of the linear clusters along the [001] direction in perovskite SrTiO3 was further verified by X-ray diffuse scattering analysis. Because SrTiO3 is an archetypal perovskite oxide, this vacancy linear clustering, with its specific alignment direction and electron localization, can be expected to extend to a wide variety of perovskite oxides.
Linear free energy relationships for selected phthalate esters were used to estimate the rate constants for hydrolysis and biolysis, the sediment-water partition coefficients, and the biosorption parameters required for modeling. The fate and transport behavior of dimethyl, diethyl, di-n-butyl, di-n-o...
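For illustration only, a minimal sketch of a generic linear free energy relationship of the kind mentioned above: a partition coefficient is regressed on log Kow for a calibration set, and the fit is used to estimate the value for another ester. All numbers are invented, not taken from the study.

import numpy as np

log_kow = np.array([1.6, 2.4, 4.5, 7.6])  # hypothetical calibration compounds
log_koc = np.array([1.6, 2.2, 3.8, 5.9])  # hypothetical measured partition coefficients

m, c = np.polyfit(log_kow, log_koc, 1)    # LFER form: log Koc = m * log Kow + c
print(f"log Koc = {m:.2f} * log Kow + {c:.2f}")
print("estimated log Koc at log Kow = 5.0:", m * 5.0 + c)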
The Effects of Multiple Linked Representations on Students' Learning of Linear Relationships
ERIC Educational Resources Information Center
Ozgun-Koca, S. Asli
2004-01-01
The focus of this study was on comparing three groups of ninth-grade Algebra I students, one using linked-representation software, a second using similar software with semi-linked representations, and a control group, in order to examine the effects on students' understanding of linear relationships. Data collection methods…
ERIC Educational Resources Information Center
Liou, Pey-Yan; Ho, Hsin-Ning Jessie
2018-01-01
The purpose of this study is to examine students' perceptions of instructional practices in the classroom, and to further investigate the relationships among instructional practices, motivational beliefs and science achievement. Hierarchical linear modelling was utilised to examine the Trends in International Mathematics and Science Study 2007…
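As a hedged sketch of the general approach (synthetic data; the variables are stand-ins, not the TIMSS 2007 measures), a two-level hierarchical linear model with students nested in schools can be fit as follows in Python with statsmodels.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_students = 50, 30
school = np.repeat(np.arange(n_schools), n_students)

instruction = rng.normal(0, 1, school.size)  # perceived instructional practices (synthetic)
motivation = rng.normal(0, 1, school.size)   # motivational beliefs (synthetic)
school_effect = rng.normal(0, 5, n_schools)[school]
achievement = (500 + 8 * instruction + 12 * motivation
               + school_effect + rng.normal(0, 20, school.size))

data = pd.DataFrame({"school": school, "instruction": instruction,
                     "motivation": motivation, "achievement": achievement})
fit = smf.mixedlm("achievement ~ instruction + motivation",
                  data, groups=data["school"]).fit()  # random intercept per school
print(fit.summary())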
NASA Astrophysics Data System (ADS)
Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.
2018-01-01
To determine the relationship between the compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared, and their compressive and flexural strengths were measured in mechanical property tests. After screening for abnormal values with boxplots, the compressive strength results were all normal, but there were two mild outliers in the 7-day flexural strength results. Compressive and flexural strength were then fitted with regression models in SPSS, yielding six candidate models. Among these, the relationship between compressive strength and flexural strength was best expressed by the cubic curve model, with a correlation coefficient of 0.842.
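A hedged illustration with synthetic data (not the paper's measurements): fitting a cubic model relating compressive strength to flexural strength and computing an R-squared style goodness of fit mirrors the regression comparison described above.

import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(20, 60, 20))  # compressive strengths of 20 mixes, MPa (synthetic)
y = (1.0 + 0.12 * x + 0.0008 * x ** 2 - 0.000004 * x ** 3
     + rng.normal(0, 0.3, x.size))    # flexural strengths, MPa (synthetic)

coeffs = np.polyfit(x, y, 3)          # cubic model: y = c3*x^3 + c2*x^2 + c1*x + c0
y_hat = np.polyval(coeffs, x)
ss_res = ((y - y_hat) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
print("cubic coefficients:", coeffs)
print("R^2:", 1 - ss_res / ss_tot)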