Sample records for change-point linear models

  1. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  2. Smooth random change point models.

    PubMed

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
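
    The broken-stick idea above is easy to prototype. Below is a minimal sketch, not the authors' R/WinBUGS code: it fits y = b0 + b1*t + b2*max(t - tau, 0) by least squares and profiles the residual sum of squares over candidate change points tau; the simulated decline data and function names are assumptions.

```python
# Minimal broken-stick fit: profile the residual sum of squares over candidate
# change points tau. Simulated cognitive scores (years before death) are assumed.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-10, 0, 60)                       # years relative to death
y = 25 + 0.1 * t - 1.5 * np.clip(t + 4, 0, None) + rng.normal(0, 0.8, t.size)

def fit_broken_stick(t, y, tau):
    """Least-squares fit of y = b0 + b1*t + b2*max(t - tau, 0)."""
    X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, np.sum((y - X @ beta) ** 2)

taus = np.linspace(-8, -1, 71)
rss = [fit_broken_stick(t, y, tau)[1] for tau in taus]
tau_hat = taus[int(np.argmin(rss))]
print(f"estimated change point: {tau_hat:.2f} years before death")
```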

  3. Development of a Linear Stirling Model with Varying Heat Inputs

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2007-01-01

    The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's non-linear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point; thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.

  4. Development of a Linear Stirling System Model with Varying Heat Inputs

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2007-01-01

    The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point; thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.

  5. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    PubMed

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of a small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, the statistical performances of 3 alternative parametrizations [i.e., the symmetric Gaussian mixed linear (GML) model, the skew-Gaussian mixed linear (SGML) model, and the piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of the deviance information criterion (DIC) and the Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different numbers of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed a substantial genetic background for lambing interval, with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  6. Estuarine Response to River Flow and Sea-Level Rise under Future Climate Change and Human Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Wang, Taiping; Voisin, Nathalie

    Understanding the response of river flow and estuarine hydrodynamics to climate change, land-use/land-cover (LULC) change, and sea-level rise is essential to managing water resources and stress on living organisms under these changing conditions. This paper presents a modeling study using a watershed hydrology model and an estuarine hydrodynamic model, in a one-way coupling, to investigate the estuarine hydrodynamic response to sea-level rise and change in river flow due to the effect of future climate and LULC changes in the Snohomish River estuary, Washington, USA. A set of hydrodynamic variables, including salinity intrusion points, average water depth, and salinity of the inundated area, were used to quantify the estuarine response to river flow and sea-level rise. Model results suggest that salinity intrusion points in the Snohomish River estuary and the average salinity of the inundated areas are a nonlinear function of river flow, although the average water depth in the inundated area is approximately linear with river flow. Future climate changes will shift salinity intrusion points further upstream under low flow conditions and further downstream under high flow conditions. In contrast, under the future LULC change scenario, the salinity intrusion point will shift downstream under both low and high flow conditions, compared to present conditions. The model results also suggest that the average water depth in the inundated areas increases linearly with sea-level rise but at a slower rate, and the average salinity in the inundated areas increases linearly with sea-level rise; however, the response of salinity intrusion points in the river to sea-level rise is strongly nonlinear.

  7. An application of change-point recursive models to the relationship between litter size and number of stillborns in pigs.

    PubMed

    Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L

    2010-11-01

    We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing influence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not differ between recursive models. Nevertheless, these heritabilities were greater than those obtained for SMM (0.05) with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.

  8. Bayesian change point analysis of abundance trends for pelagic fishes in the upper San Francisco Estuary.

    PubMed

    Thomson, James R; Kimmerer, Wim J; Brown, Larry R; Newman, Ken B; Mac Nally, Ralph; Bennett, William A; Feyrer, Frederick; Fleishman, Erica

    2010-07-01

    We examined trends in abundance of four pelagic fish species (delta smelt, longfin smelt, striped bass, and threadfin shad) in the upper San Francisco Estuary, California, USA, over 40 years using Bayesian change point models. Change point models identify times of abrupt or unusual changes in absolute abundance (step changes) or in rates of change in abundance (trend changes). We coupled Bayesian model selection with linear regression splines to identify biotic or abiotic covariates with the strongest associations with abundances of each species. We then refitted change point models conditional on the selected covariates to explore whether those covariates could explain statistical trends or change points in species abundances. We also fitted a multispecies change point model that identified change points common to all species. All models included hierarchical structures to model data uncertainties, including observation errors and missing covariate values. There were step declines in abundances of all four species in the early 2000s, with a likely common decline in 2002. Abiotic variables, including water clarity, position of the 2 per thousand isohaline (X2), and the volume of freshwater exported from the estuary, explained some variation in species' abundances over the time series, but no selected covariates could explain statistically the post-2000 change points for any species.
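
    A toy version of step-change detection in this spirit is sketched below: it enumerates the posterior over a single change point in a Gaussian series with plug-in segment means and a known sigma. The paper's models are hierarchical, handle observation error and covariates, and are fitted by full Bayesian computation; everything here is an illustrative assumption.

```python
# Toy Bayesian step-change detection: enumerate the posterior over a single
# change point k in a Gaussian series. Plug-in segment means are used instead
# of a full marginal likelihood, and sigma is treated as known.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(1.0, 0.5, 25),     # pre-decline abundance index
                    rng.normal(0.3, 0.5, 15)])    # post-decline
n, sigma = y.size, 0.5

log_post = np.full(n, -np.inf)
for k in range(2, n - 2):                         # split after index k
    m1, m2 = y[:k].mean(), y[k:].mean()
    log_post[k] = (stats.norm.logpdf(y[:k], m1, sigma).sum()
                   + stats.norm.logpdf(y[k:], m2, sigma).sum())

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("most probable change point index:", int(np.argmax(post)))
```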

  9. A Vernacular for Linear Latent Growth Models

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; Choi, Jaehwa

    2006-01-01

    In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…

  10. Identification and agreement of first turn point by mathematical analysis applied to heart rate, carbon dioxide output and electromyography

    PubMed Central

    Zamunér, Antonio R.; Catai, Aparecida M.; Martins, Luiz E. B.; Sakabe, Daniel I.; Silva, Ester Da

    2013-01-01

    Background The second heart rate (HR) turn point has been extensively studied; however, there are few studies determining the first HR turn point. Also, the use of mathematical and statistical models for determining changes in dynamic characteristics of physiological variables during an incremental cardiopulmonary test has been suggested. Objectives To determine the first turn point by analysis of HR, surface electromyography (sEMG), and carbon dioxide output (VCO2) using two mathematical models and to compare the results to those of the visual method. Method Ten sedentary middle-aged men (53.9±3.2 years old) were submitted to cardiopulmonary exercise testing on an electromagnetic cycle ergometer until exhaustion. Ventilatory variables, HR, and sEMG of the vastus lateralis were obtained in real time. Three methods were used to determine the first turn point: 1) visual analysis based on loss of parallelism between VCO2 and oxygen uptake (VO2); 2) the linear-linear model, based on fitting the curves to the set of VCO2 data (Lin-LinVCO2); 3) a bi-segmental linear regression of Hinkley's algorithm applied to HR (HMM-HR), VCO2 (HMM-VCO2), and sEMG data (HMM-RMS). Results There were no differences between workload, HR, and ventilatory variable values at the first ventilatory turn point as determined by the five studied parameters (p>0.05). The Bland-Altman plot showed an even distribution of the visual analysis method with Lin-LinVCO2, HMM-HR, HMM-VCO2, and HMM-RMS. Conclusion The proposed mathematical models were effective in determining the first turn point since they detected the linear pattern change and the deflection point of VCO2, HR responses, and sEMG. PMID:24346296

  11. Identification and agreement of first turn point by mathematical analysis applied to heart rate, carbon dioxide output and electromyography.

    PubMed

    Zamunér, Antonio R; Catai, Aparecida M; Martins, Luiz E B; Sakabe, Daniel I; Da Silva, Ester

    2013-01-01

    The second heart rate (HR) turn point has been extensively studied; however, there are few studies determining the first HR turn point. Also, the use of mathematical and statistical models for determining changes in dynamic characteristics of physiological variables during an incremental cardiopulmonary test has been suggested. To determine the first turn point by analysis of HR, surface electromyography (sEMG), and carbon dioxide output (VCO2) using two mathematical models and to compare the results to those of the visual method. Ten sedentary middle-aged men (53.9 ± 3.2 years old) were submitted to cardiopulmonary exercise testing on an electromagnetic cycle ergometer until exhaustion. Ventilatory variables, HR, and sEMG of the vastus lateralis were obtained in real time. Three methods were used to determine the first turn point: 1) visual analysis based on loss of parallelism between VCO2 and oxygen uptake (VO2); 2) the linear-linear model, based on fitting the curves to the set of VCO2 data (Lin-LinVCO2); 3) a bi-segmental linear regression of Hinkley's algorithm applied to HR (HMM-HR), VCO2 (HMM-VCO2), and sEMG data (HMM-RMS). There were no differences between workload, HR, and ventilatory variable values at the first ventilatory turn point as determined by the five studied parameters (p>0.05). The Bland-Altman plot showed an even distribution of the visual analysis method with Lin-LinVCO2, HMM-HR, HMM-VCO2, and HMM-RMS. The proposed mathematical models were effective in determining the first turn point since they detected the linear pattern change and the deflection point of VCO2, HR responses, and sEMG.

  12. Change point detection of the Persian Gulf sea surface temperature

    NASA Astrophysics Data System (ADS)

    Shirvani, A.

    2017-01-01

    In this study, the Student's t parametric and Mann-Whitney nonparametric change point models (CPMs) were applied to detect a change point in the annual Persian Gulf sea surface temperature anomalies (PGSSTA) time series for the period 1951-2013. The PGSSTA time series, which were serially correlated, were transformed to produce an uncorrelated pre-whitened time series. The pre-whitened PGSSTA time series were utilized as the input file of the change point models. Both the parametric and nonparametric CPMs estimated the change point in the PGSSTA in 1992. The PGSSTA follow the normal distribution both before and after 1992, but with a different mean value after 1992. The estimated slope of the linear trend in the PGSSTA time series for the period 1951-1992 was negative; however, it was positive after the detected change point. Unlike the PGSSTA, the applied CPMs suggested no change point in the Niño3.4SSTA time series.
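
    The two-stage procedure described, pre-whitening followed by a change point scan, can be sketched as follows; the AR(1) pre-whitening and the maximum-t scan are generic stand-ins for the Student's t CPM, and the simulated anomalies are assumptions.

```python
# Pre-whiten a serially correlated series, then scan a two-sample t statistic
# over candidate split points. Simulated anomalies stand in for the PGSSTA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-0.1, 0.2, 42),    # 1951-1992
                    rng.normal(0.2, 0.2, 21)])    # 1993-2013

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]             # lag-1 autocorrelation
xw = x[1:] - r1 * x[:-1]                          # pre-whitened series

t_abs = [abs(stats.ttest_ind(xw[:k], xw[k:], equal_var=True).statistic)
         for k in range(3, xw.size - 3)]
k_hat = int(np.argmax(t_abs)) + 3
print("change point at index", k_hat, "of the pre-whitened series")
```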

  13. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518

  14. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.

  15. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order difference operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for an adaptive change point search over the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
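
    The forecasting step can be illustrated with statsmodels: after the change-point algorithm has isolated a homogeneous segment of travel times, an ARIMA model is fitted to it and rolled forward. The orders (1, 1, 1) and the simulated segment are illustrative assumptions, not values from the paper.

```python
# Fit an ARIMA model to one change-point-delimited segment of travel times and
# forecast ahead. Orders (1, 1, 1) and the simulated data are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
segment = 300 + np.cumsum(rng.normal(0, 2, 120))  # travel times (s), one regime

fitted = ARIMA(segment, order=(1, 1, 1)).fit()
print(fitted.forecast(steps=5))                   # next five travel times
```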

  16. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.

  17. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
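
    A data-generating sketch for a simulation design of this kind appears below: a "no change" class declines linearly while a "change" class accelerates after the change point. All parameter values are invented for illustration.

```python
# Generate a two-class broken-stick cohort: a "no change" class declines
# linearly; a "change" class adds extra decline after the change point.
import numpy as np

rng = np.random.default_rng(4)
n, change_point = 500, 6.0                        # subjects; years into study
times = np.arange(0.0, 10.0, 2.0)                 # biennial assessment waves

is_changer = rng.random(n) < 0.4                  # 40% in the "change" class
scores = np.empty((n, times.size))
for i in range(n):
    extra = -2.0 if is_changer[i] else 0.0        # post-change slope increment
    mu = 28 - 0.3 * times + extra * np.clip(times - change_point, 0, None)
    scores[i] = mu + rng.normal(0, 1.0, times.size)
# A missing-at-random dropout model would then blank later waves for some rows.
```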

  18. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes happen at the jump points of a Poisson process with a random intensity, i.e., the moments of change follow a random process of the Cox process type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. When the random-intensity process is a martingale, the cumulative variance grows linearly. We statistically process a set of real estate price observations and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.
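
    A Cox (doubly stochastic Poisson) process is straightforward to simulate. The sketch below redraws the intensity independently at each step, which makes the cumulative count variance grow linearly; the paper's martingale-intensity case is more general, and all parameters here are assumptions.

```python
# Doubly stochastic (Cox) counts: a fresh gamma-distributed intensity each day.
# With independent increments, the cumulative count variance grows linearly.
import numpy as np

rng = np.random.default_rng(5)
days, trials = 200, 5000
lam = rng.gamma(shape=4.0, scale=0.5, size=(trials, days))  # random intensity
counts = rng.poisson(lam)                                   # price-change counts
cum_var = np.cumsum(counts, axis=1).var(axis=0)
print(np.round(cum_var[49::50], 1))  # roughly equal increments => linear growth
```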

  19. Mapping the critical gestational age at birth that alters brain development in preterm-born infants using multi-modal MRI.

    PubMed

    Wu, Dan; Chang, Linda; Akazawa, Kentaro; Oishi, Kumiko; Skranes, Jon; Ernst, Thomas; Oishi, Kenichi

    2017-04-01

    Preterm birth adversely affects postnatal brain development. In order to investigate the critical gestational age at birth (GAB) that alters the developmental trajectory of gray and white matter structures in the brain, we investigated diffusion tensor and quantitative T2 mapping data in 43 term-born and 43 preterm-born infants. A novel multivariate linear model, the change point model, was applied to detect change points in fractional anisotropy, mean diffusivity, and T2 relaxation time. Change points captured the "critical" GAB value associated with a change in the linear relation between GAB and MRI measures. The analysis was performed in 126 regions across the whole brain using an atlas-based image quantification approach to investigate the spatial pattern of the critical GAB. Our results demonstrate that the critical GABs are region- and modality-specific, generally following a central-to-peripheral and bottom-to-top order of structural development. This study may offer unique insights into the postnatal neurological development associated with differential degrees of preterm birth. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Piecewise multivariate modelling of sequential metabolic profiling data.

    PubMed

    Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan

    2008-02-19

    Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach is described, with the objective of modelling the time-related variation in the data for short and sparsely sampled time-series. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time-related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
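
    The piecewise idea can be sketched with scikit-learn, using PLSRegression as a stand-in for OPLS (which is not in the standard Python libraries): one latent-variable model is fitted per pair of successive time points and the predictions are chained. The data shapes and component count are assumptions.

```python
# One PLS model per pair of successive time points; predictions can be chained.
# PLSRegression stands in for OPLS here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n_times, n_samples, n_vars = 5, 30, 50
X = rng.normal(size=(n_times, n_samples, n_vars))  # metabolic profiles per time

segment_models = []
for t in range(n_times - 1):
    pls = PLSRegression(n_components=2)
    pls.fit(X[t], X[t + 1])            # model the change from time t to t + 1
    segment_models.append(pls)

pred = X[0]                            # chain one-step predictions forward
for pls in segment_models:
    pred = pls.predict(pred)
```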

  1. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
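
    A rough sketch of the pipeline described, under invented thresholds and window sizes: smooth the vibration history with a local linear smoother (LOWESS here), train a small network to predict the next level from recent levels, and roll predictions forward until an assumed failure threshold is crossed.

```python
# Smooth the vibration history, learn next-step levels, roll forward to a
# (hypothetical) failure threshold. Window size and threshold are invented.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
hours = np.arange(500.0)
vib = 0.1 * np.exp(0.006 * hours) + rng.normal(0, 0.05, hours.size)

smooth = lowess(vib, hours, frac=0.1, return_sorted=False)  # local linear smoother

win = 10
X = np.lib.stride_tricks.sliding_window_view(smooth[:-1], win)
y = smooth[win:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

level, steps = smooth[-win:].copy(), 0
while level[-1] < 4.0 and steps < 5000:            # assumed failure threshold
    level = np.append(level[1:], net.predict(level[np.newaxis, :])[0])
    steps += 1
print("estimated RUL:", steps, "hours")
```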

  2. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  3. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between the estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
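
    The abstract does not give the SPRE formula in full; one plausible reading of the ratio-of-Weibulls step-forward rule is sketched below, with the next outcome obtained by scaling the previous one by a ratio of Weibull survival terms. The shape/scale values, the starting weight, and the use of survival functions are all assumptions.

```python
# Hypothetical SPRE step-forward: scale the previous outcome by a ratio of
# two-parameter Weibull survival terms with parameters estimated at the
# change point. All numbers are invented.
import numpy as np
from scipy.stats import weibull_min

shape, scale = 1.3, 30.0                # assumed estimates at the change point
weeks = np.arange(4, 16)                # change point at week 4
outcomes = [95.0]                       # weight (kg) at the change point

for t in weeks[1:]:
    ratio = weibull_min.sf(t, shape, scale=scale) / \
            weibull_min.sf(t - 1, shape, scale=scale)
    outcomes.append(outcomes[-1] * ratio)
print(np.round(outcomes, 1))            # projected long-term trajectory
```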

  4. Analyzing industrial energy use through ordinary least squares regression models

    NASA Astrophysics Data System (ADS)

    Golden, Allyson Katherine

    Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and production behavior, and identify opportunities for energy and cost savings. This thesis study also utilizes change-point and degree-day baseline energy models to disaggregate facility annual energy consumption into separate industrial end-user categories. The baseline energy model provides a suitable and economical alternative to sub-metering individual manufacturing equipment. One case study describes the conjoined use of baseline energy models and facility information gathered during a one-day onsite visit to perform an end-point energy analysis of an injection molding facility conducted by the Alabama Industrial Assessment Center. Applying baseline regression model results to the end-point energy analysis allowed the AIAC to better approximate the annual energy consumption of the facility's HVAC system.
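
    The change-point regression referred to here is the classic three-parameter cooling model from the building-energy literature, E = b0 + b1*max(T - Tb, 0), with the balance temperature Tb found by grid search. The sketch below is generic, with invented data, not the thesis's exact models.

```python
# Three-parameter cooling change-point model: E = b0 + b1 * max(T - Tb, 0),
# with the balance temperature Tb found by grid search over candidate values.
import numpy as np

rng = np.random.default_rng(7)
T = rng.uniform(0, 35, 365)                       # daily dry-bulb temperature (C)
E = 500 + 12 * np.clip(T - 18, 0, None) + rng.normal(0, 20, T.size)  # kWh/day

def rss_at(Tb):
    X = np.column_stack([np.ones_like(T), np.clip(T - Tb, 0, None)])
    beta, *_ = np.linalg.lstsq(X, E, rcond=None)
    return np.sum((E - X @ beta) ** 2)

grid = np.arange(5.0, 30.0, 0.5)
Tb_hat = grid[int(np.argmin([rss_at(Tb) for Tb in grid]))]
print(f"estimated balance temperature: {Tb_hat:.1f} C")
```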

  5. Is Linear Displacement Information Or Angular Displacement Information Used During The Adaptation of Pointing Responses To An Optically Shifted Image?

    NASA Technical Reports Server (NTRS)

    Bautista, Abigail B.

    1994-01-01

    Twenty-four observers looked through a pair of 20 diopter wedge prisms and pointed to an image of a target which was displaced vertically from eye level by 6 cm at a distance of 30 cm. Observers pointed 40 times, using only their right hand, and received error-corrective feedback upon termination of each pointing response (terminal visual feedback). At three testing distances, 20, 30, and 40 cm, ten pre-exposure and ten post-exposure pointing responses were recorded for each hand as observers reached to a mirror-viewed target located at eye level. The difference between pre- and post-exposure pointing response (adaptive shift) was compared for both Exposed and Unexposed hands across all three testing distances. The data were assessed according to the results predicted by two alternative models for processing spatial information: one using angular displacement information and another using linear displacement information. The angular model of spatial mapping best predicted the observer's pointing response for the Exposed hand. Although the angular adaptive shift did not change significantly as a function of distance (F(2,44) = 1.12, n.s.), the linear adaptive shift increased significantly over the three testing distances (F(2,44) = 4.90, p < 0.01).

  6. Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Detailed guidance on linear seismic analysis of soil columns is provided in "Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998)," which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When submitted to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post yielding shear stress versus shear strain curve. This curve is input with tabular data points. When submitted to a sine wave motion, this constitutive model produces a hysteresis loop that looks similar in shape to the input tabular data points on the sides with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.

  7. Detection of kinetic change points in piece-wise linear single molecule motion

    NASA Astrophysics Data System (ADS)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
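
    The core of such a test can be sketched compactly: compare the Gaussian log-likelihood of a single line against the best two-segment fit and apply a chi-squared threshold. This simplified single-change version, with an approximate degrees-of-freedom count, stands in for the published recursive algorithm.

```python
# Single-change likelihood ratio test for piecewise-linear motion with Gaussian
# noise. The degrees-of-freedom count is an approximation; the published
# algorithm applies a calibrated confidence interval and recurses on segments.
import numpy as np
from scipy import stats

def loglik_line(t, y):
    """Profile Gaussian log-likelihood of the best-fit line through (t, y)."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

def detect_change(t, y, alpha=0.01):
    ll0 = loglik_line(t, y)
    k = max(range(3, len(t) - 3),
            key=lambda k: loglik_line(t[:k], y[:k]) + loglik_line(t[k:], y[k:]))
    lr = 2 * (loglik_line(t[:k], y[:k]) + loglik_line(t[k:], y[k:]) - ll0)
    return k if lr > stats.chi2.ppf(1 - alpha, df=2) else None
```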

  8. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477

  9. Linear summation of outputs in a balanced network model of motor cortex.

    PubMed

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.

  10. A new lattice hydrodynamic model based on control method considering the flux change rate and delay feedback signal

    NASA Astrophysics Data System (ADS)

    Qin, Shunda; Ge, Hongxia; Cheng, Rongjun

    2018-02-01

    In this paper, a new lattice hydrodynamic model is proposed by taking the delay feedback and flux change rate effects into account in a single lane. The linear stability condition of the new model is derived by control theory. By using the nonlinear analysis method, the mKdV equation near the critical point is deduced to describe the traffic congestion. Numerical simulations are carried out to demonstrate the advantage of the new model in suppressing traffic jams with the consideration of the flux change rate effect in the delay feedback model.

  11. Linear and nonlinear response in sheared soft spheres

    NASA Astrophysics Data System (ADS)

    Tighe, Brian

    2013-11-01

    Packings of soft spheres provide an idealized model of foams, emulsions, and grains, while also serving as the canonical example of a system undergoing a jamming transition. Packings' mechanical response has now been studied exhaustively in the context of "strict linear response," i.e. by linearizing about a stable static packing and solving the resulting equations of motion. Both because the system is close to a critical point and because the soft sphere pair potential is non-analytic at the point of contact, it is reasonable to ask under what circumstances strict linear response provides a good approximation to the actual response. We simulate sheared soft sphere packings close to jamming and identify two distinct strain scales: (i) the scale on which strict linear response fails, coinciding with a topological change in the packing's contact network; and (ii) the scale on which linear superposition of the averaged stress-strain curve breaks down. This latter scale provides a "weak linear response" criterion and is likely to be more experimentally relevant.

  12. Single point dilution method for the quantitative analysis of antibodies to the gag24 protein of HIV-1.

    PubMed

    Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O

    1997-10-13

    In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1 * log(titer), and a nonlinear model, log(titer) = alpha * OD^beta, similar to the Dopotka-Giesendorf model. The above two proposed models adequately fit the dependence between the optical density values at a single point dilution and the titers achieved by the end point dilution method (EPDM). Nevertheless, the nonlinear model better fits the experimental data, according to residuals analysis. The classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between titers calculated using both models and titers achieved by EPDM was obtained with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, and this reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
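
    Both calibration models named above can be fitted in a few lines with scipy; the (OD, log-titer) pairs below are invented, standing in for a calibration panel titrated by the end point dilution method.

```python
# Fit both calibration models to invented (OD, log-titer) pairs.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

od = np.array([0.21, 0.35, 0.62, 0.95, 1.40, 1.85])
log_titer = np.array([1.9, 2.4, 2.9, 3.3, 3.8, 4.1])     # log10 titers by EPDM

lin = linregress(log_titer, od)                          # OD = b0 + b1*log(titer)
(alpha, beta), _ = curve_fit(lambda x, a, b: a * x**b,   # log(titer) = a*OD^b
                             od, log_titer, p0=(3.0, 0.5))
print(f"linear r = {lin.rvalue:.2f}; alpha = {alpha:.2f}, beta = {beta:.2f}")
```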

  13. Mapping the Critical Gestational Age at Birth that Alters Brain Development in Preterm-born Infants using Multi-Modal MRI

    PubMed Central

    Wu, Dan; Chang, Linda; Akazawa, Kentaro; Oishi, Kumiko; Skranes, Jon; Ernst, Thomas; Oishi, Kenichi

    2017-01-01

    Preterm birth adversely affects postnatal brain development. In order to investigate the critical gestational age at birth (GAB) that alters the developmental trajectory of gray and white matter structures in the brain, we investigated diffusion tensor and quantitative T2 mapping data in 43 term-born and 43 preterm-born infants. A novel multivariate linear model, the change point model, was applied to detect change points in fractional anisotropy, mean diffusivity, and T2 relaxation time. Change points captured the "critical" GAB value associated with a change in the linear relation between GAB and MRI measures. The analysis was performed in 126 regions across the whole brain using an atlas-based image quantification approach to investigate the spatial pattern of the critical GAB. Our results demonstrate that the critical GABs are region- and modality-specific, generally following a central-to-peripheral and bottom-to-top order of structural development. This study may offer unique insights into the postnatal neurological development associated with differential degrees of preterm birth. PMID:28111189

  14. Linear summation of outputs in a balanced network model of motor cortex

    PubMed Central

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis. PMID:26097452

  15. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
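
    The core operation LINEAR performs can be shown in miniature: numerically linearize nonlinear dynamics x' = f(x, u) about an analysis point to obtain A = df/dx and B = df/du by central differences. The damped pendulum below is a stand-in for the program's aircraft equations of motion.

```python
# Central-difference linearization of x' = f(x, u) about an analysis point.
# A damped pendulum stands in for the aircraft equations of motion.
import numpy as np

def f(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.zeros(2), u0=np.zeros(1))
print(A, B, sep="\n")
```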

  16. Detecting a Change in School Performance: A Bayesian Analysis for a Multilevel Join Point Problem. CSE Technical Report 542.

    ERIC Educational Resources Information Center

    Thum, Yeow Meng; Bhattacharya, Suman Kumar

    To better describe individual behavior within a system, this paper uses a sample of longitudinal test scores from a large urban school system to consider hierarchical Bayes estimation of a multilevel linear regression model in which each individual regression slope of test score on time switches at some unknown point in time, k_j.…

  17. Three-dimensional biometric study of palatine rugae in children with a mixed-model analysis: a 9-year longitudinal study.

    PubMed

    Kim, Hong-Kyun; Moon, Sung-Chul; Lee, Shin-Jae; Park, Young-Seok

    2012-05-01

    The palatine rugae have been suggested as stable reference points for superimposing 3-dimensional virtual models before and after orthodontic treatment. We investigated 3-dimensional changes in the palatine rugae of children over 9 years. Complete dental stone casts were biennially prepared for 56 subjects (42 girls, 14 boys) aged from 6 to 14 years. Using 3-dimensional laser scanning and reconstruction software, virtual casts were constructed. Medial and lateral points of the first 3 anterior rugae were defined as the 3-dimensional landmarks. The length of each ruga and the distance between the end points of the rugae were measured in virtual 3-dimensional space. The measurement changes over time were analyzed by using the mixed-effects method for longitudinal data. There were slight increases in the linear measurements in the rugae areas (the lengths of the rugae and the distances between them) during the observation period. However, the amounts of the increments were relatively small when compared with the initial values and individual random variability. Although age affected the linear dimensions significantly, the effect was not clinically significant; the rugae were relatively stable. The use of the palatine rugae as reference points for superimposing and evaluating changes during orthodontic treatment was thought to be possible with special caution. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  18. Brain-heart linear and nonlinear dynamics during visual emotional elicitation in healthy subjects.

    PubMed

    Valenza, G; Greco, A; Gentili, C; Lanata, A; Toschi, N; Barbieri, R; Sebastiani, L; Menicucci, D; Gemignani, A; Scilingo, E P

    2016-08-01

    This study investigates brain-heart dynamics during visual emotional elicitation in healthy subjects through linear and nonlinear coupling measures of the EEG spectrogram and instantaneous heart rate estimates. To this end, affective pictures including different combinations of arousal and valence levels, gathered from the International Affective Picture System, were administered to twenty-two healthy subjects. Time-varying maps of cortical activation were obtained through EEG spectral analysis, whereas the associated instantaneous heartbeat dynamics was estimated using inhomogeneous point-process linear models. Brain-heart linear and nonlinear coupling was estimated through the Maximal Information Coefficient (MIC), considering EEG time-varying spectra and point-process estimates defined in the time and frequency domains. As a proof of concept, we here show preliminary results considering EEG oscillations in the θ band (4-8 Hz), a band known in the literature to be involved in emotional processes. MIC highlighted significant arousal-dependent changes, mediated by the prefrontal cortex interplay and especially occurring at intermediate arousing levels. Furthermore, lower and higher arousing elicitations were associated with nonsignificant brain-heart coupling changes in response to pleasant/unpleasant elicitations.

  19. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and an additional three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity with CPP and spatiotemporal correlation with ARMA(1,1) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
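
    The nested random-effects part of such a model can be sketched in Python with statsmodels; the heteroscedasticity (CPP) and ARMA(1,1) correlation structures used in the paper are typically fitted with R's nlme instead, so this sketch covers only the two-level nesting, with invented data and column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format stand-in: soil units, sample points nested in them,
# twelve years of NDVI driven by accumulated rainfall.
rng = np.random.default_rng(0)
rows = []
for soil in range(5):
    for point in range(10):
        bias = rng.normal(0, 0.05)          # point-level random effect
        for year in range(2002, 2014):
            rain = rng.uniform(100, 500)
            rows.append({"soil_unit": soil, "point": f"{soil}_{point}",
                         "rainfall": rain,
                         "ndvi": 0.1 + 0.001 * rain + bias + rng.normal(0, 0.02)})
data = pd.DataFrame(rows)

# Random intercepts for soil units, with sample points nested inside them
# as a variance component; fixed effect of rainfall on NDVI.
model = smf.mixedlm("ndvi ~ rainfall", data, groups="soil_unit",
                    vc_formula={"point": "0 + C(point)"})
print(model.fit().summary())
```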

  20. Linear ground-water flow, flood-wave response program for programmable calculators

    USGS Publications Warehouse

    Kernodle, John Michael

    1978-01-01

    Two programs are documented which solve a discretized analytical equation derived to determine head changes at a point in a one-dimensional ground-water flow system. The programs, written for programmable calculators, are in widely divergent but commonly encountered languages and serve to illustrate the adaptability of the linear model to use in situations where access to true computers is not possible or economical. The analytical method assumes a semi-infinite aquifer which is uniform in thickness and hydrologic characteristics, bounded on one side by an impermeable barrier and on the other parallel side by a fully penetrating stream in complete hydraulic connection with the aquifer. Ground-water heads may be calculated for points along a line which is perpendicular to the impermeable barrier and the fully penetrating stream. Head changes at the observation point are dependent on (1) the distance between that point and the impermeable barrier, (2) the distance between the line of stress (the stream) and the impermeable barrier, (3) aquifer diffusivity, (4) time, and (5) head changes along the line of stress. The primary application of the programs is to determine aquifer diffusivity by the flood-wave response technique. (Woodard-USGS)
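
    The class of analytical solution these programs discretize can be sketched for the simplest case, a unit step change in stream stage: with the stream at x = 0, a no-flow barrier at x = L, and diffusivity D, the head change follows a method-of-images erfc series. This is a generic illustration of that solution family, not a transcription of the calculator code, and all numbers are made up.

```python
import numpy as np
from scipy.special import erfc

def head_change(x, t, L, D, H0=1.0, n_images=20):
    """Head change at distance x from the stream at time t after a step
    change H0 in stream stage; no-flow barrier at x = L, diffusivity D
    (consistent length^2/time units). Method-of-images erfc series."""
    c = 2.0 * np.sqrt(D * t)
    h = 0.0
    for n in range(n_images):
        # alternating images keep h = H0 at the stream and zero flux at x = L
        h += (-1) ** n * (erfc((2 * n * L + x) / c)
                          + erfc(((2 * n + 2) * L - x) / c))
    return H0 * h

# Example: observation point 100 m from the stream, barrier 500 m away,
# D in m^2/day, t in days. A time-varying stage would be superposed.
print(head_change(x=100.0, t=5.0, L=500.0, D=1.0e4))
```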

  1. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, and 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  2. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    systems a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with noise, g = Hf + n (1). The simplest... linear models for imaging systems are given by space-invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is... I1, ..., IM is a set of two-dimensional indices, each distinct and prior to k. Modeling Procedure: To derive the linear predictor (block LP of figure

  3. Durango delta: Complications on San Juan basin Cretaceous linear strandline theme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zech, R.S.; Wright, R.

    1989-09-01

    The Upper Cretaceous Point Lookout Sandstone generally conforms to a predictable cyclic shoreface model in which prograding linear strandline lithosomes dominate formation architecture. Multiple transgressive-regressive cycles result in systematic repetition of lithologies deposited in beach to inner-shelf environments. Deposits of approximately five cycles are locally grouped into bundles. Such bundles extend at least 20 km along depositional strike and change from foreshore sandstone to offshore, time-equivalent Mancos mud rock in a downdip distance of 17 to 20 km. Excellent hydrocarbon reservoirs exist where well-sorted shoreface sandstone bundles stack and the formation thickens. This depositional model breaks down in the vicinity of Durango, Colorado, where a fluvial-dominated delta front and associated large distributary channels characterize the Point Lookout Sandstone and overlying Menefee Formation.

  4. Linear parameter varying identification of ankle joint intrinsic stiffness during imposed walking movements.

    PubMed

    Sobhani Tehrani, Ehsan; Jalaleddini, Kian; Kearney, Robert E

    2013-01-01

    This paper describes a novel model structure and identification method for the time-varying intrinsic stiffness of the human ankle joint during imposed walking (IW) movements. The model structure is based on the superposition of a large-signal, linear, time-invariant (LTI) model and a small-signal linear parameter varying (LPV) model. The methodology is based on a two-step algorithm; the LTI model is first estimated using data from an unperturbed IW trial. Then, the LPV model is identified using data from a perturbed IW trial with the output predictions of the LTI model removed from the measured torque. Experimental results demonstrate that the method accurately tracks the continuous-time variation of normal ankle intrinsic stiffness when the joint position changes during the IW movement. Intrinsic stiffness gain decreases from full plantarflexion to near the mid-point of plantarflexion and then increases substantially as the ankle is dorsiflexed.
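
    The two-step idea can be caricatured with a grossly simplified sketch: the LTI part is reduced to a static gain, intrinsic dynamics are ignored, and the data are synthetic stand-ins, so this illustrates only the subtract-then-regress structure, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: fit the large-signal LTI map (here just a gain) on an
# unperturbed trial relating joint angle to torque.
theta_u = np.sin(np.linspace(0, 4 * np.pi, 400))
torque_u = 5.0 * theta_u + 0.1 * rng.standard_normal(400)
g_lti = np.linalg.lstsq(theta_u[:, None], torque_u, rcond=None)[0][0]

# Perturbed trial: small position perturbations see a stiffness that
# varies linearly with the scheduling variable (joint angle).
theta_p = np.sin(np.linspace(0, 4 * np.pi, 400))
pert = 0.05 * rng.standard_normal(400)
stiff = 3.0 + 2.0 * theta_p                       # true LPV stiffness
torque_p = g_lti * theta_p + stiff * pert + 0.1 * rng.standard_normal(400)

# Step 2: remove the LTI prediction, then regress the residual torque on
# [pert, theta*pert] to recover the stiffness schedule coefficients.
resid = torque_p - g_lti * theta_p
X = np.column_stack([pert, theta_p * pert])
coef = np.linalg.lstsq(X, resid, rcond=None)[0]
print(coef)   # approximately [3.0, 2.0]
```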

  5. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation data, which affect space camera imaging, have been prepared and the imaging analysed. Based on the image-motion velocity matching error required by the TDI CCD integration stages, the Monte Carlo method of statistical experiment is used to calculate the distribution histogram of the Earth's elevation within an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM. An Earth elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation data can be retrieved from flash memory according to the latitude and longitude of the shooting point. When the required elevation falls between two stored values, linear interpolation is used; linear interpolation better accommodates the changing terrain of rugged mountains and hills. Finally, a deviation framework and camera controller are used to test the behaviour of deviation angle errors, and a TDI CCD camera simulation system, built on a model mapping material points to image points, is used to analyse the imaging MTF and the mutual-correlation similarity measure. The simulation system adds the accumulated horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel, to simulate camera imaging when the stability of the satellite attitude changes. This process is practical: it can effectively control camera memory space while meeting the precision required of a TDI CCD camera in matching the image-motion velocity during imaging.

  6. Soft tissue modelling through autowaves for surgery simulation.

    PubMed

    Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian

    2006-09-01

    Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves rather than through geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing the diffusion coefficients.

  7. Gain scheduled linear quadratic control for quadcopter

    NASA Astrophysics Data System (ADS)

    Okasha, M.; Shah, J.; Fauzi, W.; Hanouf, Z.

    2017-12-01

    This study exploits the dynamics and control of quadcopters using the Linear Quadratic Regulator (LQR) control approach. The quadcopter’s mathematical model is derived using the Newton-Euler method. It is a highly manoeuvrable, nonlinear, coupled, six-degree-of-freedom (DOF) model, which includes aerodynamics and detailed gyroscopic moments that are often ignored in much of the literature. The linearized model is obtained and characterized by the heading angle (i.e. yaw angle) of the quadcopter. The adopted control approach utilizes the LQR method to track several reference trajectories, including circle and helix curves with significant variation in the yaw angle. The controller is modified to overcome difficulties related to the continuous changes in the operating points and to eliminate the chattering and discontinuity observed in the control input signal. Numerical non-linear simulations are performed using MATLAB and Simulink to illustrate the accuracy and effectiveness of the proposed controller.
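
    The LQR gain at the heart of such a controller solves the continuous-time algebraic Riccati equation. A minimal sketch follows, with a placeholder double-integrator standing in for the hover-linearized quadcopter model (the real A and B come from linearizing the dynamics at the operating point):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized model: a double integrator per axis stands in
# for the hover-linearized quadcopter dynamics.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weights
R = np.array([[0.1]])      # control weight

P = solve_continuous_are(A, B, Q, R)    # Riccati solution
K = np.linalg.solve(R, B.T @ P)         # optimal gain, u = -K x
print(K)
```

    Gain scheduling, as in the record above, would repeat this computation at several operating points (e.g. yaw angles) and interpolate between the resulting K matrices.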

  8. Trends in hydrological extremes in the Senegal and the Niger Rivers

    NASA Astrophysics Data System (ADS)

    Wilcox, C.; Bodian, A.; Vischel, T.; Panthou, G.; Quantin, G.

    2017-12-01

    In recent years, West Africa has witnessed several floods of unprecedented magnitude. Although the evolution of hydrological extremes has been evaluated in the region to some extent, results lack regional coverage, significance levels, uncertainty estimations, model selection criteria, or a combination of the above. In this study, Generalized Extreme Value (GEV) distributions with and without various non-stationary temporal covariates are applied to annual maxima of daily discharge (AMAX) data sets in the Sudano-Guinean part of the Senegal River basin and in the Sahelian part of the Niger River basin. The data range from the 1950s to the 2010s. The two models of best fit most often selected (at the alpha = 0.05 significance level) were 1) a double-linear model for the central tendency parameter (μ) with stationary dispersion (σ) and 2) a double-linear model for both parameters. Change points are relatively consistent for the Senegal basin, with stations switching from a decreasing streamflow trend to an increasing streamflow trend in the early 1980s. In the Niger basin the trend in μ was generally positive with an increase in slope after the change point, but the change point location was less consistent. The study clearly demonstrates the significant trends in extreme discharge values in West Africa over the past six decades. Moreover, it proposes a clear methodology for comparing GEV models and selecting the best for use. The return levels generated from the chosen models can be applied to river basin management and hydraulic works sizing. The results provide a first evaluation of non-stationarity in extreme hydrological values in West Africa that is accompanied by significance levels, uncertainties, and non-stationary return level estimations.
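
    A double-linear location parameter of this kind can be fitted by maximum likelihood; the sketch below scans candidate change points with constant scale and shape (a simplification of the paper's models), using scipy's genextreme, whose shape parameter c equals minus the usual ξ. The series is synthetic.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

def nll(params, t, amax, t0):
    """Negative log-likelihood of a GEV whose location follows two linear
    segments joined at change point t0; scale and shape held constant."""
    mu0, b1, b2, sigma, c = params
    if sigma <= 0:
        return np.inf
    mu = np.where(t < t0, mu0 + b1 * (t - t0), mu0 + b2 * (t - t0))
    return -np.sum(genextreme.logpdf(amax, c, loc=mu, scale=sigma))

def fit_double_linear(t, amax, candidates):
    """Scan candidate change points, maximizing the likelihood at each."""
    best = None
    for t0 in candidates:
        x0 = [np.mean(amax), 0.0, 0.0, np.std(amax), 0.1]
        res = minimize(nll, x0, args=(t, amax, t0), method="Nelder-Mead")
        if best is None or res.fun < best[0]:
            best = (res.fun, t0, res.x)
    return best  # (NLL, change point, parameters)

# Hypothetical AMAX series with a trend reversal around 1983.
rng = np.random.default_rng(0)
t = np.arange(1950, 2016)
amax = 100 + np.where(t < 1983, -0.8, 1.2) * (t - 1983) + 10 * rng.gumbel(size=t.size)
print(fit_double_linear(t, amax, candidates=range(1965, 2001, 5))[1])
```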

  9. Relationship between the clinical global impression of severity for schizoaffective disorder scale and established mood scales for mania and depression.

    PubMed

    Turkoz, Ibrahim; Fu, Dong-Jing; Bossie, Cynthia A; Sheehan, John J; Alphs, Larry

    2013-08-15

    This analysis explored the relationship between ratings on HAM-D-17 or YMRS and those on the depressive or manic subscale of CGI-S for schizoaffective disorder (CGI-S-SCA). This post hoc analysis used the database (N=614) from two 6-week, randomized, placebo-controlled studies of paliperidone ER versus placebo in symptomatic subjects with schizoaffective disorder assessed using HAM-D-17, YMRS, and CGI-S-SCA scales. Parametric and nonparametric regression models explored the relationships between ratings on YMRS and HAM-D-17 and on depressive and manic domains of the CGI-S-SCA from baseline to the 6-week end point. A clinically meaningful improvement was defined as a change of 1 point in the CGI-S-SCA score. No adjustment was made for multiplicity. Multiple linear regression models suggested that a 1-point change in the depressive domain of CGI-S-SCA corresponded to an average 3.6-point (SE=0.2) change in HAM-D-17 score. Similarly, a 1-point change in the manic domain of CGI-S-SCA corresponded to an average 5.8-point (SE=0.2) change in YMRS score. Results were confirmed using local and cumulative logistic regression models in addition to equipercentile linking. Lack of subjects scoring over the complete range of possible scores may limit broad application of the analyses. Clinically meaningful score changes in depressive and manic domains of CGI-S-SCA corresponded to approximately 4- and 6-point score changes on HAM-D-17 and YMRS, respectively, in symptomatic subjects with schizoaffective disorder. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address the problem of high-precision attitude measurement of flying targets over a large space and a large field of view, and after comparing existing measurement methods, we propose a system in which two area-array CCDs assist the identification for a three-linear-CCD, multi-cooperative-target attitude measurement system. This avoids the problems of the existing nine-linear-CCD spectroscopic test system: nonlinear system errors, too many calibration parameters, and overly complicated constraints among the camera positions. The mathematical models of the binocular vision system and the three-linear-CCD test system are established. Three red LED point lights form a triangle whose vertex coordinates are given in advance by a Coordinate Measuring Machine; three blue LED lights are added along the sides of the triangle as auxiliaries, so that the area-array CCDs can more easily identify the three red LED points, while the linear CCD cameras are fitted with red filters to block the blue LED points and reduce stray light. The area-array CCDs measure the spots, identifying the red LED points and computing their spatial coordinates, while the linear CCDs measure the three red LED spots to solve the linear CCD test system, from which 27 candidate solutions are drawn. Spot identification with the array CCD coordinates assisting the linear CCDs has been achieved, solving the difficult problem of multi-target identification for linear CCDs. Exploiting the imaging characteristics of linear CCDs, a special cylindrical lens system with a telecentric optical design was developed, so that small changes of the spot's energy centre within the depth-of-convergence range perpendicular to the optical axis preserve high-precision image quality. The entire test system improves the speed and precision of spatial object attitude measurement.

  11. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. Funding: American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
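
    The three forecast models compared here are simple to state; a sketch of plausible definitions follows (the feature vectors and numbers are invented, and the static/linear/mean definitions are generic interpretations, not the authors' exact specification).

```python
import numpy as np

def forecast(prev_fractions, population_mean, model="linear"):
    """Forecast the next morphology feature vector from prior fractions.
    static: carry the last observation forward; mean: population mean;
    linear: extrapolate a per-feature least-squares trend."""
    F = np.asarray(prev_fractions)          # (n_fractions, n_features)
    if model == "static":
        return F[-1]
    if model == "mean":
        return population_mean
    t = np.arange(len(F))
    slope, intercept = np.polyfit(t, F, 1)  # per-feature linear fit
    return slope * len(F) + intercept       # evaluate at the next fraction

# Hypothetical 3-feature morphology vectors over 5 observed fractions.
F = np.cumsum(np.ones((5, 3)) * 0.1, axis=0) + 2.0
print(forecast(F, population_mean=F.mean(axis=0), model="linear"))
```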

  12. Envelope of coda waves for a double couple source due to non-linear elasticity

    NASA Astrophysics Data System (ADS)

    Calisto, Ignacia; Bataille, Klaus

    2014-10-01

    Non-linear elasticity has recently been considered as a source of scattering, therefore contributing to the coda of seismic waves, in particular for the case of explosive sources. This idea is analysed further here, theoretically solving the expression for the envelope of coda waves generated by a point moment tensor in order to compare with earthquake data. For weak non-linearities, one can consider each point of the non-linear medium as a source of scattering within a homogeneous and linear medium, for which Green's functions can be used to compute the total displacement of scattered waves. These sources of scattering have specific radiation patterns depending on the incident and scattered P or S waves, respectively. In this approach, the coda envelope depends on three scalar parameters related to the specific non-linearity of the medium; however, these parameters only change the scale of the coda envelope. The shape of the coda envelope is sensitive to both the source time function and the intrinsic attenuation. We compare simulations using this model with data from earthquakes in Taiwan, with a good fit.

  13. Modelling the association of dengue fever cases with temperature and relative humidity in Jeddah, Saudi Arabia-A generalised linear model with break-point analysis.

    PubMed

    Alkhaldy, Ibrahim

    2017-04-01

    The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of cases of dengue fever. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross-validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are most probably the best predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-weeks lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed better performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the meteorological parameters at certain break-points. Copyright © 2016 Elsevier B.V. All rights reserved.
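
    A GLM with a break-point in a single covariate can be sketched with a hinge term, scanning candidate break-points by AIC. This is a generic Poisson version with hypothetical column names and random data, not the paper's model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def breakpoint_glm(df, y="cases", x="humidity", candidates=None):
    """Fit Poisson GLMs with a hinge term max(0, x - c) so the slope can
    differ before and after break-point c; pick c by lowest AIC."""
    if candidates is None:
        candidates = np.quantile(df[x], np.linspace(0.1, 0.9, 17))
    best = None
    for c in candidates:
        X = sm.add_constant(pd.DataFrame({
            "x": df[x],
            "hinge": np.maximum(0.0, df[x] - c),
        }))
        fit = sm.GLM(df[y], X, family=sm.families.Poisson()).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, c, fit)
    return best

# Hypothetical weekly data: dengue case counts and mean relative humidity.
rng = np.random.default_rng(0)
df = pd.DataFrame({"cases": rng.poisson(5, 200),
                   "humidity": rng.uniform(40, 90, 200)})
aic, c, fit = breakpoint_glm(df)
print(f"break-point at {c:.1f}% RH, AIC = {aic:.1f}")
```

    A lagged scenario, as in the study, would simply shift the covariate column by one or two weeks before fitting.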

  14. Two Propositions on the Application of Point Elasticities to Finite Price Changes.

    ERIC Educational Resources Information Center

    Daskin, Alan J.

    1992-01-01

    Considers counterintuitive propositions about using point elasticities to estimate quantity changes in response to price changes. Suggests that elasticity increases with price along a linear demand curve, but falling quantity demanded offsets it. Argues that point elasticity with a finite percentage change in price only approximates the percentage change…
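
    A short worked illustration of why a point elasticity only approximates a finite change (invented numbers, not the digest's own propositions), using a constant-elasticity demand curve so the elasticity itself cannot be blamed for varying:

```latex
% Constant-elasticity demand $Q = 10000\,P^{-2}$: at $P = 20$, $Q = 25$,
% and the point elasticity is $\varepsilon = -2$ everywhere.
% For a finite 10\% price rise, the point-elasticity estimate is
\[
  \frac{\Delta Q}{Q} \;\approx\; \varepsilon\,\frac{\Delta P}{P}
  \;=\; (-2)(0.10) \;=\; -20\%,
\]
% whereas the exact change is
\[
  \frac{\Delta Q}{Q} \;=\; \left(\frac{22}{20}\right)^{-2} - 1 \;\approx\; -17.4\%,
\]
% so the point elasticity overstates the finite response by about
% 2.6 percentage points, and the gap grows with the size of the change.
```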

  15. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael

    2011-01-01

    This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…

  16. Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João

    2017-04-01

    Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid point extreme precipitation scales more strongly than the percentage increase (~7% K-1) predicted by the Clausius-Clapeyron (CC) relationship, extremes for zonally and temporally averaged precipitation follow a slight sub-CC scaling, in agreement with results from Climate Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.

  17. nonlinMIP contribution to CMIP6: model intercomparison project for non-linear mechanisms: physical basis, experimental design and analysis principles (v1.0)

    NASA Astrophysics Data System (ADS)

    Good, Peter; Andrews, Timothy; Chadwick, Robin; Dufresne, Jean-Louis; Gregory, Jonathan M.; Lowe, Jason A.; Schaller, Nathalie; Shiogama, Hideo

    2016-11-01

    nonlinMIP provides experiments that account for state-dependent regional and global climate responses. The experiments have two main applications: (1) to focus understanding of responses to CO2 forcing on states relevant to specific policy or scientific questions (e.g. change under low-forcing scenarios, the benefits of mitigation, or from past cold climates to the present day), or (2) to understand the state dependence (non-linearity) of climate change - i.e. why doubling the forcing may not double the response. State dependence (non-linearity) of responses can be large at regional scales, with important implications for understanding mechanisms and for general circulation model (GCM) emulation techniques (e.g. energy balance models and pattern-scaling methods). However, these processes are hard to explore using traditional experiments, which explains why they have had so little attention in previous studies. Some single model studies have established novel analysis principles and some physical mechanisms. There is now a need to explore robustness and uncertainty in such mechanisms across a range of models (point 2 above), and, more broadly, to focus work on understanding the response to CO2 on climate states relevant to specific policy/science questions (point 1). nonlinMIP addresses this using a simple, small set of CO2-forced experiments that are able to separate linear and non-linear mechanisms cleanly, with a good signal-to-noise ratio - while being demonstrably traceable to realistic transient scenarios. The design builds on the CMIP5 (Coupled Model Intercomparison Project Phase 5) and CMIP6 DECK (Diagnostic, Evaluation and Characterization of Klima) protocols, and is centred around a suite of instantaneous atmospheric CO2 change experiments, with a ramp-up-ramp-down experiment to test traceability to gradual forcing scenarios. In all cases the models are intended to be used with CO2 concentrations rather than CO2 emissions as the input. The understanding gained will help interpret the spread in policy-relevant scenario projections. Here we outline the basic physical principles behind nonlinMIP, and the method of establishing traceability from abruptCO2 to gradual forcing experiments, before detailing the experimental design, and finally some analysis principles. The test of traceability from abruptCO2 to transient experiments is recommended as a standard analysis within the CMIP5 and CMIP6 DECK protocols.

  18. A Communication Intervention to Reduce Resistiveness in Dementia Care: A Cluster Randomized Controlled Trial.

    PubMed

    Williams, Kristine N; Perkhounkova, Yelena; Herman, Ruth; Bossen, Ann

    2017-08-01

    Nursing home (NH) residents with dementia exhibit challenging behaviors or resistiveness to care (RTC) that increase staff time, stress, and NH costs. RTC is linked to elderspeak communication. Communication training (Changing Talk [CHAT]) was provided to staff to reduce their use of elderspeak. We hypothesized that CHAT would improve staff communication and subsequently reduce RTC. Thirteen NHs were randomized to intervention and control groups. Dyads (n = 42) including 29 staff and 27 persons with dementia were videorecorded during care before and/or after the intervention and at a 3-month follow-up. Videos were behaviorally coded for (a) staff communication (normal, elderspeak, or silence) and (b) resident behaviors (cooperative or RTC). Linear mixed modeling was used to evaluate training effects. On average, elderspeak declined from 34.6% (SD = 18.7) at baseline by 13.6% points (SD = 20.00) post intervention and 12.2% points (SD = 22.0) at 3-month follow-up. RTC declined from 35.7% (SD = 23.2) by 15.3% points (SD = 32.4) post intervention and 13.4% points (SD = 33.7) at 3 months. Linear mixed modeling determined that change in elderspeak was predicted by the intervention (b = -12.20, p = .028) and baseline elderspeak (b = -0.65, p < .001), whereas RTC change was predicted by elderspeak change (b = 0.43, p < .001); baseline RTC (b = -0.58, p < .001); and covariates. A brief intervention can improve communication and reduce RTC, providing an effective nonpharmacological intervention to manage behavior and improve the quality of dementia care. No adverse events occurred. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  20. Point- and line-based transformation models for high resolution satellite image rectification

    NASA Astrophysics Data System (ADS)

    Abd Elrahman, Ahmed Mohamed Shaker

    Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government-funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have recently been launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point-based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point-based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)

  1. Linear network representation of multistate models of transport.

    PubMed Central

    Sandblom, J; Ring, A; Eisenman, G

    1982-01-01

    By introducing external driving forces in rate-theory models of transport we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. From such a formalism the state diagram of a multioccupancy multicomponent system can be directly converted into a linear network with resistors connecting nodal (branch) points and with capacitances connecting each nodal point with a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships of linear networks lead directly to Onsager's relationships in the near-equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425
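
    The practical payoff—solving a multistate transport problem as a resistor network—can be sketched numerically: at steady state, the net current into each internal node must vanish (Kirchhoff's current law), which gives a small linear system for the node potentials. The conductances and boundary potentials below are arbitrary illustrative values, not from the paper.

```python
import numpy as np

# Nodes 0 and 3 are boundary nodes held at fixed potentials (the two
# bathing solutions); nodes 1 and 2 are internal carrier states.
# g[i, j] is the conductance of the resistor linking states i and j.
g = np.array([[0.0, 2.0, 0.0, 0.0],
              [2.0, 0.0, 1.5, 0.5],
              [0.0, 1.5, 0.0, 3.0],
              [0.0, 0.5, 3.0, 0.0]])
V_fixed = {0: 1.0, 3: 0.0}      # boundary potentials (the driving force)
free = [1, 2]

# Kirchhoff's current law at each free node i: sum_j g_ij (V_i - V_j) = 0.
A = np.zeros((2, 2))
b = np.zeros(2)
for r, i in enumerate(free):
    A[r, r] = g[i].sum()
    for s, j in enumerate(free):
        if j != i:
            A[r, s] -= g[i, j]
    for j, Vj in V_fixed.items():
        b[r] += g[i, j] * Vj

V = np.linalg.solve(A, b)                 # potentials of internal states
flux = g[0, 1] * (V_fixed[0] - V[0])      # net flux leaving boundary node 0
print(V, flux)
```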

  2. Design and laboratory testing of a prototype linear temperature sensor

    NASA Astrophysics Data System (ADS)

    Dube, C. M.; Nielsen, C. M.

    1982-07-01

    This report discusses the basic theory, design, and laboratory testing of a prototype linear temperature sensor (or "line sensor"), which is an instrument for measuring internal waves in the ocean. The operating principle of the line sensor consists of measuring the average resistance change of a vertically suspended wire (or coil of wire) induced by the passage of an internal wave in a thermocline. The advantage of the line sensor over conventional internal wave measurement techniques is that it is insensitive to the thermal finestructure which contaminates point sensor measurements, and its output is approximately linearly proportional to the internal wave displacement. An approximately one-half scale prototype line sensor module was tested in the laboratory. The line sensor signal was linearly related to the actual fluid displacement to within 10%. Furthermore, the absolute output was well predicted (within 25%) from the theoretical model and the sensor material properties alone. Comparisons of the line sensor and a point sensor in a wavefield with superimposed turbulence (finestructure) revealed negligible distortion in the line sensor signal, while the point sensor signal was swamped by "turbulent noise".

  3. Analysis of trend changes in Northern African palaeo-climate by using Bayesian inference

    NASA Astrophysics Data System (ADS)

    Schütz, Nadine; Trauth, Martin H.; Holschneider, Matthias

    2010-05-01

    Climate variability of Northern Africa is of high interest due to the climate-evolutionary linkages under study. The reconstruction of the palaeo-climate over long time scales, including the expected linkages (> 3 Ma), is mainly accessible through proxy data from deep sea drilling cores. Concentrating on published data sets, we try to decipher rhythms and trends and to detect correlations between different proxy time series by advanced mathematical methods. Our preliminary data are dust concentrations, an indicator of climatic changes such as humidity, from ODP sites 659, 721 and 967 situated around Northern Africa. Our interest is in challenging the available time series with advanced statistical methods to detect significant trend changes and to compare different model assumptions. For that purpose, we want to avoid rescaling the time axis to obtain equidistant time steps for filtering methods. Additionally, we demand a plausible description of the errors of the estimated parameters, in terms of confidence intervals. Finally, depending on the model we restrict ourselves to, we also want insight into the parameter structure of the assumed models. To gain this information, we focus on Bayesian inference, formulating the problem as a linear mixed model, so that the expectation and deviation are of linear structure. By using the Bayesian method we can formulate the posterior density as a function of the model parameters and calculate this probability density in the parameter space. Depending on which parameters are of interest, we analytically and numerically marginalize the posterior with respect to the remaining parameters of less interest. We apply a simple linear mixed model to calculate the posterior densities of ODP sites 659 and 721 concerning the last 5 Ma at maximum. From preliminary calculations on these data sets, we can confirm results gained by the method of breakfit regression combined with block bootstrapping ([1]). We obtain a significant change point around (1.63 - 1.82) Ma, which correlates with a global climate transition due to the establishment of the Walker circulation ([2]). Furthermore, we detect another significant change point around (2.7 - 3.2) Ma, which correlates with the end of the Pliocene warm period (permanent El Niño-like conditions) and the onset of a colder global climate ([3], [4]). The discussion of the algorithm, the results of calculated confidence intervals, the available information about the applied model in the parameter space, and the comparison of multiple change point models will be presented. [1] Trauth, M.H., et al., Quaternary Science Reviews, 28, 2009 [2] Wara, M.W., et al., Science, Vol. 309, 2005 [3] Chiang, J.C.H., Annual Review of Earth and Planetary Sciences, Vol. 37, 2009 [4] deMenocal, P., Earth and Planetary Science Letters, 220, 2004
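
    A minimal version of this change point inference, assuming a broken-stick mean with Gaussian noise and a flat prior over candidate change points (a simplification of the paper's linear mixed model), profiles out the regression parameters by least squares and normalizes the likelihood over the candidate grid:

```python
import numpy as np

def changepoint_posterior(t, y, candidates, sigma=1.0):
    """Posterior over change point locations for a broken-stick mean,
    flat prior, known noise level sigma; slope/intercept profiled out
    by least squares (an approximation to full marginalization)."""
    logp = []
    for t0 in candidates:
        # Design: intercept, overall slope, post-break change of slope.
        X = np.column_stack([np.ones_like(t), t - t0,
                             np.maximum(0.0, t - t0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        logp.append(-0.5 * np.sum(resid**2) / sigma**2)
    logp = np.array(logp)
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Hypothetical dust-flux-like series with a slope change at t = 2.8 Ma.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
y = 1.0 + 0.1 * t + 0.5 * np.maximum(0.0, t - 2.8) + 0.2 * rng.standard_normal(200)
cands = np.linspace(0.5, 4.5, 81)
post = changepoint_posterior(t, y, cands, sigma=0.2)
print(cands[post.argmax()])
```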

  4. Stability analysis of the phytoplankton effect model on changes in nitrogen concentration on integrated multi-trophic aquaculture systems

    NASA Astrophysics Data System (ADS)

    Widowati; Putro, S. P.; Silfiana

    2018-05-01

    Integrated Multi-Trophic Aquaculture (IMTA) is a polyculture with several biotas maintained in it to optimize waste recycling as a food source. The interaction between phytoplankton and the nitrogen compounds produced as waste in fish cultivation, including ammonia, nitrite, and nitrate, is studied in the form of a mathematical model. The model is a non-linear system of differential equations in four variables. Analytical methods were used to study the dynamic behaviour of the model. Local stability analysis is performed at the equilibrium point: the model is first linearized using a Taylor series expansion, and the Jacobian matrix is then determined. If all eigenvalues have negative real parts, the equilibrium of the system is locally asymptotically stable. Some numerical simulations are also demonstrated to verify the analytical result.
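
    The stability test described here can be sketched generically: find an equilibrium, linearize via a numerical Jacobian, and inspect the eigenvalues' real parts. The four-variable right-hand side below is a hypothetical placeholder, not the paper's IMTA model.

```python
import numpy as np
from scipy.optimize import fsolve

def jacobian(f, x_eq, eps=1e-7):
    """Central-difference Jacobian of f at x_eq."""
    n = len(x_eq)
    J = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        J[:, j] = (f(x_eq + d) - f(x_eq - d)) / (2 * eps)
    return J

# Placeholder system: phytoplankton p and ammonia/nitrite/nitrate pools.
def f(x):
    p, nh, no2, no3 = x
    return np.array([0.5 * p * (nh + no3) - 0.3 * p,
                     1.0 - 0.4 * nh - 0.2 * p * nh,
                     0.4 * nh - 0.6 * no2,
                     0.6 * no2 - 0.1 * no3 - 0.3 * p * no3])

x_eq = fsolve(f, np.array([0.0, 2.5, 5.0 / 3.0, 10.0]))   # solve f(x) = 0
eigs = np.linalg.eigvals(jacobian(f, x_eq))
print("locally asymptotically stable:", bool(np.all(eigs.real < 0)))
```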

  5. CFD analysis of linear compressors considering load conditions

    NASA Astrophysics Data System (ADS)

    Bae, Sanghyun; Oh, Wonsik

    2017-08-01

    This paper is a study of computational fluid dynamics (CFD) analysis of a linear compressor considering load conditions. In conventional CFD analyses of linear compressors, the load condition was not considered in the behaviour of the piston. In some papers, the piston motion is assumed to be sinusoidal and is prescribed by a user-defined function (UDF). In a reciprocating-type compressor the stroke of the piston is restrained by the rod, whereas the stroke of a linear compressor is not restrained and changes depending on the load condition. The greater the pressure difference between the discharge refrigerant and the suction refrigerant, the more the centre point of the stroke is pushed backward, and the behaviour of the piston is not a complete sine wave. For this reason, when the load condition changes in the CFD analysis of a linear compressor, the ANSYS code or, worse, the model itself may have to be changed. In addition, a separate analysis or calculation is required to find a stroke that meets the load condition, which may introduce errors. In this study, the coupled mechanical and electrical equations are solved using the UDF, and the behaviour of the piston is solved considering the pressure difference across the piston. Using this method, the stroke of the piston for the motor specification of the analytical model can be calculated according to the input voltage, and the piston behaviour can be realized considering the thrust due to the pressure difference.

  6. Nonlinear price impact from linear models

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  7. Population response to climate change: linear vs. non-linear modeling approaches.

    PubMed

    Ellis, Alicia M; Post, Eric

    2004-03-31

    Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan from 1959 to 1999. The non-linear self-exciting threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves with predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
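
    The SETAR idea is concrete enough to sketch: split the series on a lagged value, fit an autoregression in each regime, and scan thresholds by total squared error. This is a generic SETAR(2;1,1) fit by conditional least squares on synthetic data, not the paper's model of the Isle Royale series.

```python
import numpy as np

def fit_setar(x, delay=1):
    """SETAR(2;1,1) by conditional least squares: split on the lagged value,
    fit an AR(1) with intercept per regime, scan thresholds by total SSR."""
    y, z = x[delay:], x[:-delay]
    best = None
    for thr in np.quantile(z, np.linspace(0.15, 0.85, 29)):
        ssr = 0.0
        for mask in (z <= thr, z > thr):
            X = np.column_stack([np.ones(mask.sum()), z[mask]])
            beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
            ssr += np.sum((y[mask] - X @ beta) ** 2)
        if best is None or ssr < best[0]:
            best = (ssr, thr)
    return best  # (SSR, estimated threshold)

# Hypothetical density series alternating between two AR(1) regimes at 1.0.
rng = np.random.default_rng(0)
x = np.zeros(300)
for t in range(1, 300):
    if x[t - 1] <= 1.0:
        x[t] = 1.2 + 0.3 * x[t - 1] + 0.1 * rng.standard_normal()
    else:
        x[t] = 0.2 + 0.3 * x[t - 1] + 0.1 * rng.standard_normal()
print(fit_setar(x))
```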

  8. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    PubMed Central

    Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo

    2012-01-01

    In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model’s statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of non-linearity. We here present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, non-invasive assessment in clinical practice. We also discuss the limitations and other alternative modeling strategies of our point process approach. PMID:22375120

  9. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    PubMed

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. Both the model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
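
    The core of such a Fourier Poisson regression, and the formal test for declining seasonality, can be sketched as a GLM whose seasonal amplitude is allowed to shrink linearly with time: compare the model with and without the seasonality-by-time interaction. The data and column names below are invented stand-ins for the registry counts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly counts, 1969-2007, with slowly fading seasonality.
months = pd.date_range("1969-01", "2007-12", freq="MS")
t = np.arange(len(months)) / 12.0                  # years since start
phase = 2 * np.pi * (months.month - 1) / 12.0
rng = np.random.default_rng(0)
y = rng.poisson(20 * (1 + 0.2 * np.cos(phase) * np.exp(-t / 40)))

X = pd.DataFrame({
    "trend": t,
    "cos": np.cos(phase), "sin": np.sin(phase),
    # interactions let the seasonal amplitude change linearly over time
    "cos_t": np.cos(phase) * t, "sin_t": np.sin(phase) * t,
})

const_seas = sm.GLM(y, sm.add_constant(X[["trend", "cos", "sin"]]),
                    family=sm.families.Poisson()).fit()
decl_seas = sm.GLM(y, sm.add_constant(X),
                   family=sm.families.Poisson()).fit()
# A lower AIC for the interaction model supports declining seasonality.
print(const_seas.aic, decl_seas.aic)
```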

  10. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
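
    Unmixing within this convex-set model amounts to finding non-negative abundances that sum to one. A common fully-constrained least-squares trick appends a heavily weighted sum-to-one row before calling a non-negative solver; the endmember spectra below are made up for the example.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(E, x, w=1e3):
    """Abundances a >= 0 with sum(a) ~= 1 minimizing ||E a - x||.
    E: (bands, endmembers) endmember matrix; x: (bands,) pixel spectrum.
    The weighted row of ones enforces the sum-to-one constraint softly."""
    E_aug = np.vstack([E, w * np.ones(E.shape[1])])
    x_aug = np.append(x, w)
    a, _ = nnls(E_aug, x_aug)
    return a

# Three hypothetical endmembers over five bands, and a 40/60 mixture.
E = np.array([[0.1, 0.8, 0.3],
              [0.2, 0.7, 0.4],
              [0.4, 0.5, 0.5],
              [0.6, 0.3, 0.4],
              [0.8, 0.2, 0.2]])
x = 0.4 * E[:, 0] + 0.6 * E[:, 1]
print(unmix(E, x))   # approximately [0.4, 0.6, 0.0]
```

    Geometrically, the recovered abundances are the barycentric coordinates of the pixel within the simplex spanned by the endmembers, which is exactly the convex-set picture the record describes.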

  11. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as candidate reference points. To evaluate the optimal expansion point, unconstrained non-linear optimization was employed. The objective function is the sum of distances from the candidate point x to the lines connecting corresponding points between In. and Ex. Using this non-linear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining the lung expansion model. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
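
    The geometric core of this test—finding the point minimizing the summed distances to the landmark motion lines—can be sketched directly; the coordinates below are synthetic stand-ins for the digitised subpleural landmarks, arranged so the motion lines radiate from a known "balloon centre".

```python
import numpy as np
from scipy.optimize import minimize

def sum_point_to_line_distances(x, p_in, p_ex):
    """Sum over landmarks of the distance from point x to the line through
    the inspiration point p_in[i] and expiration point p_ex[i]."""
    d = p_ex - p_in
    d = d / np.linalg.norm(d, axis=1, keepdims=True)    # unit directions
    v = x - p_in
    # distance = norm of the component of v perpendicular to each line
    perp = v - np.sum(v * d, axis=1, keepdims=True) * d
    return np.linalg.norm(perp, axis=1).sum()

# Synthetic landmarks contracting radially toward a centre at (10, 20, 30).
rng = np.random.default_rng(1)
centre = np.array([10.0, 20.0, 30.0])
dirs = rng.standard_normal((20, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
p_in = centre + 60.0 * dirs                # full inspiration positions
p_ex = centre + 45.0 * dirs                # full expiration positions

res = minimize(sum_point_to_line_distances, x0=np.zeros(3),
               args=(p_in, p_ex), method="Nelder-Mead")
print(res.x)   # should approach the balloon centre (10, 20, 30)
```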

  12. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  14. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347
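
    The SR idea in the three records above can be sketched compactly once ICP correspondences are assumed given: flatten each cloud to a vector and solve a sparse regression against the training set. The sketch below uses synthetic clouds and an off-the-shelf Lasso; the MSR variant with the Laplacian prior on ICP error is omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

# A sketch of the SR model on synthetic data, assuming ICP correspondences
# are given: the flattened target cloud is approximated as a sparse linear
# combination of flattened training clouds.
rng = np.random.default_rng(0)
n_points, n_train = 500, 40
train = rng.normal(size=(n_train, n_points * 3))   # each row: one cloud
coeffs = np.zeros(n_train)
coeffs[[3, 17]] = [0.6, 0.4]                       # true sparse combination
target = coeffs @ train + 0.01 * rng.normal(size=n_points * 3)

sr = Lasso(alpha=1e-3, fit_intercept=False)
sr.fit(train.T, target)                            # columns = training clouds
rmse = np.sqrt(np.mean((train.T @ sr.coef_ - target) ** 2))
print(np.flatnonzero(np.abs(sr.coef_) > 1e-3), rmse)  # support near {3, 17}
```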

  15. Linear approximations of nonlinear systems

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Su, R.

    1983-01-01

    The development of a method for designing an automatic flight controller for short and vertical takeoff aircraft is discussed. This technique involves transformations of nonlinear systems to controllable linear systems and takes into account the nonlinearities of the aircraft. In general, the transformations cannot always be given in closed form. Using partial differential equations, an approximate linear system called the modified tangent model was introduced. A linear transformation of this tangent model to Brunovsky canonical form can be constructed, and from this the linear part (about a state-space point x_0) of an exact transformation for the nonlinear system can be found. It is shown that a canonical expansion in Lie brackets about the point x_0 yields the same modified tangent model.

  16. The Trend Odds Model for Ordinal Data‡

    PubMed Central

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  17. The trend odds model for ordinal data.

    PubMed

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
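
    A hedged sketch of the trend-odds structure follows, with a hand-rolled likelihood in place of the SAS Proc NLMIXED fit used by the authors (whose exact constrained parameterization may differ). The slope at cut-point j is beta + gamma*j, so gamma = 0 recovers proportional odds.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# A sketch of a K-category cumulative-logit model with a trend in the odds:
# P(Y <= j | x) = expit(alpha_j + (beta + gamma*j) * x). gamma = 0 is
# proportional odds. Parameters: [alpha_0, log-increments (K-2), beta, gamma].
# (For extreme x and gamma != 0 the cumulative probabilities can cross;
# the clipping below papers over that, as befits a sketch.)
K = 4

def cum_probs(params, x):
    a0, log_inc = params[0], params[1 : K - 1]
    beta, gamma = params[K - 1], params[K]
    alphas = a0 + np.concatenate([[0.0], np.cumsum(np.exp(log_inc))])
    j = np.arange(K - 1)
    return expit(alphas[None, :] + (beta + gamma * j)[None, :] * x[:, None])

def nll(params, x, y):
    F = cum_probs(params, x)
    F = np.hstack([np.zeros((len(x), 1)), F, np.ones((len(x), 1))])
    p = F[np.arange(len(x)), y + 1] - F[np.arange(len(x)), y]
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
true = np.array([-1.5, 0.3, 0.3, 0.8, 0.3])   # gamma = 0.3: odds trend upward
y = (rng.random((2000, 1)) > cum_probs(true, x)).sum(axis=1)
fit = minimize(nll, np.zeros(K + 1), args=(x, y), method="BFGS")
print(fit.x[K - 1], fit.x[K])                 # beta, gamma near 0.8 and 0.3
```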

  18. Statistical approach to the analysis of olive long-term pollen season trends in southern Spain.

    PubMed

    García-Mozo, H; Yaezel, L; Oteros, J; Galán, C

    2014-03-01

    Analysis of long-term airborne pollen counts makes it possible not only to chart pollen-season trends but also to track changing patterns in flowering phenology. Changes in higher plant response over a long interval are considered among the most valuable bioindicators of climate change impact. Phenological-trend models can also provide information regarding crop production and pollen-allergen emission. The value of this information makes the choice of statistical method for time-series analysis essential. We analysed trends and variations in the olive flowering season over a 30-year period (1982-2011) in southern Europe (Córdoba, Spain), focussing on the annual Pollen Index (PI), Pollen Season Start (PSS), Peak Date (PD), Pollen Season End (PSE), and Pollen Season Duration (PSD). Apart from the traditional Linear Regression analysis, a Seasonal-Trend Decomposition procedure based on Loess (STL) and an ARIMA model were performed. Linear regression results indicated a trend toward delayed PSE and earlier PSS and PD, probably influenced by the rise in temperature. These changes are producing longer flowering periods in the study area. The use of the STL technique provided a clearer picture of phenological behaviour. Data decomposition on pollination dynamics enabled the trend toward an alternate bearing cycle to be distinguished from the influence of other stochastic fluctuations. Results pointed to a rising trend in pollen production. With a view toward forecasting future phenological trends, ARIMA models were constructed to predict PSD, PSS and PI until 2016. Projections displayed a better goodness of fit than those derived from linear regression. Findings suggest that the olive reproductive cycle has changed considerably over the last 30 years due to climate change. Further conclusions are that STL improves the effectiveness of traditional linear regression in trend analysis, and that ARIMA models can provide reliable trend projections for future years by taking into account the internal fluctuations in the time series. Copyright © 2013 Elsevier B.V. All rights reserved.
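
    A Loess-based seasonal-trend decomposition of the kind used above is available off the shelf; the sketch below applies statsmodels' STL to a synthetic monthly pollen-index-like series (the real series, period handling, and tuning in the study may differ).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# STL on a synthetic monthly series with a rising trend, a seasonal cycle,
# and an alternate-bearing-like biennial component plus noise.
rng = np.random.default_rng(0)
months = pd.date_range("1982-01", periods=30 * 12, freq="MS")
t = np.arange(len(months))
series = pd.Series(100 + 0.2 * t
                   + 50 * np.sin(2 * np.pi * t / 12)
                   + 20 * (t // 12 % 2)
                   + rng.normal(scale=10, size=len(t)), index=months)

res = STL(series, period=12, robust=True).fit()
print(res.trend.iloc[0], res.trend.iloc[-1])  # long-term level at both ends
```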

  19. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    PubMed

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. Quantitative assessment of the goodness of the proposed compensation technique is presented.
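
    Point-based thin-plate spline warping is straightforward to prototype; the sketch below uses SciPy's RBFInterpolator with a thin-plate-spline kernel on synthetic landmark pairs as a stand-in for the paper's registration step.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# A minimal sketch of point-based thin-plate-spline registration, assuming
# landmark pairs have been identified in the reference and distorted images.
rng = np.random.default_rng(0)
ref_landmarks = rng.uniform(0, 100, size=(30, 2))       # undistorted grid points
warp = lambda p: p + 2.0 * np.sin(p / 15.0)             # synthetic smooth warp
distorted = warp(ref_landmarks)

# Fit a map from distorted coordinates back to reference coordinates.
tps = RBFInterpolator(distorted, ref_landmarks, kernel="thin_plate_spline")
test = rng.uniform(10, 90, size=(5, 2))
corrected = tps(warp(test))
print(np.abs(corrected - test).max())                   # small residual error
```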

  20. Linearization: Students Forget the Operating Point

    ERIC Educational Resources Information Center

    Roubal, J.; Husek, P.; Stecha, J.

    2010-01-01

    Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
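
    The pitfall is easy to demonstrate: the linearization dx/dt ≈ f(x0, u0) + A(x - x0) + B(u - u0) is only valid in deviation coordinates about the operating point. A minimal sketch with a hypothetical pendulum model:

```python
import numpy as np

# Linearization of dx/dt = f(x, u) about an operating point (x0, u0):
# dx/dt ≈ f(x0, u0) + A (x - x0) + B (u - u0). The model is valid in
# deviation coordinates; forgetting x0, u0 is the mistake discussed above.
def f(x, u):
    # toy pendulum with torque input; state x = [angle, angular velocity]
    return np.array([x[1], -9.81 * np.sin(x[0]) + u])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0)."""
    n = len(x0)
    A = np.column_stack([
        (f(x0 + eps * np.eye(n)[:, i], u0)
         - f(x0 - eps * np.eye(n)[:, i], u0)) / (2 * eps)
        for i in range(n)])
    B = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, B

x0 = np.array([np.pi / 4, 0.0])
u0 = 9.81 * np.sin(np.pi / 4)       # torque making (x0, u0) an equilibrium
A, B = linearize(f, x0, u0)
print(A, B)  # use with deviations (x - x0, u - u0), not with x directly
```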

  1. Two wrongs make a right: linear increase of accuracy of visually-guided manual pointing, reaching, and height-matching with increase in hand-to-body distance.

    PubMed

    Li, Wenxun; Matin, Leonard

    2005-03-01

    Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch as has been previously reported (pitch: -30 degrees topbackward to 30 degrees topforward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18 degrees range. In a fourth experiment the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured errors in the manual setting of the arm to feel horizontal that were also distance-dependent. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height matching to the visual target: The near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane their large difference is responsible for the inaccuracies of the midfrontal-plane point. The results are inconsistent with the widely-held but controversial theory that visual spatial information employed for perception and action are dissociated and different with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.

  2. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.

  3. TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors

    PubMed Central

    Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco

    2013-01-01

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
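
    The core of such a formulation can be sketched with a generic convex solver. In the log domain a piecewise-exponential response is piecewise linear, so an l1 penalty on second differences yields a piecewise-linear fit whose kinks are candidate change points; this is plain l1 trend filtering, not the authors' exact parameter-free TREFEX formulation.

```python
import numpy as np
import cvxpy as cp

# l1 trend filtering on a synthetic log-domain sensor response: a piecewise
# exponential signal is piecewise linear after taking logs, so penalizing
# second differences yields a piecewise-linear fit whose kinks mark
# candidate change points.
rng = np.random.default_rng(0)
log_resp = np.concatenate([np.linspace(0.0, 2.0, 150),
                           np.linspace(2.0, 0.5, 150)])
y = log_resp + 0.05 * rng.normal(size=300)

x = cp.Variable(300)
lam = 50.0
cp.Problem(cp.Minimize(cp.sum_squares(y - x)
                       + lam * cp.norm1(cp.diff(x, 2)))).solve()

kinks = np.flatnonzero(np.abs(np.diff(x.value, 2)) > 1e-3) + 1
print(kinks)  # expected near the true change at index 150
```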

  4. Path Dependence of Regional Climate Change

    NASA Astrophysics Data System (ADS)

    Herrington, Tyler; Zickfeld, Kirsten

    2013-04-01

    Path dependence of the climate response to CO2 forcing has been investigated from a global mean perspective, with evidence suggesting that long-term global mean temperature and precipitation changes are proportional to cumulative CO2 emissions, and independent of emissions pathway. Little research, however, has been done on path dependence of regional climate changes, particularly in areas that could be affected by tipping points. Here, we utilize the UVic Earth System Climate Model version 2.9, an Earth System Model of Intermediate Complexity. It consists of a 3-dimensional ocean general circulation model, coupled with a dynamic-thermodynamic sea ice model, and a thermodynamic energy-moisture balance model of the atmosphere. This is then coupled with a terrestrial carbon cycle model and an ocean carbon-cycle model containing an inorganic carbon and marine ecosystem component. Model coverage is global with a zonal resolution of 3.6 degrees and meridional resolution of 1.8 degrees. The model is forced with idealized emissions scenarios across five cumulative emission groups (1300 GtC, 2300 GtC, 3300 GtC, 4300 GtC, and 5300 GtC) to explore the path dependence of (and the possibility of hysteresis in) regional climate changes. Emission curves include both fossil carbon emissions and emissions from land use changes, and span a variety of peak and decline scenarios with varying emission rates, as well as overshoot and instantaneous pulse scenarios. Tipping points being explored include those responsible for the disappearance of summer Arctic sea-ice, the irreversible melt of the Greenland Ice Sheet, the collapse of the Atlantic Thermohaline Circulation, and the dieback of the Amazonian Rainforest. Preliminary results suggest that global mean climate change after cessation of CO2 emissions is independent of the emissions pathway, only varying with total cumulative emissions, in accordance with results from earlier studies. Forthcoming analysis will investigate path dependence of regional climate change. Some evidence exists to support the idea of hysteresis in the Greenland Ice Sheet, and since tipping points represent non-linear elements of the climate system, we suspect that the other tipping points might also show path dependence.

  5. Using the NASTRAN Thermal Analyzer to simulate a flight scientific instrument package

    NASA Technical Reports Server (NTRS)

    Lee, H.-P.; Jackson, C. E., Jr.

    1974-01-01

    The NASTRAN Thermal Analyzer has proven to be a unique and useful tool for thermal analyses involving large and complex structures where small, thermally induced deformations are critical. Among its major advantages are direct grid point-to-grid point compatibility with large structural models; plots of the model that may be generated for both conduction and boundary elements; versatility of applying transient thermal loads especially to repeat orbital cycles; on-line printer plotting of temperatures and rate of temperature changes as a function of time; and direct matrix input to solve linear differential equations on-line. These features provide a flexibility far beyond that available in most finite-difference thermal analysis computer programs.

  6. Measuring change for a multidimensional test using a generalized explanatory longitudinal item response model.

    PubMed

    Cho, Sun-Joo; Athay, Michele; Preacher, Kristopher J

    2013-05-01

    Even though many educational and psychological tests are known to be multidimensional, little research has been done to address how to measure individual differences in change within an item response theory framework. In this paper, we suggest a generalized explanatory longitudinal item response model to measure individual differences in change. New longitudinal models for multidimensional tests and existing models for unidimensional tests are presented within this framework and implemented with software developed for generalized linear models. In addition to the measurement of change, the longitudinal models we present can also be used to explain individual differences in change scores for person groups (e.g., learning disabled students versus non-learning disabled students) and to model differences in item difficulties across item groups (e.g., number operation, measurement, and representation item groups in a mathematics test). An empirical example illustrates the use of the various models for measuring individual differences in change when there are person groups and multiple skill domains which lead to multidimensionality at a time point. © 2012 The British Psychological Society.

  7. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
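
    The key step, representing each template point as an affine combination of its neighbors with weights obtained by least squares, is a small linear-algebra problem; a minimal sketch with toy points follows (the sum-to-one handling via an augmented row is an illustrative choice).

```python
import numpy as np

# Each template point is represented as an affine combination of its
# neighbors: neighbors.T @ w ~= point with sum(w) = 1, solved by least
# squares with the constraint appended as an extra equation.
def affine_weights(point, neighbors):
    """Weights w with neighbors.T @ w ~= point and sum(w) = 1."""
    A = np.vstack([neighbors.T, np.ones(len(neighbors))])   # (dim+1, k)
    b = np.append(point, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
p = np.array([1.0, 0.5])
w = affine_weights(p, pts)
print(w.sum(), pts.T @ w)   # 1.0, and a reconstruction equal to p
```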

  8. Mapping the order and pattern of brain structural MRI changes using change-point analysis in premanifest Huntington's disease.

    PubMed

    Wu, Dan; Faria, Andreia V; Younes, Laurent; Mori, Susumu; Brown, Timothy; Johnson, Hans; Paulsen, Jane S; Ross, Christopher A; Miller, Michael I

    2017-10-01

    Huntington's disease (HD) is an autosomal dominant neurodegenerative disorder that progressively affects motor, cognitive, and emotional functions. Structural MRI studies have demonstrated brain atrophy beginning many years prior to clinical onset ("premanifest" period), but the order and pattern of brain structural changes have not been fully characterized. In this study, we investigated brain regional volumes and diffusion tensor imaging (DTI) measurements in premanifest HD, and we aimed to determine (1) the extent of MRI changes in a large number of structures across the brain by atlas-based analysis, and (2) the initiation points of structural MRI changes in these brain regions. We adopted a novel multivariate linear regression model to detect the inflection points at which the MRI changes begin (namely, "change-points"), with respect to the CAG-age product (CAP, an indicator of extent of exposure to the effects of CAG repeat expansion). We used approximately 300 T1-weighted and DTI datasets from premanifest HD and control subjects in the PREDICT-HD study, with atlas-based whole brain segmentation and change-point analysis. The results indicated a distinct topology of structural MRI changes: the change-points of the volumetric measurements suggested a central-to-peripheral pattern of atrophy from the striatum to the deep white matter, and the change-points of DTI measurements indicated the earliest changes in mean diffusivity in the deep white matter and posterior white matter. While interpretation needs to be cautious given the cross-sectional nature of the data, these findings suggest a spatial and temporal pattern of spread of structural changes within the HD brain. Hum Brain Mapp 38:5035-5050, 2017. © 2017 Wiley Periodicals, Inc.

  9. Observer-dependent sign inversions of polarization singularities.

    PubMed

    Freund, Isaac

    2014-10-15

    We describe observer-dependent sign inversions of the topological charges of vector field polarization singularities: C points (points of circular polarization), L points (points of linear polarization), and two virtually unknown singularities we call γ(C) and α(L) points. In all cases, the sign of the charge seen by an observer can change as she changes the direction from which she views the singularity. Analytic formulas are given for all C and all L point sign inversions.

  10. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate changes from orthodontic or orthopedic treatment in the dental field. With the introduction of cone-beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks under the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion, and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  11. A distributed lag approach to fitting non-linear dose-response models in particulate matter air pollution time series investigations.

    PubMed

    Roberts, Steven; Martin, Michael A

    2007-06-01

    The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model is shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model was formed. When fitted with a change-point value of 60 microg/m(3), the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increases in mortality for PM concentrations of 25 and 75 microg/m(3) were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
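
    A hedged sketch of the combination follows: a Poisson regression on distributed lags of the exceedance max(PM - 60, 0), so concentrations below the change point contribute nothing. The data, lag count, and coefficients are synthetic stand-ins for the ten-city analysis.

```python
import numpy as np
import statsmodels.api as sm

# Change-point dose-response combined with distributed lags: log mortality
# rate is linear in lagged exceedances max(PM - 60, 0), so PM below the
# change point has no effect. All numbers are synthetic stand-ins.
rng = np.random.default_rng(0)
n, max_lag, cp_value = 1500, 3, 60.0
pm = rng.gamma(shape=4.0, scale=12.0, size=n)          # daily PM series
excess = np.maximum(pm - cp_value, 0.0)
X = np.column_stack([np.roll(excess, L)                # lag matrix, lags 0..3
                     for L in range(max_lag + 1)])[max_lag:]
true_betas = np.array([0.004, 0.003, 0.002, 0.001])    # effect spread over lags
deaths = rng.poisson(np.exp(np.log(30.0) + X @ true_betas))

fit = sm.GLM(deaths, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(fit.params[1:])                                  # near the true lag effects
```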

  12. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
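
    For a log-linear intensity the instantaneous steepest-descent update has a particularly simple form, since the gradient of the log likelihood over one time step is (dN - lambda*dt) * dlog(lambda)/dtheta. A minimal sketch on simulated 1D place-field data (not the authors' CA1 recordings):

```python
import numpy as np

# Adaptive filtering of a 1D place field with log-linear intensity
# lambda(t) = exp(a + b*x(t)). Per time step the steepest-descent update on
# the instantaneous point-process log likelihood is
#   theta += eps * (dN - lambda*dt) * dlog(lambda)/dtheta,
# and for this model dlog(lambda)/dtheta = [1, x(t)].
rng = np.random.default_rng(0)
dt, n_steps, eps = 0.001, 200_000, 0.01
x = np.sin(2 * np.pi * np.arange(n_steps) * dt / 5.0)  # position on the track
a_true, b_true = np.log(5.0), 1.5                      # true field parameters
spikes = rng.random(n_steps) < np.exp(a_true + b_true * x) * dt

theta = np.zeros(2)                                    # estimates of [a, b]
for t in range(n_steps):
    lam = np.exp(theta[0] + theta[1] * x[t])
    theta += eps * (spikes[t] - lam * dt) * np.array([1.0, x[t]])

print(theta, (a_true, b_true))  # estimates wander near the true values
```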

  13. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.

  14. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least square model and the autoregression model. However, the LOD change comprises complicated non-linear factors and the prediction effect of the linear models is not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN), is applied to the prediction of the LOD change, and the result is compared with the predicted results obtained from the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
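
    A GRNN is essentially a Gaussian-kernel-weighted average of training targets (Nadaraya-Watson regression) with a single smoothing parameter, which a few lines of NumPy can express; the series below is a synthetic stand-in for the LOD data.

```python
import numpy as np

# A GRNN prediction is a Gaussian-kernel-weighted average of the training
# targets, with one smoothing parameter sigma.
def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic stand-in for an LOD-change-like series: non-linear plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 200)
lod = 0.3 * np.sin(2 * np.pi * t) + 0.1 * t + 0.02 * rng.normal(size=200)
t_query = np.linspace(0.5, 3.5, 7)
print(grnn_predict(t, lod, t_query))  # smooth non-linear fit at query times
```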

  15. A conformal approach for the analysis of the non-linear stability of radiation cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luebbe, Christian; Valiente Kroon, Juan Antonio

    2013-01-15

    The conformal Einstein equations for a trace-free (radiation) perfect fluid are derived in terms of the Levi-Civita connection of a conformally rescaled metric. These equations are used to provide a non-linear stability result for de Sitter-like trace-free (radiation) perfect fluid Friedman-Lemaitre-Robertson-Walker cosmological models. The solutions thus obtained exist globally towards the future and are future geodesically complete. Highlights: We study the Einstein-Euler system in General Relativity using conformal methods. We analyze the structural properties of the associated evolution equations. We establish the non-linear stability of pure radiation cosmological models.

  16. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rachel; Gardner, Beth; Chandler, Richard B.; Royle, J. Andrew; Sillett, T. Scott

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data-generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  17. An open-population hierarchical distance sampling model.

    PubMed

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  18. Series length used during trend analysis affects sensitivity to changes in progression rate in the ocular hypertension treatment study.

    PubMed

    Gardiner, Stuart K; Demirel, Shaban; De Moraes, Carlos Gustavo; Liebmann, Jeffrey M; Cioffi, George A; Ritch, Robert; Gordon, Mae O; Kass, Michael A

    2013-02-15

    Trend analysis techniques to detect glaucomatous progression typically assume a constant rate of change. This study uses data from the Ocular Hypertension Treatment Study to assess whether this assumption decreases sensitivity to changes in progression rate, by including earlier periods of stability. Series of visual fields (mean 24 per eye) completed at 6-month intervals from participants randomized initially to observation were split into subseries before and after the initiation of treatment (the "split-point"). The mean deviation rate of change (MDR) was derived using the entire subseries, and using only the W tests nearest the split-point, for a range of window lengths W. A generalized estimating equation model was used to detect changes in MDR occurring at the split-point. Using shortened subseries with W = 7 tests, the MDR slowed by 0.142 dB/y upon initiation of treatment (P < 0.001), and the proportion of eyes showing "rapid deterioration" (MDR <-0.5 dB/y with P < 5%) decreased from 11.8% to 6.5% (P < 0.001). Using the entire sequence, no significant change in MDR was detected (P = 0.796), and there was no change in the proportion of eyes progressing (P = 0.084). Window lengths 6 ≤ W ≤ 9 produced similar benefits. Event analysis revealed a beneficial treatment effect in this dataset. This effect was not detected by linear trend analysis applied to the entire series, but was detected when using shorter subseries of length between six and nine fields. Using linear trend analysis on the entire field sequence may not be optimal for detecting and monitoring progression. Nonlinear analyses may be needed for long series of fields. (ClinicalTrials.gov number, NCT00000125.).
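
    The windowed analysis is easy to replicate on synthetic data: fit a straight line to the whole series (one constant rate) and to the W tests on either side of the split-point. The slopes and noise level below are made up; only the 24-field length, 6-month spacing, and W = 7 mirror the text.

```python
import numpy as np

# Windowed trend analysis: the mean deviation (MD) rate is the slope of a
# straight-line fit, computed over the full series or over only the W tests
# nearest the split-point.
rng = np.random.default_rng(0)
t = np.arange(24) * 0.5                      # 24 fields at 6-month intervals
md = np.where(t < 6.0, -0.6 * t, -3.6 - 0.2 * (t - 6.0))  # rate slows at t = 6
md = md + 0.3 * rng.normal(size=24)          # test-retest variability

full_rate = np.polyfit(t, md, 1)[0]          # assumes one constant rate
W = 7
before = slice(12 - W, 12)                   # W tests just before the split
after = slice(12, 12 + W)                    # W tests just after
print(full_rate,
      np.polyfit(t[before], md[before], 1)[0],
      np.polyfit(t[after], md[after], 1)[0])  # the rate change is visible
```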

  19. Probabilistic change mapping from airborne LiDAR for post-disaster damage assessment

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Runyon, S. C.; Kruse, F. A.

    2013-12-01

    When both pre- and post-event LiDAR point clouds are available, change detection can be performed to identify areas that were most affected by a disaster event, and to obtain a map of quantitative changes in terms of height differences. In the case of earthquakes in built-up areas, for instance, first responders can use a LiDAR change map to help prioritize search and recovery efforts. The main challenge consists of producing reliable change maps, robust to collection conditions, free of processing artifacts (due for instance to triangulation or gridding), and taking into account the various sources of uncertainty. Indeed, datasets acquired within a few years' interval are often of different point density (sometimes an order of magnitude higher for recent data) and different acquisition geometries, and very likely suffer from georeferencing errors and geometric discrepancies. All these differences might not be important for producing maps from each dataset separately, but they are crucial when performing change detection. We have developed a novel technique for the estimation of uncertainty maps from the LiDAR point clouds, using Bayesian inference, treating all variables as random. The main principle is to grid all points on a common grid before attempting any comparison, as working directly with point clouds is cumbersome and time-consuming. A non-parametric approach based on local linear regression was implemented, assuming a locally linear model for the surface. This enabled us to derive error bars on gridded elevations, and then elevation differences. In this way, a map of statistically significant changes could be computed, whereas a deterministic approach would not allow testing of the significance of differences between the two datasets. This approach allowed us to take into account not only the observation noise (due to ranging, position and attitude errors) but also the intrinsic roughness of the observed surfaces occurring when scanning vegetation. As only elevation differences above a predefined noise level are accounted for (according to a specified confidence interval related to the allowable false alarm rate), the change detection is robust to all these sources of noise. To first validate the approach, we built small-scale models and scanned them using a terrestrial laser scanner to establish 'ground truth'. Changes were manually applied to the models, then new scans were performed and analyzed. Additionally, two airborne datasets of the Monterey Peninsula, California, were processed and analyzed. The first one was acquired during 2010 (with relatively low point density, 1-3 pts/m2), and the second one was acquired during 2012 (with up to 30 pts/m2). To perform the comparison, a new point cloud registration technique was developed and the data were registered to a common 1 m grid. The goal was to correct systematic shifts due to GPS and INS errors, and focus on the actual height differences regardless of the absolute planimetric accuracy of the datasets. Though no major disaster event occurred between the two acquisition dates, sparse changes were detected and interpreted mostly as construction and natural landscape evolution.
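
    The per-cell estimate described above can be sketched as a local plane fit with propagated uncertainty; the helper below is illustrative (synthetic returns, one cell), not the authors' full Bayesian treatment.

```python
import numpy as np

# One grid cell: fit a local plane z = b0 + b1*x + b2*y by least squares and
# propagate residual variance into an error bar on the cell elevation; then
# test whether the pre/post elevation difference is statistically significant.
def cell_elevation(pts):
    """pts: (n, 3) returns in one cell -> (height, standard error) at centroid."""
    xy = pts[:, :2] - pts[:, :2].mean(axis=0)       # center for stability
    A = np.column_stack([np.ones(len(pts)), xy])
    beta, res, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    s2 = res[0] / (len(pts) - 3)                    # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)
    return beta[0], np.sqrt(cov[0, 0])              # intercept = centroid height

rng = np.random.default_rng(0)
pre = np.column_stack([rng.uniform(0, 1, (50, 2)),          # sparse 2010-style
                       10.0 + 0.3 * rng.normal(size=50)])
post = np.column_stack([rng.uniform(0, 1, (200, 2)),        # dense 2012-style
                        10.8 + 0.3 * rng.normal(size=200)])
(z1, se1), (z2, se2) = cell_elevation(pre), cell_elevation(post)
dz = z2 - z1
print(dz, abs(dz) / np.hypot(se1, se2) > 1.96)      # change and significance
```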

  20. Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models

    DTIC Science & Technology

    2015-07-06

    Cubature/Unscented/Sigma Point Kalman Filtering with Angular Measurement Models. David Frederic Crouse, Naval Research Laboratory, 4555 Overlook Ave... measurement and process non-linearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms

  1. A chaotic view of behavior change: a quantum leap for health promotion.

    PubMed

    Resnicow, Ken; Vaughan, Roger

    2006-09-12

    The study of health behavior change, including nutrition and physical activity behaviors, has been rooted in a cognitive-rational paradigm. Change is conceptualized as a linear, deterministic process where individuals weigh pros and cons, and at the point at which the benefits outweigh the costs, change occurs. Consistent with this paradigm, the associated statistical models have almost exclusively assumed a linear relationship between psychosocial predictors and behavior. Such a perspective, however, fails to account for non-linear, quantum influences on human thought and action. Consider why, after years of false starts and failed attempts, a person succeeds at increasing their physical activity, eating healthier, or losing weight. Or why, after years of success, a person relapses. This paper discusses a competing view of health behavior change that was presented at the 2006 annual ISBNPA meeting in Boston. Rather than viewing behavior change from a linear perspective, it can be viewed as a quantum event that can be understood through the lens of Chaos Theory and Complex Dynamic Systems. Key principles of Chaos Theory and Complex Dynamic Systems relevant to understanding health behavior change include: 1) Chaotic systems can be mathematically modeled but are nearly impossible to predict; 2) Chaotic systems are sensitive to initial conditions; 3) Complex Systems involve multiple component parts that interact in a nonlinear fashion; and 4) The results of Complex Systems are often greater than the sum of their parts. Accordingly, small changes in knowledge, attitude, efficacy, etc. may dramatically alter motivation and behavioral outcomes. And the interaction of such variables can yield almost infinite potential patterns of motivation and behavior change. In the linear paradigm, unaccounted-for variance is generally relegated to the catch-all "error" term, when in fact such "error" may represent the chaotic component of the process. The linear and chaotic paradigms are, however, not mutually exclusive, as behavior change may include both chaotic and cognitive processes. Studies of addiction suggest that many decisions to change are quantum rather than planned events; motivation arrives as opposed to being planned. Moreover, changes made through quantum processes appear more enduring than those that involve more rational, planned processes. How such processes may apply to nutrition and physical activity behavior and related interventions merits examination.

  2. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  3. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Dall'Anese, Emiliano

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
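
    A single-phase caricature of the fixed-point view conveys the idea (the paper's multiphase wye/delta machinery is much richer): iterating V = w + Z conj(s/V) gives the exact solution, and replacing V by the no-load profile w inside the conjugate gives a model linear in the injections s. All values below are synthetic.

```python
import numpy as np

# Fixed-point view of the power-flow equations in a toy single-phase system:
# V = w + Z * conj(s / V), with w the no-load voltage profile and
# Z = inv(Y_LL). The one-step substitution V -> w yields a linear model.
n = 4                                               # non-slack buses
rng = np.random.default_rng(0)
Y_LL = np.diag(np.full(n, 8 - 24j)) + 0.5j * rng.random((n, n))  # toy admittances
Y_LL = (Y_LL + Y_LL.T) / 2
Z = np.linalg.inv(Y_LL)
w = np.full(n, 1.0 + 0.0j)                          # no-load voltages (p.u.)
s = -0.03 * rng.random(n) - 0.01j * rng.random(n)   # net loads (p.u.)

V = w.copy()
for _ in range(20):                                 # exact solution, fixed point
    V = w + Z @ np.conj(s / V)

V_lin = w + Z @ np.conj(s / w)                      # one-step linear model
print(np.abs(V - V_lin).max())                      # small linearization error
```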

  4. Monte Carlo-based investigations on the impact of removing the flattening filter on beam quality specifiers for photon beam dosimetry.

    PubMed

    Czarnecki, Damian; Poppe, Björn; Zink, Klemens

    2017-06-01

    The impact of removing the flattening filter in clinical electron accelerators on the relationship between dosimetric quantities such as beam quality specifiers and the mean photon and electron energies of the photon radiation field was investigated by Monte Carlo simulations. The purpose of this work was to determine the uncertainties when using the well-known beam quality specifiers or energy-based beam specifiers as predictors of dosimetric photon field properties when removing the flattening filter. Monte Carlo simulations applying eight different linear accelerator head models with and without flattening filter were performed in order to generate realistic radiation sources and calculate field properties such as the restricted mass collision stopping-power ratio (L̄/ρ)_air^water and the mean photon and secondary electron energies. To study the impact of removing the flattening filter on the beam quality correction factor k_Q, this factor was calculated for detailed ionization chamber models by Monte Carlo simulations. Stopping-power ratios (L̄/ρ)_air^water and k_Q values for different ionization chambers as a function of TPR_20,10 and %dd(10)_x were calculated. Moreover, mean photon energies in air and at the point of measurement in water, as well as mean secondary electron energies at the point of measurement, were calculated. The results revealed that removing the flattening filter led to a change within 0.3% in the relationship between %dd(10)_x and (L̄/ρ)_air^water, whereas the relationship between TPR_20,10 and (L̄/ρ)_air^water changed by up to 0.8% for high-energy photon beams. However, TPR_20,10 was a good predictor of (L̄/ρ)_air^water for both types of linear accelerator at energies < 10 MeV, with a maximal deviation between both types of accelerators of 0.23%. According to the results, the mean photon energy below the linear accelerator head as well as at the point of measurement may not be suitable as a predictor of (L̄/ρ)_air^water and k_Q to merge the dosimetry of both linear accelerator types. It was possible to derive (L̄/ρ)_air^water using the mean secondary electron energy at the point of measurement as a predictor with an accuracy of 0.17%. A bias between k_Q values for linear accelerators with and without flattening filter within 1.1% and 1.6% was observed for TPR_20,10 and %dd(10)_x, respectively. The results of this study have shown that removing the flattening filter led to a change in the relationship between the well-known beam quality specifiers and dosimetric quantities at the point of measurement, namely (L̄/ρ)_air^water and the mean photon and electron energies. Furthermore, the results show that a beam profile correction is important for dose measurements with large ionization chambers in flattening-filter-free beams. © 2017 American Association of Physicists in Medicine.

  5. Singularities of interference of three waves with different polarization states.

    PubMed

    Kurzynowski, Piotr; Woźniak, Władysław A; Zdunek, Marzena; Borwińska, Monika

    2012-11-19

    We present an interference setup which can produce interesting two-dimensional patterns in the polarization state of the resulting light wave emerging from the setup. The main element of our setup is the Wollaston prism, which gives two plane, linearly polarized waves (eigenwaves of both Wollaston wedges) with a linearly changing phase difference between them (along the x-axis). The third wave, coming from the second arm of the proposed polarization interferometer, is linearly or circularly polarized with a linearly changing phase difference along the y-axis. The interference of three plane waves with different polarization states (LLL - linear-linear-linear or LLC - linear-linear-circular) and variable phase difference produces two-dimensional light polarization and phase distributions with some characteristic points and lines which can be claimed to constitute singularities of different types. The aim of this article is to find all kinds of these phase and polarization singularities and to classify them. We postulated in our theoretical simulations and verified in our experiments different kinds of polarization singularities, depending on which polarization parameter was considered (the azimuth and ellipticity angles or the diagonal and phase angles). We also observed the phase singularities as well as the isolated zero-intensity points which resulted from the polarization singularities when the proper analyzer was used at the end of the setup. The classification of all these singularities, as well as their relationships, is analyzed and described.

  6. Wavefront Sensing for WFIRST with a Linear Optical Model

    NASA Technical Reports Server (NTRS)

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
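
    The core of this joint-estimation idea can be sketched in a few lines: if a ray-trace-derived sensitivity matrix maps alignment parameters to wavefront coefficients at each field point, the misalignment state follows from one stacked least-squares solve over all field points. The matrices and dimensions below are invented for illustration; they are not the WFIRST model.

    ```python
    import numpy as np

    # Sketch of the linear optical model: wavefront Zernike coefficients at
    # field point i are modeled as w_i = A_i @ p, where p holds mechanical
    # alignment parameters and A_i is a ray-trace-derived sensitivity matrix.
    rng = np.random.default_rng(0)
    n_field, n_zern, n_params = 25, 15, 6

    A = rng.normal(size=(n_field, n_zern, n_params))   # hypothetical sensitivities
    p_true = rng.normal(size=n_params)                 # true misalignment state

    # Noisy wavefront estimates at every field point (e.g., from phase retrieval).
    w_meas = np.einsum('fzp,p->fz', A, p_true) + 0.01 * rng.normal(size=(n_field, n_zern))

    # Joint least-squares over all field points: stack rows and solve once,
    # rather than converging phase retrieval separately per field point.
    A_stack = A.reshape(n_field * n_zern, n_params)
    w_stack = w_meas.reshape(-1)
    p_hat, *_ = np.linalg.lstsq(A_stack, w_stack, rcond=None)
    print(np.round(p_hat - p_true, 4))                 # residual misalignment error
    ```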

  7. Simulation of mechano-electrical transduction in the cochlea considering basilar membrane vibration and the ionic current of the inner hair cells

    NASA Astrophysics Data System (ADS)

    Lee, Sinyoung; Koike, Takuji

    2018-05-01

    The inner hair cells (IHCs) in the cochlea transduce mechanical vibration of the basilar membrane (BM), caused by sound pressure, into electrical signals that are transported along the acoustic nerve to the brain. The mechanical vibration of the BM and the ionic behaviors of the IHCs have each been investigated separately. However, consideration of the ionic behavior of the IHCs in relation to mechanical vibration is necessary to investigate the mechano-electrical transduction of the cochlea. In this study, a finite-element model of the BM, which takes into account the non-linear activities of the outer hair cells (OHCs), and an ionic current model of the IHC were combined. The amplitudes and phases of the vibration at several points on the BM were obtained from the finite-element model by applying sound pressure. These values were fed into the ionic current model, and the resulting changes in membrane potential and calcium ion concentration of the IHCs were calculated. The membrane potential of the IHC at the maximum amplitude point (CF point) was higher than that at the non-CF points. The calcium ion concentration at the CF point was also higher than that at the non-CF points. These results suggest that the cochlea achieves its good frequency discrimination ability through mechano-electrical transduction.

  8. Bone mineral density loss in thoracic and lumbar vertebrae following radiation for abdominal cancers.

    PubMed

    Wei, Randy L; Jung, Brian C; Manzano, Wilfred; Sehgal, Varun; Klempner, Samuel J; Lee, Steve P; Ramsinghani, Nilam S; Lall, Chandana

    2016-03-01

    To investigate the relationship between abdominal chemoradiation (CRT) for locally advanced cancers and bone mineral density (BMD) reduction in the vertebral spine. Data from 272 patients who underwent abdominal radiation therapy from January 1997 to May 2015 were retrospectively reviewed. Forty-two patients received computed tomography (CT) scans of the abdomen prior to initiation of radiation therapy and at least twice afterward. Bone attenuation measurements (in Hounsfield units, HU) were collected for each vertebral level from T7 to L5 using sagittal CT images. The radiation point dose was obtained at each mid-vertebral body from the radiation treatment plan. The percent change in bone attenuation (Δ%HU) between baseline and post-radiation therapy was computed for each vertebral body. The Δ%HU was compared against radiation dose using Pearson's linear correlation. Abdominal radiotherapy caused significant reduction in vertebral BMD as measured by HU. Patients who received only chemotherapy did not show changes in their BMD in this study. The Δ%HU was significantly correlated with the radiation point dose to the vertebral body (R=-0.472, P<0.001) within 4-8 months following RT. The same relationship persisted in subsequent follow-up scans 9 months following RT (R=-0.578, P<0.001). Based on the result of linear regression, 5 Gy, 15 Gy, 25 Gy, 35 Gy, and 45 Gy caused 21.7%, 31.1%, 40.5%, 49.9%, and 59.3% decreases in HU following RT, respectively. Our generalized linear model showed that pre-RT HU had a positive effect (β=0.830) on determining post-RT HU, while the number of months post-RT (β=-0.213) and the radiation point dose (β=-1.475) had negative effects. A comparison of the predicted versus actual HU showed significant correlation (R=0.883, P<0.001) with the slope of the best linear fit = 0.81. Our model's predicted HU values were within ±20 HU of the actual value in 53% of cases, 70% of the predictions were within ±30 HU, 81% were within ±40 HU, and 90% were within ±50 HU of the actual post-RT HU. Four of 42 patients were found to have vertebral body compression fractures in the field of radiation. Patients who receive abdominal chemoradiation develop significant BMD loss in the thoracic and lumbar vertebrae. Treatment-related BMD loss may contribute to the development of vertebral compression fractures. A predictive model for post-CRT BMD changes may inform bone protective strategies in patients planned for abdominal CRT. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
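
    As a hedged illustration of the quoted dose-response numbers: the five (dose, %decrease) pairs reported above pin down a straight line, and the snippet below merely recovers its slope and intercept. The paper's regression intercept for absolute HU is not reported, so no absolute prediction is attempted.

    ```python
    import numpy as np

    # (dose, % decrease in HU) pairs quoted in the abstract above.
    dose = np.array([5.0, 15.0, 25.0, 35.0, 45.0])       # Gy
    pct_drop = np.array([21.7, 31.1, 40.5, 49.9, 59.3])  # % decrease in HU

    slope, intercept = np.polyfit(dose, pct_drop, 1)
    print(f"percent HU decrease ≈ {intercept:.1f} + {slope:.2f} * dose_Gy")
    # -> percent HU decrease ≈ 17.0 + 0.94 * dose_Gy
    ```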

  9. Repassivation Investigations on Aluminium: Physical Chemistry of the Passive State

    NASA Astrophysics Data System (ADS)

    Nagy, Tristan Oliver; Weimerskirch, Morris Jhängi Joseph; Pacher, Ulrich; Kautek, Wolfgang

    2016-09-01

    We show the temporal change in repassivation mechanism as a time-dependent linear combination of a high-field model of oxide growth (HFM) and the point defect model (PDM). The observed switch in the transient repassivation current decrease under potentiostatic control occurs independently of the active electrode size and effective repassivation time for all applied overpotentials. For that, in situ depassivation of plasma electrolytically oxidized (PEO) coatings on aluminium was performed with nanosecond laser pulses at 266 nm, and the repassivation current transients were recorded as a function of pulse number. A mathematical model combines the well-established theories of oxide-film formation and growth kinetics, giving insight into the non-linear transient behaviour of micro-defect passivation. According to our findings, the repassivation process can be described as charge consumption via two concurrent channels. While the major current decay at the very beginning of the fast-healing oxide follows a point-defect-type exponential damping, the HFM mechanism gradually takes over as the repassivation evolves. Furthermore, the material seems to retain a memory of former laser treatments via defects built in during depassivation, leading to a higher charge contribution of the PDM mechanism at higher pulse numbers.

  10. Shifts in river-floodplain relationship reveal the impacts of river regulation: A case study of Dongting Lake in China

    NASA Astrophysics Data System (ADS)

    Lu, Cai; Jia, Yifei; Jing, Lei; Zeng, Qing; Lei, Jialin; Zhang, Shuanghu; Lei, Guangchun; Wen, Li

    2018-04-01

    Better understanding of the dynamics of hydrological connectivity between river and floodplain is essential for the ecological integrity of river systems. In this study, we proposed a regime-switch modelling (RSM) framework, which integrates change point analysis with dynamic linear regression, to detect and date change points in linear regression and to quantify the relative importance of natural variations and anthropogenic disturbances. The approach was applied to long-term hydrological time series to investigate the evolution of the river-floodplain relation in Dongting Lake over the last five decades, during which the Yangtze River system experienced unprecedented anthropogenic manipulations. Our results suggested that 1) there were five distinct regimes during which the influence of inflows and local climate on lake water level changed significantly, and the detected change points corresponded well to the major events that occurred on the Yangtze; 2) although the importance of inflows from the Yangtze was greater than that of the tributary inflows over the five regimes, the relative contribution gradually decreased from regime 1 to regime 5; the weakening of hydrological forcing from the Yangtze was mainly attributed to the reduction in channel capacity resulting from sedimentation in the outfalls and to water-level dropping caused by riverbed scour in the mainstream; 3) the effects of local climate were much smaller than those of inflows; and 4) since the operation of the Three Gorges Dam began in 2006, the river-floodplain relationship has entered a new equilibrium in which all investigated variables changed synchronously in terms of direction and magnitude. The results from this study reveal the mechanisms underlying the altered inundation regime in Dongting Lake. The identified change points, some of which have not been previously reported, will allow a reappraisal of the current dam and reservoir operation strategies, not only for flood/drought risk management but also for the maintenance and restoration of regional ecological integrity.
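
    A minimal stand-in for the change-point step of such a framework is an exhaustive breakpoint scan that minimizes the pooled squared error of two linear fits; the full RSM framework additionally couples this with dynamic linear regression, which is not reproduced here, and the series below is synthetic.

    ```python
    import numpy as np

    def best_change_point(t, y, min_seg=10):
        """Scan candidate breakpoints and return the index minimizing the
        total squared error of two independent linear fits."""
        def sse(tt, yy):
            beta = np.polyfit(tt, yy, 1)
            return np.sum((yy - np.polyval(beta, tt)) ** 2)
        best_k, best_cost = None, np.inf
        for k in range(min_seg, len(t) - min_seg):
            cost = sse(t[:k], y[:k]) + sse(t[k:], y[k:])
            if cost < best_cost:
                best_k, best_cost = k, cost
        return best_k

    # Synthetic water-level series with a regime shift at index 60.
    rng = np.random.default_rng(1)
    t = np.arange(120.0)
    y = np.where(t < 60, 0.05 * t, 3.0 - 0.02 * (t - 60)) + 0.1 * rng.normal(size=t.size)
    print(best_change_point(t, y))   # ~60
    ```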

  11. Non-linear regime of the Generalized Minimal Massive Gravity in critical points

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Adami, H.

    2016-03-01

    The Generalized Minimal Massive Gravity (GMMG) theory is realized by adding the CS deformation term, the higher derivative deformation term, and an extra term to pure Einstein gravity with a negative cosmological constant. In the present paper we obtain exact solutions to the GMMG field equations in the non-linear regime of the model. The GMMG model about AdS_3 space is conjectured to be dual to a 2-dimensional CFT. We study the theory at the critical points corresponding to the central charges c_-=0 or c_+=0, in the non-linear regime. We show that AdS_3 wave solutions are present and take a logarithmic form at the critical points. Then we study the AdS_3 non-linear deformation solution. Furthermore, we obtain a logarithmic deformation of the extremal BTZ black hole. After that, using the Abbott-Deser-Tekin method, we calculate the energy and angular momentum of these types of black hole solutions.

  12. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  13. Feedback linearization for control of air breathing engines

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen; Mattern, Duane

    1991-01-01

    The method of feedback linearization for control of the nonlinear nozzle and compressor components of an air breathing engine is presented. This method overcomes the need for a large number of scheduling variables and operating points to accurately model highly nonlinear plants. Feedback linearization also results in linear closed loop system performance simplifying subsequent control design. Feedback linearization is used for the nonlinear partial engine model and performance is verified through simulation.
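
    The idea generalizes beyond engines; for a scalar control-affine plant it reduces to a one-line control law, sketched below on an invented toy system (not the engine model of the paper).

    ```python
    import numpy as np

    # Input-output feedback linearization for xdot = f(x) + g(x) u: the law
    # u = (v - f(x)) / g(x) cancels the nonlinearity, so the closed loop
    # behaves as the linear system xdot = v.
    f = lambda x: -x**3 + np.sin(x)      # nonlinear drift (illustrative)
    g = lambda x: 2.0 + np.cos(x)        # input gain, nonzero everywhere

    def step(x, x_ref, dt=1e-3, k=5.0):
        v = -k * (x - x_ref)             # linear outer-loop design
        u = (v - f(x)) / g(x)            # linearizing inner loop
        return x + dt * (f(x) + g(x) * u)

    x = 2.0
    for _ in range(5000):
        x = step(x, x_ref=1.0)
    print(round(x, 3))                   # converges to the reference, ~1.0
    ```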

  14. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which the terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings of the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Trajectories of change across outcomes in intensive treatment for adolescent panic disorder and agoraphobia.

    PubMed

    Gallo, Kaitlin P; Cooper-Vince, Christine E; Hardway, Christina L; Pincus, Donna B; Comer, Jonathan S

    2014-01-01

    Much remains to be learned about typical and individual growth trajectories across treatment for adolescent panic disorder with and without agoraphobia and about critical treatment points associated with key changes. The present study examined the rate and shape of change across an 8-day intensive cognitive behavioral therapy for adolescent panic disorder with and without agoraphobia (N = 56). Participants ranged in age from 12 to 17 (M = 15.14, SD = 1.70; 58.9% female, 78.6% Caucasian). Multilevel modeling evaluated within-treatment linear and nonlinear changes across three treatment outcomes: panic severity, fear, and avoidance. Overall panic severity showed linear change, decreasing throughout treatment. In contrast, fear and avoidance ratings both showed cubic change, peaking slightly at the first session of treatment, starting to decrease at the second session of treatment, and with large gains continuing then plateauing at the fourth session. Findings are considered with regard to the extent to which they may elucidate critical treatment components and sessions for adolescents with panic disorder with and without agoraphobia.

  16. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN), to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the GRNN model is an efficient and feasible predictor of the LOD change.
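
    In its textbook form the GRNN is a Nadaraya-Watson kernel regression, which makes a minimal sketch short; the series, one-step embedding, and bandwidth below are illustrative stand-ins, not the paper's actual LOD data or settings.

    ```python
    import numpy as np

    def grnn_predict(x_train, y_train, x_query, sigma=0.5):
        """General regression neural network in its textbook form: a
        Gaussian-kernel-weighted average of the training targets."""
        d2 = (x_query[:, None] - x_train[None, :]) ** 2
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    # Illustrative LOD-style series: predict the next value from the previous one.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 20, 400)
    series = np.sin(t) + 0.3 * np.sin(3.1 * t) + 0.02 * rng.normal(size=t.size)
    x, y = series[:-1], series[1:]
    y_hat = grnn_predict(x[:300], y[:300], x[300:])
    print(round(float(np.mean((y_hat - y[300:]) ** 2)), 5))  # test MSE
    ```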

  17. Progression of MDS-UPDRS Scores Over Five Years in De Novo Parkinson Disease from the Parkinson's Progression Markers Initiative Cohort.

    PubMed

    Holden, Samantha K; Finseth, Taylor; Sillau, Stefan H; Berman, Brian D

    2018-01-01

    The Movement Disorder Society Unified Parkinson Disease Rating Scale (MDS-UDPRS) is a commonly used tool to measure Parkinson disease (PD) progression. Longitudinal changes in MDS-UPDRS scores in de novo PD have not been established. Determine progression rates of MDS-UPDRS scores in de novo PD. 362 participants from the Parkinson's Progression Markers Initiative, a multicenter longitudinal cohort study of de novo PD, were included. Longitudinal progression of MDS-UPDRS total and subscale scores were modeled using mixed model regression. MDS-UPDRS scores increased in a linear fashion over five years in de novo PD. MDS-UPDRS total score increased an estimated 4.0 points/year, Part I 0.25 points/year, Part II 1.0 points/year, and Part III 2.4 points/year. The expected average progression of MDS-UPDRS scores in de novo PD from this study can assist in clinical monitoring and provide comparative data for detection of disease modification in treatment trials.

  18. The effect of dropout on the efficiency of D-optimal designs of linear mixed models.

    PubMed

    Ortega-Azurduy, S A; Tan, F E S; Berger, M P F

    2008-06-30

    Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
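
    To make the design criterion concrete, the sketch below evaluates the D-criterion det(X'V⁻¹X) for a random-intercept-and-slope model over candidate measurement occasions and picks the best subset. The variance components are invented placeholders, and the dropout weighting that is the paper's actual focus is omitted.

    ```python
    import numpy as np
    from itertools import combinations

    def d_criterion(times, sigma2=1.0, G=np.diag([1.0, 0.1])):
        """D-optimality criterion for a linear mixed model with random
        intercept and slope: det of X' V^{-1} X with V = Z G Z' + sigma2 I.
        The variance components are illustrative assumptions."""
        X = np.column_stack([np.ones(len(times)), times])
        Z = X                                   # random effects share the design
        V = Z @ G @ Z.T + sigma2 * np.eye(len(times))
        return np.linalg.det(X.T @ np.linalg.solve(V, X))

    # Pick the best 4 of 9 candidate measurement occasions on [0, 1].
    candidates = np.linspace(0, 1, 9)
    best = max(combinations(candidates, 4), key=lambda ts: d_criterion(np.array(ts)))
    print(np.round(best, 3))
    ```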

  19. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.

  20. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
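
    A compact stand-in for this kind of estimator is a Viterbi pass over quantized slope values with Gaussian increment likelihoods. The sketch mirrors the hidden-Markov-plus-dynamic-programming structure described above, though MAPSlope's actual likelihood, priors, and parameter-estimation loop are richer; all parameter values here are illustrative.

    ```python
    import numpy as np

    def map_slopes(y, dt=1.0, slopes=np.linspace(-2, 2, 41),
                   noise_sd=0.2, p_stay=0.95):
        """Viterbi-style MAP estimate of a piecewise-constant slope sequence
        from noisy increments: quantized slope states, sticky transitions."""
        dy = np.diff(y)
        n, m = len(dy), len(slopes)
        log_trans = np.full((m, m), np.log((1 - p_stay) / (m - 1)))
        np.fill_diagonal(log_trans, np.log(p_stay))
        log_lik = -0.5 * ((dy[:, None] - slopes[None, :] * dt) / noise_sd) ** 2
        score, back = log_lik[0].copy(), np.zeros((n, m), dtype=int)
        for k in range(1, n):
            cand = score[:, None] + log_trans          # cand[i, j]: i -> j
            back[k] = np.argmax(cand, axis=0)
            score = cand[back[k], np.arange(m)] + log_lik[k]
        path = [int(np.argmax(score))]
        for k in range(n - 1, 0, -1):
            path.append(back[k][path[-1]])
        return slopes[np.array(path[::-1])]

    # Piecewise linear signal with a breakpoint at sample 50.
    rng = np.random.default_rng(3)
    y = np.concatenate([np.arange(50) * 1.0, 50 + np.arange(50) * -0.5])
    est = map_slopes(y + 0.2 * rng.normal(size=y.size))
    print(np.round(est[[10, 80]], 2))   # ~[1.0, -0.5]
    ```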

  2. Automated Illustration of Molecular Flexibility.

    PubMed

    Bryden, A; Phillips, George N; Gleicher, M

    2012-01-01

    In this paper, we present an approach to creating illustrations of molecular flexibility using normal mode analysis (NMA). The output of NMA is a collection of points corresponding to the locations of atoms and associated motion vectors, where a vector for each point is known. Our approach abstracts the complex object and its motion by grouping the points, models the motion of each group as an affine velocity, and depicts the motion of each group by automatically choosing glyphs such as arrows. Affine exponentials allow the extrapolation of nonlinear effects such as near rotations and spirals from the linear velocities. Our approach automatically groups points by finding sets of neighboring points whose motions fit the motion model. The geometry and motion models for each group are used to determine glyphs that depict the motion, with various aspects of the motion mapped to each glyph. We evaluated the utility of our system in real work done by structural biologists both by utilizing it in our own structural biology work and quantitatively measuring its usefulness on a set of known protein conformation changes. Additionally, in order to allow ourselves and our collaborators to effectively use our techniques we integrated our system with commonly used tools for molecular visualization.

  3. Long-term skeletal effects of high-pull headgear followed by fixed appliances for the treatment of Class II malocclusions.

    PubMed

    Bilbo, E Erin; Marshall, Steven D; Southard, Karin A; Allareddy, Verrasathpurush; Holton, Nathan; Thames, Allyn M; Otsby, Marlene S; Southard, Thomas E

    2018-04-18

    The long-term skeletal effects of Class II treatment in growing individuals using high-pull facebow headgear and fixed edgewise appliances have not been reported. The purpose of this study was to evaluate the long-term skeletal effects of treatment using high-pull headgear followed by fixed orthodontic appliances compared to an untreated control group. Changes in anteroposterior and vertical cephalometric measurements of 42 Class II subjects (study group n = 21, mean age = 10.7 years) before treatment, after headgear correction to a Class I molar relationship, after treatment with fixed appliances, and after long-term retention (mean 4.1 years) were compared to similar changes in a matched control group (n = 21, mean age = 10.9 years) by multivariable linear regression models. Compared to control, the study group displayed significant long-term horizontal restriction of A-point (SNA = -1.925°, P < .0001; FH-NA = -3.042°, P < .0001; linear measurement A-point to Vertical Reference = -3.859 mm, P < .0001) and reduction of the ANB angle (-1.767°, P < .0001), with no effect on mandibular horizontal growth or maxillary and mandibular vertical skeletal changes. A-point horizontal restriction and forward mandibular horizontal growth accompanied the study group correction to Class I molar, and these changes were stable long term. One-phase treatment for Class II malocclusion with high-pull headgear followed by fixed orthodontic appliances resulted in correction to Class I molar through restriction of horizontal maxillary growth, with continued horizontal mandibular growth and vertical skeletal changes unaffected. The anteroposterior molar correction and skeletal effects of this treatment were stable long term.

  4. Regression analysis using dependent Polya trees.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  6. Forcing Regression through a Given Point Using Any Familiar Computational Routine.

    DTIC Science & Technology

    1983-03-01

    a linear model, Y = α + βX + ε (Model I); then adopt the principle of least squares; and use sample data to estimate the unknown parameters, α and β ... has an expected value of zero indicates that the "average" response is considered linear. If ε varies widely, Model I, though conceptually correct, may ... relationship is linear from the maximum observed x to x = a, then Model II should be used. To proceed with the customary evaluation of Model I would be
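
    The standard recipe behind the title is short: shift the data by the required point and fit a regression with no intercept. A sketch under that reading, with invented numbers:

    ```python
    import numpy as np

    def slope_through_point(x, y, x0, y0):
        """Least-squares slope of a line forced through (x0, y0): shift the
        data by that point and fit without an intercept, b = Σx'y' / Σx'²."""
        xs, ys = np.asarray(x) - x0, np.asarray(y) - y0
        return np.sum(xs * ys) / np.sum(xs * xs)

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])
    b = slope_through_point(x, y, 0.0, 0.0)   # force the fit through the origin
    print(round(b, 3))                        # fitted line: y = b * x
    ```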

  7. Infrastructure Tsunami Could Easily Dwarf Climate Change

    NASA Astrophysics Data System (ADS)

    Lansing, Stephen

    Compared to the physical and biological sciences, so far complexity has had far less impact on mainstream social science. This is not surprising, but it is alarming because we find ourselves in the midst of a planetary-scale transition from the Holocene to the Anthropocene. We have already breached some planetary boundaries for sustainability, but those tipping points are nearly invisible from the perspective of the linear equilibrium models that continue to hold sway in social science...

  8. Free oscillations in a climate model with ice-sheet dynamics

    NASA Technical Reports Server (NTRS)

    Kallen, E.; Crafoord, C.; Ghil, M.

    1979-01-01

    A study of stable periodic solutions to a simple nonlinear model of the ocean-atmosphere-ice system is presented. The model has two dependent variables: ocean-atmosphere temperature and latitudinal extent of the ice cover. No explicit dependence on latitude is considered in the model. Hence all variables depend only on time and the model consists of a coupled set of nonlinear ordinary differential equations. The globally averaged ocean-atmosphere temperature in the model is governed by the radiation balance. The reflectivity to incoming solar radiation, i.e., the planetary albedo, includes separate contributions from sea ice and from continental ice sheets. The major physical mechanisms active in the model are (1) albedo-temperature feedback, (2) continental ice-sheet dynamics and (3) precipitation-rate variations. The model has three equilibrium solutions, two of which are linearly unstable, while one is linearly stable. For some choices of parameters, the stability picture changes and sustained, finite-amplitude oscillations arise around the previously stable equilibrium solution. The physical interpretation of these oscillations points to the possibility of internal mechanisms playing a role in glaciation cycles.
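
    A toy system in the same spirit (two coupled ODEs with an albedo feedback) can be integrated in a few lines. The functional forms and constants below are invented for illustration and are not the model of the paper; whether the toy oscillates or settles depends on the chosen parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, s):
        T, L = s                                   # temperature, ice extent
        albedo = 0.3 + 0.25 * L                    # more ice -> more reflection
        dT = (1 - albedo) - 0.6 * (1 + 0.3 * T)    # schematic radiation balance
        dL = 0.5 * (-T - (L - 0.5))                # ice grows when it is cold
        return [dT, dL]

    sol = solve_ivp(rhs, (0, 200), [0.1, 0.5])
    print(np.round(sol.y[:, -1], 3))               # late-time state of (T, L)
    ```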

  9. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2

    2012-06-15

    Highlights: ► Inexact piecewise-linearization-based fuzzy flexible programming is proposed. ► It is the first application to waste management under multiple complexities. ► It tackles nonlinear economies-of-scale effects in interval-parameter constraints. ► It estimates costs more accurately than the linear-regression-based model. ► Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.

  10. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.

  11. Linear dependence between the wavefront gradient and the masked intensity for the point source with a CCD sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Ma, Liang; Wang, Bin

    2018-01-01

    In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than the conventional AO system in architecture and can be applied in complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the mean-square gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the masked detector signal, MDS). The linear dependence between the MSG and the MDS for point-source imaging with a CCD sensor is discussed in theory and simulation in this paper. The theoretical relationship between the MSG and the MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of the MDS will deviate from the theoretical one because of detector noise, and this deviation will in turn affect the correction. The theoretical results under noise are obtained through derivation, and the linear relation between the MSG and the MDS under noise is then examined using the imaging model. Results show that the linear relation between the MSG and the MDS under noise is maintained well, which provides theoretical support for applications of the model-based WFSless system.
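
    The MDS itself is simple to compute once a focal-plane image is in hand: a radially weighted, normalized second moment of the intensity. The sketch below does this for a synthetic point-source image; the pupil geometry, mask, and aberration are invented for illustration and are not the paper's imaging model.

    ```python
    import numpy as np

    def mds(intensity, pixel_scale=1.0):
        """Masked detector signal as a radial second moment of the
        focal-plane intensity, normalized by the total flux."""
        ny, nx = intensity.shape
        y, x = np.mgrid[:ny, :nx]
        r2 = ((x - nx / 2) ** 2 + (y - ny / 2) ** 2) * pixel_scale ** 2
        return np.sum(r2 * intensity) / np.sum(intensity)

    # Point-source image: FFT of a circular pupil with a weak aberration.
    n = 128
    yy, xx = np.mgrid[:n, :n] - n // 2
    pupil = (xx**2 + yy**2 < (n // 4) ** 2).astype(float)
    phase = 0.3 * 2 * np.pi * (xx / n) ** 2 * pupil      # illustrative aberration
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase)))) ** 2
    print(round(mds(psf), 2))
    ```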

  12. Carbon dioxide stripping in aquaculture -- part III: model verification

    USGS Publications Warehouse

    Colt, John; Watten, Barnaby; Pfeiffer, Tim

    2012-01-01

    Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and the one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLa(CO2) < approximately 1.5/h, the 2-point and ASCE methods give a good fit to experimental data, but the fit breaks down at higher values of KLa(CO2). How to correct KLa(CO2) for gas-phase enrichment remains to be determined. The one-parameter linear-regression model was used to vary C*(CO2) over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed-C*(CO2) assumptions.
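
    For reference, the 2-point method evaluated above follows directly from the first-order transfer model dC/dt = KLa (C* − C), which gives KLa = ln((C* − C1)/(C* − C2)) / (t2 − t1). A sketch with invented numbers:

    ```python
    import numpy as np

    def kla_two_point(c_star, c1, t1, c2, t2):
        """Two-point estimate of the overall mass transfer coefficient from
        dC/dt = KLa (C* - C). All values below are illustrative."""
        return np.log((c_star - c1) / (c_star - c2)) / (t2 - t1)

    # Synthetic stripping run: CO2 falls from 20 mg/L toward C* = 1 mg/L.
    kla_true = 1.2                       # 1/h
    t1, t2 = 0.25, 1.0                   # h
    c = lambda t: 1.0 + (20.0 - 1.0) * np.exp(-kla_true * t)
    print(round(kla_two_point(1.0, c(t1), t1, c(t2), t2), 3))   # ~1.2
    ```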

  13. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of its important components are given. The onboard propulsion system models, the linear programming optimization, and the engine control interface are described. The PSC algorithm receives input from various computers on the aircraft, including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in the control variables. The engine model consists of a linear steady-state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point, about which the optimization process is repeated. This process continues until an overall (global) optimum is reached before the trims are applied to the controllers.
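
    The optimization step can be miniaturized to a generic linear program: a linear objective built from model sensitivities, inequality rows for linearized limits, and bounds for trim authority. The numbers below are invented placeholders, not PSC data.

    ```python
    from scipy.optimize import linprog

    # Toy trim optimization about an operating point (all values illustrative).
    c = [1.0, 0.4]                  # d(fuel flow)/d(trim) for two trims
    A_ub = [[-0.3, -0.8],           # -d(stall margin)/d(trim): keep margin above floor
            [ 0.5, -0.2]]           # d(turbine temp)/d(trim): keep temp below limit
    b_ub = [2.0, 1.5]               # available margin before each limit
    bounds = [(-1.0, 1.0)] * 2      # trim authority about the operating point

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x)                    # trims to apply before re-linearizing
    ```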

  14. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    ERIC Educational Resources Information Center

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained at the teacher level of the model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at the level-1 and level-2 models, respectively.…
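
    A minimal two-level model of this shape is a one-liner in statsmodels; the column names, simulated data, and random-intercept structure below are assumptions for illustration, not the study's data or full specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Students (level 1) nested in teachers (level 2), random intercept per teacher.
    rng = np.random.default_rng(4)
    n_teachers, n_students = 30, 25
    teacher = np.repeat(np.arange(n_teachers), n_students)
    u = rng.normal(0, 0.4, n_teachers)[teacher]       # teacher-level effect
    ses = rng.normal(size=teacher.size)               # student-level predictor
    gpa = 2.8 + 0.3 * ses + u + rng.normal(0, 0.5, teacher.size)
    df = pd.DataFrame({"gpa": gpa, "ses": ses, "teacher": teacher})

    result = smf.mixedlm("gpa ~ ses", df, groups=df["teacher"]).fit()
    print(result.params["ses"])       # fixed slope estimate, ~0.3
    print(result.cov_re)              # teacher-level variance component
    ```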

  15. Analysis of Ninety Degree Flexure Tests for Characterization of Composite Transverse Tensile Strength

    NASA Technical Reports Server (NTRS)

    OBrien, T. Kevin; Krueger, Ronald

    2001-01-01

    Finite element (FE) analysis was performed on 3-point and 4-point bending test configurations of ninety-degree-oriented glass-epoxy and graphite-epoxy composite beams to identify deviations from beam theory predictions. Both linear and geometrically non-linear analyses were performed using the ABAQUS finite element code. The 3-point and 4-point bending specimens were first modeled with two-dimensional elements. Three-dimensional finite element analyses were then performed for selected 4-point bending configurations to study the stress distribution across the width of the specimens and to compare the results to the stresses computed from two-dimensional plane strain and plane stress analyses and from beam theory. Stresses for all configurations were analyzed at load levels corresponding to the measured transverse tensile strength of the material.

  16. The appearance, motion, and disappearance of three-dimensional magnetic null points

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Nicholas A., E-mail: namurphy@cfa.harvard.edu; Parnell, Clare E.; Haynes, Andrew L.

    2015-10-15

    While theoretical models and simulations of magnetic reconnection often assume symmetry such that the magnetic null point when present is co-located with a flow stagnation point, the introduction of asymmetry typically leads to non-ideal flows across the null point. To understand this behavior, we present exact expressions for the motion of three-dimensional linear null points. The most general expression shows that linear null points move in the direction along which the magnetic field and its time derivative are antiparallel. Null point motion in resistive magnetohydrodynamics results from advection by the bulk plasma flow and resistive diffusion of the magnetic field, which allows non-ideal flows across topological boundaries. Null point motion is described intrinsically by parameters evaluated locally; however, global dynamics help set the local conditions at the null point. During a bifurcation of a degenerate null point into a null-null pair or the reverse, the instantaneous velocity of separation or convergence of the null-null pair will typically be infinite along the null space of the Jacobian matrix of the magnetic field, but with finite components in the directions orthogonal to the null space. Not all bifurcating null-null pairs are connected by a separator. Furthermore, except under special circumstances, there will not exist a straight line separator connecting a bifurcating null-null pair. The motion of separators cannot be described using solely local parameters because the identification of a particular field line as a separator may change as a result of non-ideal behavior elsewhere along the field line.

  17. Summer Research Program (1992). Summer Faculty Research Program (SFRP) Reports. Volume 2. Armstrong Laboratory

    DTIC Science & Technology

    1992-12-01

    desirable. In this study, the proposed model consists of a thick-walled, highly deformable elastic tube in which the blood flow is described by linearized ... presented a mechanical model consisting of linearized Navier-Stokes and finite elasticity equations to predict blood pooling under acceleration stress ... linear multielement model of the cardiovascular system which can calculate blood pressures and flows at any point in the cardiovascular system. It

  18. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used. It computes a point which is common to two convex polyhedra; the polyhedra intersect if and only if such a point exists. With the simplified geometrical model of Ropsus, this step also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
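
    The second step can be reproduced with any LP solver: stack the half-space constraints of both polyhedra and check feasibility under a zero objective. The geometry below is illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def polyhedra_intersect(A1, b1, A2, b2):
        """Two convex polyhedra {x: A1 x <= b1} and {x: A2 x <= b2}
        intersect iff the stacked LP is feasible (zero objective)."""
        A = np.vstack([A1, A2])
        b = np.concatenate([b1, b2])
        res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                      bounds=[(None, None)] * A.shape[1])
        return res.success          # a common point exists iff feasible

    # Two unit cubes: one at the origin, one shifted by 0.5 (overlapping).
    A_box = np.vstack([np.eye(3), -np.eye(3)])
    b1 = np.array([1, 1, 1, 0, 0, 0], dtype=float)
    b2 = b1 + np.concatenate([0.5 * np.ones(3), -0.5 * np.ones(3)])
    print(polyhedra_intersect(A_box, b1, A_box, b2))   # True
    ```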

  19. Uniform background assumption produces misleading lung EIT images.

    PubMed

    Grychtol, Bartłomiej; Adler, Andy

    2013-06-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes.

  20. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
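
    One iteration of the successive-linearization loop can be sketched generically: linearize the nonlinear dynamics at the current operating point by finite differences, then solve a small quadratic problem for the input move. The toy plant, weights, and single-step horizon below are invented simplifications, not the boiler-turbine model or the paper's MPC formulation.

    ```python
    import numpy as np

    def f(x, u):                          # toy nonlinear plant, xdot = f(x, u)
        return np.array([-x[0]**3 + x[1], -x[1] + np.tanh(u[0])])

    def linearize(x, u, eps=1e-6):
        """Finite-difference Jacobians A = df/dx, B = df/du at (x, u)."""
        A = np.column_stack([(f(x + eps * e, u) - f(x, u)) / eps for e in np.eye(2)])
        B = np.column_stack([(f(x, u + eps * e) - f(x, u)) / eps for e in np.eye(1)])
        return A, B

    x, u, dt = np.array([0.5, -0.2]), np.array([0.0]), 0.05
    x_sp = np.array([0.2, 0.1])           # set point

    A, B = linearize(x, u)
    Ad, Bd = np.eye(2) + dt * A, dt * B   # Euler discretization
    # Minimize ||x+ - x_sp||^2 + r ||du||^2 for the one-step prediction
    # x+ = Ad x + Bd (u + du); a normal-equations solve stands in for the QP.
    r = 0.1
    err = x_sp - (Ad @ x + Bd @ u)
    du = np.linalg.solve(Bd.T @ Bd + r * np.eye(1), Bd.T @ err)
    print(np.round(u + du, 3))            # input to apply, then re-linearize
    ```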

  1. A non-linear dynamical approach to belief revision in cognitive behavioral therapy

    PubMed Central

    Kronemyer, David; Bystritsky, Alexander

    2014-01-01

    Belief revision is the key change mechanism underlying the psychological intervention known as cognitive behavioral therapy (CBT). It both motivates and reinforces new behavior. In this review we analyze and apply a novel approach to this process based on AGM theory of belief revision, named after its proponents, Carlos Alchourrón, Peter Gärdenfors and David Makinson. AGM is a set-theoretical model. We reconceptualize it as describing a non-linear, dynamical system that occurs within a semantic space, which can be represented as a phase plane comprising all of the brain's attentional, cognitive, affective and physiological resources. Triggering events, such as anxiety-producing or depressing situations in the real world, or their imaginal equivalents, mobilize these assets so they converge on an equilibrium point. A preference function then evaluates and integrates evidentiary data associated with individual beliefs, selecting some of them and combining them into a belief set, which is a metastable state. Belief sets evolve in time from one metastable state to another. In the phase space, this evolution creates a heteroclinic channel. AGM regulates this process and characterizes the outcome at each equilibrium point. Its objective is to define the necessary and sufficient conditions for belief revision by simultaneously minimizing the set of new beliefs that have to be adopted, and the set of old beliefs that have to be discarded or reformulated. Using AGM, belief revision can be modeled using three (and only three) fundamental syntactical operations performed on belief sets, which are expansion; revision; and contraction. Expansion is like adding a new belief without changing any old ones. Revision is like adding a new belief and changing old, inconsistent ones. Contraction is like changing an old belief without adding any new ones. We provide operationalized examples of this process in action. PMID:24860491
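
    The three syntactic operations can be mimicked on finite sets of literals; the toy below encodes revision via the Levi identity (contract the negation, then expand). Real AGM operates on logically closed theories under rationality postulates, so this sketch only conveys the set-theoretic flavor, and the example beliefs are invented.

    ```python
    # Beliefs as string literals; "~p" denotes the negation of "p".
    def neg(p):
        return p[1:] if p.startswith("~") else "~" + p

    def expand(K, p):                       # add a belief, change nothing else
        return K | {p}

    def contract(K, p):                     # give up a belief, add nothing
        return K - {p}

    def revise(K, p):                       # Levi identity: contract ~p, expand p
        return expand(contract(K, neg(p)), p)

    K = {"anxious_in_crowds", "crowds_are_dangerous"}
    K = revise(K, "~crowds_are_dangerous")  # therapy-style belief revision
    print(sorted(K))
    ```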

  2. Effects of laser in situ keratomileusis on mental health-related quality of life.

    PubMed

    Tounaka-Fujii, Kaoru; Yuki, Kenya; Negishi, Kazuno; Toda, Ikuko; Abe, Takayuki; Kouyama, Keisuke; Tsubota, Kazuo

    2016-01-01

    The aims of our study were to investigate whether laser in situ keratomileusis (LASIK) improves health-related quality of life (HRQoL) and to identify factors that affect postoperative HRQoL. A total of 213 Japanese patients who underwent primary LASIK were analyzed in this study. The average age of patients was 35.0±9.4 years. The subjects were asked to answer questions regarding subjective quality of vision, satisfaction, and quality of life (using the Japanese version of 36-Item Short Form Health Survey Version 2) at three time points: before LASIK, 1 month after LASIK, and 6 months after LASIK. Longitudinal changes over 6 months in the outputs of the mental component summary (MCS) score and the physical component summary (PCS) score from the 36-Item Short Form Health Survey Version 2 questionnaire were compared between time points using a linear mixed-effects model. Delta MCS and PCS were calculated by subtracting the postoperative score (1 month after LASIK) from the preoperative score. Preoperative and postoperative factors associated with a change in the MCS score or PCS score were evaluated via a linear regression model. The preoperative MCS score was 51.0±9.4 and increased to 52.0±9.8 and 51.5±9.6 at 1 month and 6 months after LASIK, respectively, and the trend for the change from baseline in MCS through 6 months was significant (P = 0.03). The PCS score did not change following LASIK. Delta MCS was significantly negatively associated with preoperative spherical equivalent, axial length, and postoperative quality of vision, after adjusting for potential confounding factors. Mental HRQoL is not lost with LASIK, and LASIK may improve mental HRQoL. Preoperative axial length may predict postoperative mental HRQoL.

  3. The topology of non-linear global carbon dynamics: from tipping points to planetary boundaries

    NASA Astrophysics Data System (ADS)

    Anderies, J. M.; Carpenter, S. R.; Steffen, Will; Rockström, Johan

    2013-12-01

    We present a minimal model of land use and carbon cycle dynamics and use it to explore the relationship between non-linear dynamics and planetary boundaries. Only the most basic interactions between land cover and terrestrial, atmospheric, and marine carbon stocks are considered in the model. Our goal is not to predict global carbon dynamics as it occurs in the actual Earth System. Rather, we construct a conceptually reasonable heuristic model of a feedback system between different carbon stocks that captures the qualitative features of the actual Earth System and use it to explore the topology of the boundaries of what can be called a ‘safe operating space’ for humans. The model analysis illustrates the existence of dynamic, non-linear tipping points in carbon cycle dynamics and the potential complexity of planetary boundaries. Finally, we use the model to illustrate some challenges associated with navigating planetary boundaries.

  4. TiO2 dye sensitized solar cell (DSSC): linear relationship of maximum power point and anthocyanin concentration

    NASA Astrophysics Data System (ADS)

    Ahmadian, Radin

    2010-09-01

    This study investigated the relationship between anthocyanin concentration from different organic fruit species and the output voltage and current of a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solutions were made by crushing fresh fruits with different anthocyanin contents in 2 mL of de-ionized water and filtering. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. Full current-voltage (I-V) curves were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration in the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration, with a p-value of 0.007. Fruits like blueberry and black raspberry, with the highest anthocyanin content, generated higher MPP. In a DSSC, a linear model may predict MPP based on the anthocyanin concentration. This model is a first step toward finding organic anthocyanin sources in nature with the highest dye concentration for generating energy.
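
    The analysis pipeline described here (locate the MPP on each measured I-V curve, then regress MPP on concentration) is simple to sketch. The numbers below are invented for illustration, not the study's measurements.

    ```python
    # Sketch: find the maximum power point from I-V pairs and regress
    # MPP on anthocyanin concentration (all values hypothetical).
    import numpy as np
    from scipy import stats

    def max_power_point(voltage, current):
        """Return the maximum of P = V * I over the measured I-V pairs."""
        power = np.asarray(voltage) * np.asarray(current)
        return power.max()

    # hypothetical per-fruit data: concentration (mg/100 g) and MPP (mW)
    concentration = np.array([25.0, 48.0, 95.0, 163.0, 245.0])
    mpp = np.array([0.11, 0.19, 0.33, 0.52, 0.78])

    slope, intercept, r, p, stderr = stats.linregress(concentration, mpp)
    print(f"MPP ≈ {slope:.4f} * concentration + {intercept:.4f} (p = {p:.3f})")
    ```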

  5. Gas Dynamics of a Recessed Nozzle in Its Displacement in the Radial Direction

    NASA Astrophysics Data System (ADS)

    Volkov, K. N.; Denisikhin, S. V.; Emel'yanov, V. N.

    2017-07-01

    Numerical simulation of gasdynamic processes accompanying the operation of the recessed nozzle of a solid-propellant rocket motor in its linear displacement is carried out. Reynolds-averaged Navier-Stokes equations closed using the equations of a k-ɛ turbulence model are used for calculations. The calculations are done for different rates of flow of the gas in the main channel and in the over-nozzle gap, and also for different displacements of the nozzle from an axisymmetric position. The asymmetry of geometry gives rise to a complicated spatial flow pattern characterized by the presence of singular points of spreading and by substantially inhomogeneous velocity and pressure distributions. The vortex flow pattern resulting from the linear displacement of the nozzle from an axisymmetric position is compared with the data of experimental visualization. The change in the vortex pattern of the flow and in the position of the singular points as a function of the flow coefficient and the displacement of the nozzle from the symmetry axis is discussed.

  6. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    NASA Astrophysics Data System (ADS)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modification of the attitude parameters of linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (corresponding to changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous position of the sensors remains fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.

  7. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives is generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
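
    A simplified sketch of the hit-and-run idea for a near-optimal region defined by feasibility plus an objective tolerance. The toy objective and box constraints are hypothetical, and the line search below uses plain rejection rather than the null-space transformation and slice sampling described in the abstract.

    ```python
    # Simplified hit-and-run sampler over {x : feasible(x) and
    # objective(x) <= f_opt + tol}. Problem functions are toy stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):                 # hypothetical non-convex objective
        return (x[0] ** 2 - 1) ** 2 + x[1] ** 2

    def feasible(x):                  # hypothetical box constraints
        return np.all(x >= -2) and np.all(x <= 2)

    f_opt, tol = 0.0, 0.5             # optimal value and near-optimal tolerance

    def in_region(x):
        return feasible(x) and objective(x) <= f_opt + tol

    def hit_and_run(x, n_samples, max_step=2.0, tries=50):
        samples = []
        while len(samples) < n_samples:
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)           # random direction on unit sphere
            for _ in range(tries):           # run a random distance on the line
                candidate = x + rng.uniform(-max_step, max_step) * d
                if in_region(candidate):
                    x = candidate
                    samples.append(x)
                    break
        return np.array(samples)

    alternatives = hit_and_run(np.array([1.0, 0.0]), n_samples=500)
    print(alternatives.shape, alternatives.mean(axis=0))
    ```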

  8. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    PubMed

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Application of reaction-diffusion models to cell patterning in Xenopus retina. Initiation of patterns and their biological stability.

    PubMed

    Shoaf, S A; Conway, K; Hunt, R K

    1984-08-07

    We have examined the behavior of two reaction-diffusion models, originally proposed by Gierer & Meinhardt (1972) and by Kauffman, Shymko & Trabert (1978), for biological pattern formation. Calculations are presented for pattern formation on a disc (approximating the geometry of a number of embryonic anlagen, including the frog eye rudiment), emphasizing the sensitivity of patterns to changes in initial conditions and to perturbations in the geometry of the morphogen-producing space. Analysis of the linearized equations from the models enabled us to select appropriate parameters and disc size for pattern growth. A computer-implemented finite element method was used to solve the non-linear model equations reiteratively. For the Gierer-Meinhardt model, initial activation (varying in size over two orders of magnitude) of one point on the disc's edge was sufficient to generate the primary gradient. Various parts of the disc were removed (remaining only as diffusible space) from the morphogen-producing cycle to investigate the effects of cells dropping out of the cycle due to cell death or malfunction (single point removed) or differentiation (center removed), as occur in the Xenopus eye rudiment. The resulting patterns had the same general shape and amplitude as normal gradients. A two-fold increase in disc size also did not affect the pattern-generating ability of the model. Disc fragments bearing their primary gradient patterns were fused (with gradients in opposite directions, but each parallel to the fusion line). The resulting patterns generated by the model showed many similarities to the results of "compound eye" experiments in Xenopus. Similar patterns were obtained with the model of Kauffman's group (1978), but we found the pattern to be less stable when subjected to simulations of central differentiation. However, removal of a single point from the morphogen cycle (cell death) did not result in any change. The sensitivity of the Kauffman et al. model to shape perturbations is not surprising, since the model was originally designed to use shape and increasing size during growth to generate a sequence of transient patterns. However, the Gierer-Meinhardt model is remarkably stable even when subjected to a wide range of perturbations in the diffusible space, thus allowing it to cope with normal biological variability, and offering an exciting range of possibilities for reaction-diffusion models as mechanisms underlying the spatial patterns of tissue structures.
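
    For readers unfamiliar with the Gierer-Meinhardt equations, the following illustrative 1-D sketch integrates the activator-inhibitor dynamics with explicit finite differences and periodic boundaries. Parameters are chosen to satisfy the Turing instability conditions around the homogeneous steady state but are illustrative only; the paper used finite elements on a disc.

    ```python
    # Illustrative 1-D Gierer-Meinhardt integration (parameters illustrative):
    #   da/dt = Da a'' + a^2/h - mu_a a,   dh/dt = Dh h'' + a^2 - mu_h h
    import numpy as np

    n, dx, dt, steps = 128, 0.5, 0.005, 40000
    Da, Dh = 0.2, 10.0            # short-range activator, long-range inhibitor
    mu_a, mu_h = 1.0, 2.0         # decay rates

    rng = np.random.default_rng(1)
    a = 2.0 + 0.02 * rng.normal(size=n)  # perturbed steady state (a = h = 2)
    h = 2.0 * np.ones(n)

    def lap(u):
        """Second difference with periodic boundaries."""
        return (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2

    for _ in range(steps):
        a, h = (a + dt * (Da * lap(a) + a * a / h - mu_a * a),
                h + dt * (Dh * lap(h) + a * a - mu_h * h))

    print("activator peaks at cells:", np.where(a > a.mean() + a.std())[0])
    ```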

  10. Applying linear programming to estimate fluxes in ecosystems or food webs: An example from the herpetological assemblage of the freshwater Everglades

    USGS Publications Warehouse

    Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.

    2001-01-01

    We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
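
    The core computation (adjust prior flux estimates so that mass balance holds while deviations from the point estimates stay minimal) can be written as a linear program with auxiliary deviation variables. The tiny three-flux web below is hypothetical: two fluxes enter a compartment and one leaves it.

    ```python
    # Sketch: minimize total absolute deviation from prior flux estimates
    # subject to mass balance, via scipy's linprog (toy 3-flux web).
    import numpy as np
    from scipy.optimize import linprog

    x0 = np.array([5.0, 3.0, 9.0])        # field/literature point estimates
    A_bal = np.array([[1.0, 1.0, -1.0]])  # mass balance: in - out = 0

    n = x0.size
    # variables z = [x, t]; minimize sum(t) with |x - x0| <= t, A_bal @ x = 0
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.concatenate([x0, -x0])
    A_eq = np.hstack([A_bal, np.zeros((1, n))])
    b_eq = np.zeros(1)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (2 * n))
    print("balanced fluxes:", res.x[:n])
    ```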

  11. Development of Driver/Vehicle Steering Interaction Models for Dynamic Analysis

    DTIC Science & Technology

    1988-12-01

    Report excerpt (table-of-figures residue omitted; figure titles: "The Linearized Single-Unit Vehicle Model" and "Interpretation of the Single-Unit Model"): The starting point for the driver modelling research conducted under this project was a linear preview control model originally proposed by MacAdam. "… regardless of its origin, can pass at least the elementary validation test of exhibiting 'cross-over model'-like behavior in the vicinity of its …"

  12. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
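
    A hedged sketch of the comparison described above, using one common damped-trend method (Holt's linear trend with damping) against a straight-line regression forecast, scored by mean absolute percent error (MAPE). The 62-week series is synthetic, not the study's glucometric data.

    ```python
    # Sketch: damped-trend exponential smoothing vs. linear-regression
    # forecasting of a synthetic weekly mean glucose series, scored by MAPE.
    import numpy as np
    from statsmodels.tsa.holtwinters import Holt

    rng = np.random.default_rng(0)
    weeks = np.arange(62)
    glucose = 180 - 0.4 * weeks + rng.normal(0, 5, size=62)  # hypothetical

    train, test = glucose[:48], glucose[48:]

    fit = Holt(train, damped_trend=True).fit()       # damped trend model
    forecast = fit.forecast(len(test))

    coef = np.polyfit(weeks[:48], train, 1)          # linear regression
    linear = np.polyval(coef, weeks[48:])

    mape = lambda actual, pred: np.mean(np.abs((actual - pred) / actual)) * 100
    print(f"damped trend MAPE: {mape(test, forecast):.1f}%  "
          f"linear MAPE: {mape(test, linear):.1f}%")
    ```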

  13. Comparison of closed loop model with flight test results

    NASA Technical Reports Server (NTRS)

    George, F. L.

    1981-01-01

    An analytic technique capable of predicting the landing characteristics of proposed aircraft configurations in the early stages of design was developed. In this analysis, a linear pilot-aircraft closed loop model was evaluated using experimental data generated with the NT-33 variable stability in-flight simulator. The pilot dynamics are modeled as inner and outer servo loop closures around aircraft pitch attitude and altitude rate of change, respectively. The landing flare maneuver is of particular interest, as recent experience with military and other highly augmented vehicles shows this task to be relatively demanding, and potentially a critical design point. A unique feature of the pilot model is the incorporation of an internal model of the pilot's desired flight path for the flare maneuver.

  14. Evaluation in a Research and Development Context.

    ERIC Educational Resources Information Center

    Cooley, William W.

    Educational research and development (R&D) has often been characterized as a neat, linear sequence of discrete steps, moving from research through development to evaluation and dissemination. Although the inadequacies of such linear models of educational research and development have been pointed out previously, these models have been so much…

  15. Predicting protein folding rate change upon point mutation using residue-level coevolutionary information.

    PubMed

    Mallik, Saurav; Das, Smita; Kundu, Sudip

    2016-01-01

    Change in the folding kinetics of globular proteins upon point mutation is crucial to a wide spectrum of biological research, such as protein misfolding, toxicity, and aggregation. Here we seek to address whether residue-level coevolutionary information of globular proteins can be informative about folding rate changes upon point mutation. Generating residue-level coevolutionary networks of globular proteins, we analyze three parameters: relative coevolution order (rCEO), network density (ND), and characteristic path length (CPL). A point mutation is considered equivalent to a node deletion in this network, and the respective percentage changes in rCEO, ND, and CPL were found to be linearly correlated (0.84, 0.73, and −0.61, respectively) with experimental folding rate changes. The three parameters predict the folding rate change upon a point mutation with standard errors of 0.031, 0.045, and 0.059, respectively. © 2015 Wiley Periodicals, Inc.
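
    The node-deletion idea is easy to demonstrate on a toy graph. The sketch below uses a random graph as a stand-in for a residue-level coevolutionary network and records the percentage changes in network density (ND) and characteristic path length (CPL) after deleting one node; rCEO is omitted because its definition is specific to the paper.

    ```python
    # Sketch: treat a point mutation as deletion of one node and measure
    # the percentage change in ND and CPL (toy random graph as stand-in).
    import networkx as nx

    G = nx.erdos_renyi_graph(60, 0.15, seed=1)

    def nd_cpl(graph):
        giant = graph.subgraph(max(nx.connected_components(graph), key=len))
        return nx.density(graph), nx.average_shortest_path_length(giant)

    nd0, cpl0 = nd_cpl(G)
    mutant = G.copy()
    mutant.remove_node(10)            # "mutate" residue 10
    nd1, cpl1 = nd_cpl(mutant)

    print(f"delta ND:  {100 * (nd1 - nd0) / nd0:+.2f}%")
    print(f"delta CPL: {100 * (cpl1 - cpl0) / cpl0:+.2f}%")
    ```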

  16. Ratio-based estimators for a change point in persistence.

    PubMed

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from [Formula: see text] to [Formula: see text] or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with an upper bound given by the true break date when persistence changes from [Formula: see text] to [Formula: see text]. A Monte Carlo study confirms the large-sample downward bias and also finds substantial biases in moderately sized samples, partly due to properties at the end points of the search interval.

  17. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot, in which each observation in each risk set appears; a subject level adjusted variable (SLAV) plot, in which each subject is represented by one point; and an event level adjusted variable (ELAV) plot, in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  18. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  19. A dynamic model for costing disaster mitigation policies.

    PubMed

    Altay, Nezih; Prasad, Sameer; Tata, Jasmine

    2013-07-01

    The optimal level of investment in mitigation strategies is usually difficult to ascertain in the context of disaster planning. This research develops a model to provide such direction by relying on cost of quality literature. This paper begins by introducing a static approach inspired by Joseph M. Juran's cost of quality management model (Juran, 1951) to demonstrate the non-linear trade-offs in disaster management expenditure. Next it presents a dynamic model that includes the impact of dynamic interactions of the changing level of risk, the cost of living, and the learning/investments that may alter over time. It illustrates that there is an optimal point that minimises the total cost of disaster management, and that this optimal point moves as governments learn from experience or as states get richer. It is hoped that the propositions contained herein will help policymakers to plan, evaluate, and justify voluntary disaster mitigation expenditures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  20. An approach to multivariable control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.

  1. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
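
    The general pattern (an incomplete-LU preconditioner wrapped around a Krylov solver for a nonsymmetric system) can be sketched with SciPy. This uses GMRES and scalar ILU on a toy matrix, standing in for the paper's block-structured, truncated generalized-conjugate-gradient scheme; it is an analogy, not the paper's algorithm.

    ```python
    # Sketch: ILU-preconditioned Krylov solve of a nonsymmetric sparse system.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spilu, gmres, LinearOperator

    n = 200
    A = sp.diags([-1.0, 4.0, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spilu(A, drop_tol=0.0, fill_factor=1)   # no-fill-style incomplete LU
    M = LinearOperator((n, n), matvec=ilu.solve)  # preconditioner as operator

    x, info = gmres(A, b, M=M)
    print("converged" if info == 0 else f"info={info}",
          "residual norm:", np.linalg.norm(A @ x - b))
    ```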

  2. Tumor evolution: Linear, branching, neutral or punctuated?☆

    PubMed Central

    Davis, Alexander; Gao, Ruli; Navin, Nicholas

    2017-01-01

    Intratumor heterogeneity has been widely reported in human cancers, but our knowledge of how this genetic diversity emerges over time remains limited. A central challenge in studying tumor evolution is the difficulty in collecting longitudinal samples from cancer patients. Consequently, most studies have inferred tumor evolution from single time-point samples, providing very indirect information. These data have led to several competing models of tumor evolution: linear, branching, neutral and punctuated. Each model makes different assumptions regarding the timing of mutations and selection of clones, and therefore has different implications for the diagnosis and therapeutic treatment of cancer patients. Furthermore, emerging evidence suggests that models may change during tumor progression or operate concurrently for different classes of mutations. Finally, we discuss data that supports the theory that most human tumors evolve from a single cell in the normal tissue. This article is part of a Special Issue entitled: Evolutionary principles - heterogeneity in cancer?, edited by Dr. Robert A. Gatenby. PMID:28110020

  3. Wear-caused deflection evolution of a slide rail, considering linear and non-linear wear models

    NASA Astrophysics Data System (ADS)

    Kim, Dongwook; Quagliato, Luca; Park, Donghwi; Murugesan, Mohanraj; Kim, Naksoo; Hong, Seokmoo

    2017-05-01

    The research presented in this paper details an experimental-numerical approach for the quantitative correlation between wear and end-point deflection in a slide rail. Focusing on slide rails utilized in white-goods applications, the aim is to evaluate the number of cycles the slide rail can operate for, under different load conditions, before it should be replaced due to unacceptable end-point deflection. Two formulations are utilized to describe the wear: the Archard model for linear wear and the Lemaitre damage model for nonlinear wear. The linear wear gradually reduces the surface of the slide rail, whereas the nonlinear one accounts for surface element deletion (e.g., due to pitting). To determine the constants used in the wear models, a simple tension test and a sliding wear test, utilizing a purpose-designed test machine, were carried out. A full slide rail model simulation, including both the linear and non-linear wear models, was implemented in ABAQUS, and the results were compared with those of real rails under different load conditions, provided by the rail manufacturer. The comparison between the numerically estimated and the real rail results proved the reliability of the developed numerical model, limiting the error to within ±10%. The proposed approach allows prediction of the displacement vs. cycle curves, parametrized for different loads, and, based on a chosen failure criterion, prediction of the lifetime of the rail.
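
    For reference, the linear (Archard) wear law mentioned above relates worn volume to load, sliding distance, and hardness as V = k F s / H. The sketch below accumulates sliding distance over open-close cycles; all constants are illustrative, not the paper's fitted values.

    ```python
    # Archard wear law: worn volume V = k * F * s / H (illustrative constants).
    def archard_wear_volume(k, load_n, sliding_m, hardness_pa):
        """Wear volume in m^3 for load in N, distance in m, hardness in Pa."""
        return k * load_n * sliding_m / hardness_pa

    k = 1.0e-4            # dimensionless wear coefficient (hypothetical)
    F = 50.0              # normal load, N
    s_per_cycle = 0.4     # sliding distance per open-close cycle, m
    H = 1.2e9             # surface hardness, Pa

    for cycles in (1_000, 10_000, 50_000):
        v = archard_wear_volume(k, F, cycles * s_per_cycle, H)
        print(f"{cycles:>6} cycles: worn volume ≈ {v * 1e9:.2f} mm^3")
    ```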

  4. Quantum mechanical/molecular mechanical/continuum style solvation model: linear response theory, variational treatment, and nuclear gradients.

    PubMed

    Li, Hui

    2009-11-14

    Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductorlike polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.

  5. Application of glas laser altimetry to detect elevation changes in East Antarctica

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Tong, X.; Li, R.

    2013-10-01

    In this paper, the use of the ICESat/GLAS laser altimeter for estimating multi-temporal elevation changes on polar ice sheets is addressed. Due to non-overlapping laser spots during repeat passes, interpolation methods are required to make comparisons. After reviewing the main methods described in the literature (crossover point analysis, cross-track DEM projection, and space-temporal regression), the last was chosen for its capability of providing more elevation change rate measurements. The standard implementation of the space-temporal linear regression technique has been revisited and improved to better cope with outliers and to check the estimability of the model parameters. GLAS data over the PANDA route in East Antarctica were used for testing. The results are physically meaningful, confirming the trend reported in the literature of constant snow accumulation in the area during the past two decades, unlike most of the continent, which has been losing mass.

  6. lidar change detection using building models

    NASA Astrophysics Data System (ADS)

    Kim, Angela M.; Runyon, Scott C.; Jalobeanu, Andre; Esterline, Chelsea H.; Kruse, Fred A.

    2014-06-01

    Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real-world airborne collects. Two physical building models were used to explore various aspects of the change detection process. The first model was a 1:250-scale representation of the Naval Postgraduate School campus in Monterey, CA, constructed from Lego blocks and scanned in a laboratory setting using both the FARO and RIEGL. The second model, at 1:8-scale, consisted of large cardboard boxes placed outdoors and scanned from rooftops of adjacent buildings using the RIEGL. A point-to-point change detection scheme was applied directly to the point-cloud datasets. In the model-to-model change detection scheme, changes were detected by comparing Digital Surface Models (DSMs). The use of physical models allowed analysis of the effects of changes in scanner and scanning geometry, and of the performance of the change detection methods on different types of changes, including building collapse or subsidence, construction, and shifts in location. Results indicate that at low false-alarm rates, the point-to-point method slightly outperforms the model-to-model method. The point-to-point method is also less sensitive to misregistration errors in the data. Best results are obtained when the baseline and change datasets are collected using the same LiDAR system and collection geometry.
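
    A minimal sketch of the point-to-point scheme: for each point in the later cloud, find the nearest neighbor in the baseline cloud and flag large distances as change. The two clouds below are synthetic stand-ins for registered LiDAR scans, and the threshold is a hypothetical tuning parameter.

    ```python
    # Sketch: nearest-neighbor point-to-point change detection (synthetic data).
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    baseline = rng.uniform(0, 10, size=(5000, 3))
    changed = baseline + rng.normal(0, 0.02, size=baseline.shape)  # noise
    changed[:200, 2] += 1.5               # simulate new construction in z

    tree = cKDTree(baseline)
    dist, _ = tree.query(changed, k=1)

    threshold = 0.5                       # meters; tune to false-alarm rate
    flags = dist > threshold
    print(f"{flags.sum()} of {len(changed)} points flagged as changed")
    ```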

  7. A Direct Method for Fuel Optimal Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Cooley, D. S.; Guzman, Jose J.

    2005-01-01

    We present a method to solve the impulsive minimum-fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a non-linear dynamics model and parameterize the problem to make the method applicable to multiple flight regimes, including low-Earth orbits, highly-elliptic orbits (HEO), Lagrange point orbits, and interplanetary trajectories. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations and large constellations. Semianalytical derivatives are derived for the changes in the total ΔV with respect to changes in the independent variables. We also apply a set of constraints to ensure that the fuel expenditure is equalized over the spacecraft in formation. We conclude with several examples and present optimal maneuver sequences for both a HEO and a libration point formation.

  8. A 1-D model of the nonlinear dynamics of the human lumbar intervertebral disc

    NASA Astrophysics Data System (ADS)

    Marini, Giacomo; Huber, Gerd; Püschel, Klaus; Ferguson, Stephen J.

    2017-01-01

    Lumped parameter models of the spine have been developed to investigate its response to whole body vibration. However, these models assume the behaviour of the intervertebral disc to be linear-elastic. Recently, the authors have reported on the nonlinear dynamic behaviour of the human lumbar intervertebral disc. This response was shown to be dependent on the applied preload and the amplitude of the stimuli. However, the mechanical properties of a standard linear-elastic model do not depend on the current deformation state of the system. The aim of this study was therefore to develop a model that is able to describe the axial, nonlinear quasi-static response and to predict the nonlinear dynamic characteristics of the disc. The ability to adapt the model to an individual disc's response was a specific focus of the study, with model validation performed against prior experimental data. The influence of the numerical parameters used in the simulations was investigated. The developed model exhibited an axial quasi-static and dynamic response which agreed well with the corresponding experiments. However, the model needs further improvement to capture additional peculiar characteristics of the system dynamics, such as the shift in the mean point of oscillation exhibited by the specimens when oscillating in the region of nonlinear resonance. Reference time steps were identified for the specific integration scheme. The study demonstrated that taking into account the nonlinear-elastic behaviour typical of the intervertebral disc results in a predicted system oscillation much closer to the physiological response than that provided by linear-elastic models. For dynamic analysis, the use of standard linear-elastic models should be avoided, or restricted to cases where the amplitude of the stimuli is relatively small.

  9. A new approach to assess COPD by identifying lung function break-points

    PubMed Central

    Eriksson, Göran; Jarenbäck, Linnea; Peterson, Stefan; Ankerst, Jaro; Bjermer, Leif; Tufvesson, Ellen

    2015-01-01

    Purpose COPD is a progressive disease which can take different routes, leading to great heterogeneity. The aim of the post-hoc analysis reported here was to perform continuous analyses of advanced lung function measurements, using linear and nonlinear regressions. Patients and methods Fifty-one COPD patients with mild to very severe disease (Global Initiative for Chronic Obstructive Lung Disease [GOLD] Stages I–IV) and 41 healthy smokers were investigated post-bronchodilation by flow-volume spirometry, body plethysmography, diffusion capacity testing, and impulse oscillometry. The relationship between COPD severity, based on forced expiratory volume in 1 second (FEV1), and different lung function parameters was analyzed by a flexible nonparametric method, linear regression, and segmented linear regression with break-points. Results Most lung function parameters were nonlinear in relation to spirometric severity. Parameters related to volume (residual volume, functional residual capacity, total lung capacity, diffusion capacity of the lung for carbon monoxide, and diffusion capacity of the lung for carbon monoxide/alveolar volume) and reactance (reactance area and reactance at 5 Hz) were segmented, with break-points at 60%–70% of FEV1. FEV1/forced vital capacity (FVC) and resonance frequency had break-points around 80% of FEV1, while many resistance parameters had break-points below 40%. The slopes in percent predicted differed; resistance at 5 Hz minus resistance at 20 Hz had a linear slope change of −5.3 per unit FEV1, while residual volume had no slope change above, and a change of −3.3 per unit FEV1 below, its break-point of 61%. Conclusion Continuous analyses of different lung function parameters over the spirometric COPD severity range gave valuable information in addition to categorical analyses. Parameters related to volume, diffusion capacity, and reactance showed break-points around 65% of FEV1, indicating that air trapping starts to dominate in moderate COPD (FEV1 = 50%–80%). This may have an impact on the patient's management plan and the selection of patients and/or outcomes in clinical research. PMID:26508849
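
    Segmented linear regression with an unknown break-point, as used above, can be sketched by grid search over candidate breaks with a continuous hinge basis. The data below are synthetic stand-ins for a volume-related parameter (% predicted) versus FEV1 (% predicted), with a true break placed at 65%.

    ```python
    # Sketch: fit a continuous two-segment linear model with an unknown
    # break-point by grid search (synthetic data, illustrative break at 65%).
    import numpy as np

    rng = np.random.default_rng(2)
    fev1 = rng.uniform(20, 100, 150)
    true_break = 65.0
    y = (np.where(fev1 < true_break, 150 - 3.0 * (fev1 - true_break), 150)
         + rng.normal(0, 8, fev1.size))

    def fit_segmented(x, y, candidates):
        best = None
        for b in candidates:
            # basis: intercept, x, and hinge term max(0, b - x) below the break
            X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, b - x)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ beta) ** 2)
            if best is None or sse < best[0]:
                best = (sse, b, beta)
        return best

    sse, break_pt, beta = fit_segmented(fev1, y, np.arange(30, 95, 0.5))
    print(f"estimated break-point: {break_pt:.1f}% of FEV1")
    ```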

  10. Simulation of flexible appendage interactions with Mariner Venus/Mercury attitude control and science platform pointing

    NASA Technical Reports Server (NTRS)

    Fleischer, G. E.

    1973-01-01

    A new computer subroutine, which solves the attitude equations of motion for any vehicle idealized as a topological tree of hinge-connected rigid bodies, is used to simulate and analyze science instrument pointing control interaction with a flexible Mariner Venus/Mercury (MVM) spacecraft. The subroutine's user options include linearized or partially linearized hinge-connected models whose computational advantages are demonstrated for the MVM problem. Results of the pointing control/flexible vehicle interaction simulations, including imaging experiment pointing accuracy predictions and implications for MVM science sequence planning, are described in detail.

  11. The Component Slope Linear Model for Calculating Intensive Partial Molar Properties: Application to Waste Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, Jacob G.

    2013-01-11

    Partial molar properties are the changes in a property of a mixture occurring when the fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property for that constituent. Plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH)4-H2O system, a simplified analog of Hanford nuclear waste. The partial molar properties of H2O, NaOH, and NaAl(OH)4 are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.

  12. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  13. Economic incentives and diagnostic coding in a public health care system.

    PubMed

    Anthun, Kjartan Sarheim; Bjørngaard, Johan Håkon; Magnussen, Jon

    2017-03-01

    We analysed the association between economic incentives and diagnostic coding practice in the Norwegian public health care system. Data included 3,180,578 hospital discharges in Norway covering the period 1999-2008. For reimbursement purposes, all discharges are grouped in diagnosis-related groups (DRGs). We examined pairs of DRGs where the addition of one or more specific diagnoses places the patient in a complicated rather than an uncomplicated group, yielding higher reimbursement. The economic incentive was measured as the potential gain in income by coding a patient as complicated, and we analysed the association between this gain and the share of complicated discharges within the DRG pairs. Using multilevel linear regression modelling, we estimated both differences between hospitals for each DRG pair and changes within hospitals for each DRG pair over time. Over the whole period, a one-DRG-point difference in price was associated with an increased share of complicated discharges of 14.2 (95 % confidence interval [CI] 11.2-17.2) percentage points. However, a one-DRG-point change in prices between years was only associated with a 0.4 (95 % CI [Formula: see text] to 1.8) percentage point change of discharges into the most complicated diagnostic category. Although there was a strong increase in complicated discharges over time, this was not as closely related to price changes as expected.

  14. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    PubMed Central

    Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.

    2014-01-01

    We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157

  15. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.

    PubMed

    Flassig, R J; Sundmacher, K

    2012-12-01

    Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. flassig@mpi-magdeburg.mpg.de Supplementary data are available at Bioinformatics online.
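
    The sigma-point idea (approximate a response PDF by propagating a small, deterministically chosen set of parameter points through the nonlinear model) can be sketched with the standard unscented transform. The scalar "model" and the parameter distribution below are hypothetical stand-ins, not the paper's reaction networks or its MATLAB/AMPL implementation.

    ```python
    # Sketch: sigma-point (unscented) propagation of parameter uncertainty
    # through a hypothetical nonlinear response.
    import numpy as np

    def model_response(theta):
        """Hypothetical nonlinear response at a fixed time point."""
        k1, k2 = theta
        return k1 * np.exp(-k2) + k1 ** 2

    def sigma_points(mean, cov, kappa=1.0):
        n = mean.size
        L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
        pts = ([mean] + [mean + L[:, i] for i in range(n)]
                      + [mean - L[:, i] for i in range(n)])
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    mean = np.array([1.0, 0.5])
    cov = np.diag([0.04, 0.01])       # widely distributed parameters (assumed)

    pts, w = sigma_points(mean, cov)
    responses = np.array([model_response(p) for p in pts])
    resp_mean = w @ responses
    resp_var = w @ (responses - resp_mean) ** 2
    print(f"response mean ≈ {resp_mean:.3f}, variance ≈ {resp_var:.4f}")
    ```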

  16. Chemoviscosity modeling for thermosetting resins - I

    NASA Technical Reports Server (NTRS)

    Hou, T. H.

    1984-01-01

    A new analytical model for chemoviscosity variation during the cure of thermosetting resins was developed. The model is derived by modifying the widely used WLF (Williams-Landel-Ferry) theory in polymer rheology. The major assumptions are that the rate of reaction is diffusion controlled and inversely proportional to the viscosity of the medium over the entire cure cycle. The resulting first-order nonlinear differential equation is solved numerically, and the model predictions compare favorably with experimental data for EPON 828/Agent U obtained on a Rheometrics System 4 rheometer. The model describes chemoviscosity over a range of six orders of magnitude under isothermal curing conditions. The extremely non-linear chemoviscosity profile for a dynamic heating cure cycle is predicted as well. The model is also shown to predict changes in the glass transition temperature of the thermosetting resin during cure. The physical significance of this prediction is unclear at the present time, however, and further research is required. From the chemoviscosity simulation point of view, the technique of establishing an analytical model as described here is easily applied to any thermosetting resin. The model thus obtained is used in real-time process control for fabricating composite materials.

  17. Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob

    2007-01-01

    For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.

  18. Geomorphic tipping points: convenient metaphor or fundamental landscape property?

    NASA Astrophysics Data System (ADS)

    Lane, Stuart

    2016-04-01

    In 2000 Malcolm Gladwell published a book that has done much to publicise Tipping Points in society but also in academia. His arguments, re-expressed in a geomorphic sense, have three core elements: (1) a "Law of the Few", where rapid change results from the effects of a relatively restricted number of critical elements, ones that are able to rapidly connect systems together, that are particularly sensitive to an external force, or that are spatially organised in a particular way; (2) a "Stickiness", where an element of the landscape is able to assimilate characteristics which make it progressively more applicable to the "Law of the Few"; and (3), given (1) and (2), a history and a geography that mean that the same force can have dramatically different effects, according to where and when it occurs. Expressed in this way, it is not clear that Tipping Points bring much to our understanding in geomorphology that existing concepts (e.g. landscape sensitivity and recovery; cusp-catastrophe theory; non-linear dynamical systems) do not already provide. It may also be all too easy to describe change in geomorphology as involving a Tipping Point: we know that geomorphic processes often involve a non-linear response above a certain critical threshold; we know that landscapes can, after Denys Brunsden, be thought of as involving long periods of boredom ("stability") interspersed with brief moments of terror ("change"); but these are not, after Gladwell, sufficient for the term Tipping Point to apply. Following from these issues, this talk will address three themes. First, it will question, through reference to specific examples, notably in high Alpine systems, the extent to which the Tipping Point analogy is truly a property of the world in which we live. Second, it will explore how 'tipping points' become assigned metaphorically, sometimes evolving to the point that they themselves gain agency, that is, shaping the way we interpret landscape rather than vice versa. Third, I will think through what this understanding means for geomorphology in a tipping point world, arguing that if it indeed holds, it presents profound challenges for data collection and modelling that we do not fully appreciate, and will require very different kinds of analyses from those that we are normally accustomed to.

  19. Analysis and control of the METC fluid-bed gasifier. Quarterly report, October 1994--January 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farell, A.E.; Reddy, S.

    1995-03-01

    This document summarizes work performed for the period 10/1/94 to 2/1/95. The initial phase of the work focuses on developing a simple transfer function model of the Fluidized Bed Gasifier (FBG). This transfer function model will be developed based purely on the gasifier responses to step changes in gasifier inputs (including reactor air, convey air, cone nitrogen, FBG pressure, and coal feedrate). The transfer function model will represent a linear, dynamic model that is valid near the operating point at which the data were taken. In addition, a similar transfer function model will be developed using MGAS in order to assess MGAS for use as a model of the FBG for control systems analysis.

  20. Wind turbine model and loop shaping controller design

    NASA Astrophysics Data System (ADS)

    Gilev, Bogdan

    2017-12-01

    A model of a wind turbine is developed, consisting of: a wind speed model, mechanical and electrical models of the generator, and a tower oscillation model. The model of the whole system is linearized around a nominal operating point. From the linear model with uncertainties, an uncertain model is synthesized. Using the uncertain model, an H∞ controller is developed, which stabilizes the rotor frequency and damps the tower oscillations. Finally, the operation of the nonlinear system with the H∞ controller is simulated.

  1. Stability and Optimal Harvesting of Modified Leslie-Gower Predator-Prey Model

    NASA Astrophysics Data System (ADS)

    Toaha, S.; Azis, M. I.

    2018-03-01

    This paper studies a modified Leslie-Gower predator-prey model. The model is stated as a system of first-order differential equations and consists of one predator and one prey. A Holling type II predation function is considered in this model. The predator and prey populations are assumed to be beneficial, and the two populations are harvested with constant efforts. Existence and stability of the interior equilibrium point are analysed. The linearization method is used to obtain the linearized model, and the eigenvalues are used to justify the stability of the interior equilibrium point. From the analyses, we show that under a certain condition the interior equilibrium point exists and is locally asymptotically stable. For the model with constant harvesting efforts, cost, revenue, and profit functions are considered. The stable interior equilibrium point is then related to the maximum profit problem as well as the net present value of revenues problem. We show that there exists a value of the efforts that maximizes the profit function and the net present value of revenues while the interior equilibrium point remains stable. This means that the populations can coexist for a long time while also maximizing the benefit, even though the populations are harvested with constant efforts.
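
    The stability check described above can be sketched numerically: locate an interior equilibrium with a root finder and inspect the eigenvalues of a finite-difference Jacobian. The functional form below follows a common modified Leslie-Gower variant with Holling type II predation from the literature; the parameter values and harvesting efforts are illustrative, not the paper's.

    ```python
    # Sketch: interior equilibrium and local stability of a modified
    # Leslie-Gower model with Holling type II predation and constant harvesting.
    import numpy as np
    from scipy.optimize import fsolve

    r1, r2, b1, a1, a2, k1, k2 = 1.0, 0.5, 0.1, 0.6, 0.4, 1.0, 1.2
    E1, E2 = 0.05, 0.05                    # constant harvesting efforts

    def rhs(z):
        x, y = z
        dx = x * (r1 - b1 * x - a1 * y / (x + k1)) - E1 * x
        dy = y * (r2 - a2 * y / (x + k2)) - E2 * y
        return np.array([dx, dy])

    eq = fsolve(rhs, [5.0, 5.0])           # interior equilibrium (numerical)

    def jacobian(f, z, eps=1e-6):
        J = np.zeros((2, 2))
        for j in range(2):
            dz = np.zeros(2); dz[j] = eps
            J[:, j] = (f(z + dz) - f(z - dz)) / (2 * eps)
        return J

    eigvals = np.linalg.eigvals(jacobian(rhs, eq))
    print(f"equilibrium ≈ {eq}, eigenvalues = {eigvals}, "
          f"stable: {np.all(eigvals.real < 0)}")
    ```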

  2. Binding affinity toward human prion protein of some anti-prion compounds - Assessment based on QSAR modeling, molecular docking and non-parametric ranking.

    PubMed

    Kovačević, Strahinja; Karadžić, Milica; Podunavac-Kuzmanović, Sanja; Jevrić, Lidija

    2018-01-01

    The present study is based on quantitative structure-activity relationship (QSAR) analysis of the binding affinity toward human prion protein (huPrP(C)) of quinacrine, pyridine dicarbonitrile, diphenylthiazole and diphenyloxazole analogs, applying different linear and non-linear chemometric regression techniques, including univariate linear regression, multiple linear regression, partial least squares regression and artificial neural networks. The QSAR analysis identified molecular lipophilicity as an important factor contributing to the binding affinity. Principal component analysis was used to reveal similarities and dissimilarities among the studied compounds. An analysis of in silico absorption, distribution, metabolism, excretion and toxicity (ADMET) parameters was conducted. The ranking of the studied analogs on the basis of their ADMET parameters was done by applying the sum of ranking differences, a relatively new chemometric method. The main aim of the study was to reveal the most important molecular features whose changes lead to changes in the binding affinities of the studied compounds. A complementary view of the binding affinity of the most promising analogs was established through molecular docking analysis. The results of the molecular docking were in agreement with the experimental outcome. Copyright © 2017 Elsevier B.V. All rights reserved.
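
    One of the regression techniques named above, partial least squares, can be sketched in a few lines. The descriptor matrix and affinities below are random stand-ins, with one column playing the role of a lipophilicity descriptor; this is not the study's dataset.

    ```python
    # Sketch: a PLS QSAR model relating descriptors to binding affinity
    # (random stand-in data, cross-validated R^2 as the quality metric).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(40, 12))            # 12 hypothetical descriptors
    logP = X[:, 0]                           # pretend column 0 is lipophilicity
    y = 1.8 * logP + 0.4 * X[:, 3] + rng.normal(0, 0.3, 40)

    pls = PLSRegression(n_components=3)
    r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {r2_cv.mean():.2f} ± {r2_cv.std():.2f}")
    ```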

  3. Oil Formation Volume Factor Determination Through a Fused Intelligence

    NASA Astrophysics Data System (ADS)

    Gholami, Amin

    2016-12-01

    The change in oil volume between reservoir conditions and standard surface conditions is called the oil formation volume factor (FVF), which is very time-, cost- and labor-intensive to determine. This study proposes an accurate, rapid and cost-effective approach for determining FVF from reservoir temperature, dissolved gas-oil ratio, and the specific gravity of both oil and dissolved gas. Firstly, the structural risk minimization (SRM) principle of support vector regression (SVR) was employed to construct a robust model for estimating FVF from the aforementioned inputs. Subsequently, alternating conditional expectation (ACE) was used to approximate optimal transformations of the input/output data into more highly correlated data, and consequently to develop a sophisticated model between the transformed data. Eventually, a committee machine with SVR and ACE was constructed through the use of a hybrid genetic algorithm-pattern search (GA-PS). The committee machine integrates the ACE and SVR models in an optimal linear combination that benefits from both methods. A group of 342 data points was used for model development and a group of 219 data points was used for blind testing the constructed model. Results indicated that the committee machine performed better than the individual models.
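
    The committee-machine step lends itself to a short sketch. The paper optimizes the combination weights with a hybrid GA-pattern search; the stand-in below instead fits the weights by ordinary least squares on held-out data, and the "SVR" and "ACE" predictions are simulated placeholders.

    ```python
    # Combine two models' predictions in an optimal linear combination.
    import numpy as np

    def combine(pred_a, pred_b, y_val):
        """Least-squares weights for w0 + w1*pred_a + w2*pred_b."""
        A = np.column_stack([np.ones_like(pred_a), pred_a, pred_b])
        w, *_ = np.linalg.lstsq(A, y_val, rcond=None)
        return w

    rng = np.random.default_rng(0)
    y = rng.uniform(1.0, 2.0, 100)            # hypothetical FVF values
    p_svr = y + rng.normal(0, 0.05, 100)      # stand-in for SVR predictions
    p_ace = y + rng.normal(0, 0.08, 100)      # stand-in for ACE predictions
    w = combine(p_svr, p_ace, y)
    committee = w[0] + w[1] * p_svr + w[2] * p_ace
    print(w, np.mean((committee - y) ** 2))   # combined weights and the committee MSE
    ```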

  4. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus driving the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult, so understanding scaling is a key issue for advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified, physically meaningful modeling approach at grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale, and a conceptual storage model was employed to simulate the infiltration process at grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
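
    For the point-scale side, the Green-Ampt model reduces to one implicit equation per time step. A minimal sketch, with soil parameters that are illustrative rather than taken from the study:

    ```python
    # Green-Ampt cumulative infiltration F(t) solves
    #   K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)).
    import numpy as np
    from scipy.optimize import brentq

    K, psi, dtheta = 1.0, 11.0, 0.3      # cm/h, wetting-front suction (cm), deficit (assumed)

    def cumulative_infiltration(t):
        g = lambda F: F - psi * dtheta * np.log(1 + F / (psi * dtheta)) - K * t
        return brentq(g, 1e-9, 1e3)      # bracket and solve the implicit equation

    for t in (0.5, 1.0, 2.0):            # hours
        F = cumulative_infiltration(t)
        f = K * (1 + psi * dtheta / F)   # instantaneous infiltration rate
        print(f"t={t:.1f} h  F={F:.2f} cm  f={f:.2f} cm/h")
    ```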

  5. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.

  6. Optimal Design of Spring Characteristics of Damper for Subharmonic Vibration in Automatic Transmission Powertrain

    NASA Astrophysics Data System (ADS)

    Nakae, T.; Ryu, T.; Matsuzaki, K.; Rosbi, S.; Sueoka, A.; Takikawa, Y.; Ooi, Y.

    2016-09-01

    In the torque converter, the damper of the lock-up clutch is used to effectively absorb torsional vibration. The damper is designed using a piecewise-linear spring with three stiffness stages. However, a nonlinear vibration, referred to as a subharmonic vibration of order 1/2, occurs around the switching point of the piecewise-linear restoring torque characteristics because of the nonlinearity. In the present study, we analyze how to reduce this subharmonic vibration. The model used herein includes the torque converter, the gear train, and the differential gear. The damper is modeled as a rotational spring with piecewise-linear characteristics. We focus on the optimum design of the spring characteristics of the damper in order to suppress the subharmonic vibration. A piecewise-linear spring with five stiffness stages is proposed, and the effect of the distance between switching points on the subharmonic vibration is investigated. The results of our analysis indicate that the subharmonic vibration can be suppressed by designing a damper with five stiffness stages to have a small spring-constant ratio between neighboring springs. The distances between switching points must be designed to be large enough that the amplitude of the main frequency component of the system does not reach the neighboring switching point.
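
    The five-stage restoring characteristic is easy to state in code. A minimal sketch with invented switching angles and stiffnesses whose neighboring ratios are kept small, following the design rule suggested above:

    ```python
    # Piecewise-linear restoring torque with five stiffness stages.
    import numpy as np

    theta_sw = [0.0, 0.05, 0.11, 0.18, 0.26]   # switching angles, rad (assumed)
    k = [100.0, 130.0, 170.0, 220.0, 290.0]    # stage stiffnesses, ratio ~1.3 (assumed)

    def restoring_torque(theta):
        s, x, tau = np.sign(theta), abs(theta), 0.0
        for i in range(len(k)):
            upper = theta_sw[i + 1] if i + 1 < len(theta_sw) else np.inf
            travel = min(x, upper) - theta_sw[i]   # deflection spent in stage i
            if travel <= 0:
                break
            tau += k[i] * travel
        return s * tau

    print([round(restoring_torque(t), 2) for t in (0.03, 0.08, 0.30)])
    ```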

  7. Comparing the sensitivity of linear and volumetric MRI measurements to detect changes in the size of vestibular schwannomas in patients with neurofibromatosis type 2 on bevacizumab treatment.

    PubMed

    Morris, Katrina A; Parry, Allyson; Pretorius, Pieter M

    2016-09-01

    To compare the sensitivity of linear and volumetric measurements on MRI in detecting schwannoma progression in patients with neurofibromatosis type 2 on bevacizumab treatment, as well as the extent to which this depends on the size of the tumour. We retrospectively compared changes in linear tumour dimensions at a range of thresholds to volumetric tumour measurements performed using Brainlab iPlan(®) software (Feldkirchen, Germany), classified for tumour progression according to the Response Evaluation in Neurofibromatosis and Schwannomatosis (REiNS) criteria. Assessment of 61 schwannomas in 46 patients with a median follow-up of 20 months (range 3-43 months) was performed. There was a mean of 7 time points per tumour (range 2-12 time points). Using the volumetric REiNS criteria as the gold standard, a sensitivity of 86% was achieved for linear measurement using a 2-mm threshold to define progression. We propose that a change in linear measurement of 2 mm (particularly in tumours with starting diameters of 20-30 mm, the majority of this cohort) could be used as a filter to identify cases of possible progression requiring volumetric analysis. This pragmatic approach can be used if stabilization of a previously growing schwannoma is sufficient for a patient to continue treatment. We demonstrate the real-world limitations of linear vs volumetric measurement in tumour response assessment and identify limited circumstances where linear measurements can be used to determine which patients require the more resource-intensive volumetric measurements.

  8. A Knowledge Discovery from POS Data using State Space Models

    NASA Astrophysics Data System (ADS)

    Sato, Tadahiko; Higuchi, Tomoyuki

    The number of competing brands changes when a new product enters. New product introduction is endemic among consumer packaged goods firms and is an integral component of their marketing strategy. As a new product's entry affects markets, there is a pressing need to develop market response models that can adapt to such changes. In this paper, we develop a dynamic model that captures the underlying evolution of the buying behavior associated with the new product. This extends the dynamic linear model, which is used in a number of time series analyses, by allowing the observed dimension to change at some point in time. Our model copes with the problems that dynamic environments entail: changes in parameters over time and changes in the observed dimension. We formulate the model within the framework of a state space model and estimate it using a modified Kalman filter/fixed-interval smoother. We find that a new product's entry (1) decreases brand differentiation for existing brands, as indicated by a decreasing difference between cross-price elasticities; (2) decreases commodity power for existing brands, as indicated by a decreasing trend; and (3) decreases the effect of discounts for existing brands, as indicated by a decrease in the magnitude of own-brand price elasticities. The proposed framework is directly applicable to other fields in which the observed dimension might change, such as economics, bioinformatics, and so forth.
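
    A minimal sketch of the underlying state space machinery: a local-level dynamic linear model filtered with the standard Kalman recursions. The paper's modification additionally lets the observation dimension change at a point in time; here the dimension stays fixed for brevity.

    ```python
    # Local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t.
    import numpy as np

    def kalman_filter(y, q=0.1, r=1.0):
        x, P, out = 0.0, 1e6, []         # diffuse initial state
        for obs in y:
            P = P + q                    # predict
            K = P / (P + r)              # Kalman gain
            x = x + K * (obs - x)        # update
            P = (1 - K) * P
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(1)
    level = np.cumsum(rng.normal(0, 0.3, 200))   # latent state
    y = level + rng.normal(0, 1.0, 200)          # noisy observations
    print(np.corrcoef(kalman_filter(y), level)[0, 1])
    ```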

  9. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    NASA Technical Reports Server (NTRS)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  10. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
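
    A minimal forward-simulation sketch of the generative idea: put a Gaussian process on the rate and obtain sea level by numerically integrating a draw of that rate. The EIV age errors and the GIA correction are omitted, and the kernel hyperparameters are assumptions.

    ```python
    # Draw a rate process from a squared-exponential GP, then integrate it.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0, 2000, 400)                 # years
    ell, sig = 300.0, 0.5                         # length-scale, sd of rate (assumed)
    K = sig**2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)
    rate = rng.multivariate_normal(np.zeros_like(t), K + 1e-8 * np.eye(len(t)))
    increments = 0.5 * (rate[1:] + rate[:-1]) * np.diff(t)   # trapezoid rule
    sea_level = np.concatenate([[0.0], np.cumsum(increments)])
    print(sea_level[-1])                          # integrated level at the final time
    ```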

  11. Intraplate deformation due to continental collisions: A numerical study of deformation in a thin viscous sheet

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.; Morgan, R. C.

    1985-01-01

    A model of crustal deformation from continental collision that involves the penetration of a rigid punch into a deformable sheet is investigated. A linear viscous flow law is used to compute the magnitude and rate of change of crustal thickness, the velocity of mass points, strain rates and their principal axes, modes of deformation, areal changes, and stress. In general, a free lateral boundary reduces the magnitude of changes in crustal thickening by allowing material to more readily escape the advancing punch. The shearing that occurs diagonally in front of the punch terminates in compression or extension depending on whether the lateral boundary is fixed or free. When the ratio of the diameter of the punch to that of the sheet exceeds one-third, the deformation is insensitive to the choice of lateral boundary conditions. When the punch is rigid with sharply defined edges, deformation is concentrated near the punch corners. With non-rigid punches, shearing results in deformation being concentrated near the center of the punch. Variations with respect to linearity and nonlinearity of flow are discussed.

  12. Developmental changes in emotion recognition from full-light and point-light displays of body movement.

    PubMed

    Ross, Patrick D; Polson, Louise; Grosbras, Marie-Hélène

    2012-01-01

    To date, research on the development of emotion recognition has been dominated by studies on facial expression interpretation; very little is known about children's ability to recognize affective meaning from body movements. In the present study, we acquired simultaneous video and motion capture recordings of two actors portraying four basic emotions (Happiness, Sadness, Fear and Anger). One hundred and seven primary and secondary school children (aged 4-17) and 14 adult volunteers participated in the study. Each participant viewed the full-light and point-light video clips and was asked to make a forced choice as to which emotion was being portrayed. As a group, children performed worse than adults in both the point-light and full-light conditions. Linear regression showed that both age and lighting condition were significant predictors of performance in children. Using piecewise regression, we found that the data were best explained by a bilinear model with a steep improvement in performance until 8.5 years of age, followed by a much slower improvement rate through late childhood and adolescence. These findings confirm that, as for facial expressions, adolescents' recognition of basic emotions from body language is not fully mature and seems to follow a non-linear development. This is in line with observations of non-linear developmental trajectories for different aspects of human stimuli processing (voices and faces), perhaps suggesting a shift from one perceptual or cognitive strategy to another during adolescence. These results have important implications for understanding the maturation of social cognition.
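
    The bilinear (broken-stick) fit reported above can be reproduced with a grid search over candidate change points, fitting two connected segments by least squares at each. A minimal sketch on simulated data shaped like the study's finding:

    ```python
    # Piecewise-linear regression with one change point via a hinge basis.
    import numpy as np

    def fit_bilinear(x, y, candidates):
        best = (np.inf, None, None)
        for c in candidates:
            # columns: intercept, slope, extra slope active after the change point c
            X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ beta) ** 2)
            if sse < best[0]:
                best = (sse, c, beta)
        return best

    rng = np.random.default_rng(2)
    age = rng.uniform(4, 17, 120)
    perf = np.where(age < 8.5, 0.10 * age, 0.85 + 0.012 * (age - 8.5))  # steep, then slow
    perf += rng.normal(0, 0.05, 120)
    sse, cp, beta = fit_bilinear(age, perf, np.linspace(5, 16, 45))
    print(f"estimated change point ~ {cp:.1f} years")
    ```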

  13. On Mechanical Transitions in Biologically Motivated Soft Matter Systems

    NASA Astrophysics Data System (ADS)

    Fogle, Craig

    The notion of phase transitions as a characterization of a change in physical properties pervades modern physics. Such abrupt and fundamental changes in the behavior of physical systems are evident in condensed matter systems and also occur in nuclear and subatomic settings. While this concept is less prevalent in the field of biology, recent advances have pointed to its relevance in a number of settings; recent studies have modeled both the cell cycle and cancer as phase transitions in physical systems. In this dissertation we construct simplified models for two biological systems, and as described by those models, both systems exhibit phase transitions. The first model is inspired by the shape transition in the nuclei of neutrophils during differentiation, in which the nucleus transitions from spherical to a shape often described as "beads on a string." As a simplified model of this system, we investigate the spherical-to-wrinkled transition in an elastic core bound to a fluid shell. We find that this model exhibits a first-order phase transition, and the shape that minimizes the energy of the system scales as (μr^3/κ). The second system studied is motivated by the dynamics of globular proteins. These proteins may undergo conformational changes with large displacements relative to their size; transitions between conformational states are not possible if the dynamics are governed strictly by linear elasticity. We construct a model consisting of a predominantly elastic region near the energetic minimum of the system and a non-linear softening at a critical displacement. We find that this simple model displays very rich dynamics, including a sharp dynamical phase transition and driving-force-dependent symmetry breaking.

  14. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
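
    One family of metrics in this spirit, shown as a hedged sketch rather than the paper's exact recommendations: variance inflation factors and the condition number of the column-scaled term matrix, both of which flag near-linear dependencies between candidate regression model terms.

    ```python
    # Diagnose near-linear dependence between regression model terms.
    import numpy as np

    def vif(X):
        """VIF_j = 1/(1 - R_j^2), regressing term j on the remaining terms."""
        out = []
        for j in range(X.shape[1]):
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ beta
            r2 = 1.0 - resid.var() / X[:, j].var()
            out.append(1.0 / max(1.0 - r2, 1e-12))
        return np.array(out)

    rng = np.random.default_rng(3)
    f1, f2 = rng.normal(size=(2, 200))
    terms = np.column_stack([f1, f2, f1 * f2, f1 + 0.01 * rng.normal(size=200)])
    print("VIF:", vif(terms).round(1))                 # last column is nearly collinear
    scaled = terms / np.linalg.norm(terms, axis=0)
    print("condition number:", np.linalg.cond(scaled))
    ```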

  15. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China.

    PubMed

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.

  16. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China

    NASA Astrophysics Data System (ADS)

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.

  17. Analysis of an inventory model for both linearly decreasing demand and holding cost

    NASA Astrophysics Data System (ADS)

    Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.

    2016-03-01

    This study analyses an inventory model with linearly decreasing demand and a linearly decreasing holding cost for non-instantaneous deteriorating items. The model focuses on commodities with linearly decreasing demand and no shortages. The holding cost does not remain uniform over time, owing to variation in the time value of money; here we consider a holding cost that decreases with time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is included.

  18. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
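
    A minimal sketch of one such test: permute the response to break any association with the predictor, and compare the observed slope against the permutation distribution. (The coupling with alternative estimators such as quantile regression is not shown.)

    ```python
    # Permutation test for the slope of a simple linear model.
    import numpy as np

    def perm_test_slope(x, y, n_perm=5000, seed=0):
        rng = np.random.default_rng(seed)
        slope = lambda yy: np.polyfit(x, yy, 1)[0]
        observed = slope(y)
        null = np.array([slope(rng.permutation(y)) for _ in range(n_perm)])
        p = np.mean(np.abs(null) >= abs(observed))   # two-sided p-value
        return observed, p

    rng = np.random.default_rng(4)
    x = rng.normal(size=60)
    y = 0.4 * x + rng.normal(size=60)
    b, p = perm_test_slope(x, y)
    print(f"slope = {b:.3f}, permutation p = {p:.4f}")
    ```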

  19. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  20. Analyzing Multilevel Data: Comparing Findings from Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2013-01-01

    This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…

  1. Compensation for loads during arm movements using equilibrium-point control.

    PubMed

    Gribble, P L; Ostry, D J

    2000-12-01

    A significant problem in motor control is how information about movement error is used to modify control signals to achieve desired performance. A potential source of movement error and one that is readily controllable experimentally relates to limb dynamics and associated movement-dependent loads. In this paper, we have used a position control model to examine changes to control signals for arm movements in the context of movement-dependent loads. In the model, based on the equilibrium-point hypothesis, equilibrium shifts are adjusted directly in proportion to the positional error between desired and actual movements. The model is used to simulate multi-joint movements in the presence of both "internal" loads due to joint interaction torques, and externally applied loads resulting from velocity-dependent force fields. In both cases it is shown that the model can achieve close correspondence to empirical data using a simple linear adaptation procedure. An important feature of the model is that it achieves compensation for loads during movement without the need for either coordinate transformations between positional error and associated corrective forces, or inverse dynamics calculations.

  2. Home advantage in southern hemisphere rugby union: national and international.

    PubMed

    Morton, R. Hugh

    2006-05-01

    This study evaluates home advantage for both national (Super 12) and international (Tri-nations) rugby union teams from South Africa, Australia and New Zealand over the five-year period 2000-2004, using linear modelling. These home advantages are examined for statistical and practical significance, for variability between teams, for stability over time and for inter-correlation. These data reveal that the overall home advantage in elite rugby union has a mean of +6.7 points, and that this changes little from year to year. Closer scrutiny nevertheless reveals a high degree of variability. Different teams can and do have different home advantages, which range from a low of -0.7 to a high of +28.3 points in any one year. Furthermore, some teams' home advantages change up or down from one year to the next, by as much as -36.5 to +31.4 points at the extremes. There is no evidence that the stronger teams have the higher home advantages, or that a high home advantage leads to a superior finishing position in the competition.

  3. Proceedings of the Third International Workshop on Multistrategy Learning, May 23-25 Harpers Ferry, WV.

    DTIC Science & Technology

    1996-09-16

    The approaches are:
    • Adaptive filtering
    • Single exponential smoothing (Brown, 1963)
    • The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976)
    • Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960)
    • Winters' three-parameter method (Winters, 1960)
    However, there are two crucial disadvantages. The most important point in ARIMA modeling is model identification. As shown in

  4. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision on whether to (1) remove/retain the calibration data; or (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
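
    For the analytical class of diagnostics, Cook's distance has a closed form in the linear case. A minimal sketch on a plain linear regression (the study adapts the idea to hydrological models, which is not reproduced here):

    ```python
    # Cook's distance from the hat matrix of a linear regression.
    import numpy as np

    def cooks_distance(X, y):
        X1 = np.column_stack([np.ones(len(X)), X])
        H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T   # hat matrix
        h = np.diag(H)
        resid = y - H @ y
        p = X1.shape[1]
        s2 = resid @ resid / (len(y) - p)
        return resid**2 * h / (p * s2 * (1 - h) ** 2)

    rng = np.random.default_rng(5)
    x = rng.normal(size=50)
    y = 2.0 * x + rng.normal(size=50)
    x[0], y[0] = 4.0, -8.0                          # plant one influential point
    D = cooks_distance(x.reshape(-1, 1), y)
    print("most influential observation:", int(np.argmax(D)), "D =", float(D.max()))
    ```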

  5. Generation of linear dynamic models from a digital nonlinear simulation

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.

    1979-01-01

    The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaging positive and negative perturbations in the state variables can reduce numerical errors in finite-difference partial derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and transients are compared with the nonlinear simulation; the problem of startup transients in the nonlinear simulation when making these comparisons is addressed. Reduction of the linear models is also investigated using the modal and normal techniques, and reduced-order models of the F100 are derived and compared with the full-state models.
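
    The averaged-perturbation scheme amounts to a central difference on the nonlinear state derivative function. A minimal sketch with a hypothetical two-state plant standing in for the engine simulation:

    ```python
    # Build the state matrix A column by column from +/- perturbations.
    import numpy as np

    def linearize(f, x0, u0, h=1e-4):
        n = len(x0)
        A = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            A[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2 * h)  # averaged +/- steps
        return A

    f = lambda x, u: np.array([-x[0] ** 2 + x[1] + u,
                               -0.5 * x[1] + np.sin(x[0])])
    A = linearize(f, np.array([1.0, 0.5]), 0.2)
    print(A)   # analytic value at this point: [[-2, 1], [cos(1), -0.5]]
    ```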

  6. The Influential Effect of Blending, Bump, Changing Period, and Eclipsing Cepheids on the Leavitt Law

    NASA Astrophysics Data System (ADS)

    García-Varela, A.; Muñoz, J. R.; Sabogal, B. E.; Vargas Domínguez, S.; Martínez, J.

    2016-06-01

    The investigation of the nonlinearity of the Leavitt law (LL) is a topic that began more than seven decades ago, when some of the studies in this field found that the LL has a break at about 10 days. The goal of this work is to investigate a possible statistical cause of this nonlinearity. By applying linear regressions to OGLE-II and OGLE-IV data, we find that to obtain the LL by using linear regression, robust techniques to deal with influential points and/or outliers are needed instead of the ordinary least-squares regression traditionally used. In particular, by using M- and MM-regressions we establish firmly and without doubt the linearity of the LL in the Large Magellanic Cloud, without rejecting or excluding Cepheid data from the analysis. This implies that light curves of Cepheids suggesting blending, bumps, eclipses, or period changes do not affect the LL for this galaxy. For the Small Magellanic Cloud, when including Cepheids of this kind, it is not possible to find an adequate model, probably because of the geometry of the galaxy. In that case, a possible influence of these stars could exist.
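
    A minimal sketch of the M-estimation idea on simulated stand-in data (assumed slope and noise, with a few artificially over-bright "blended" stars), using statsmodels' robust linear model:

    ```python
    # Compare OLS with a Huber M-estimator on a contaminated PL relation.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    logP = rng.uniform(0.0, 1.8, 300)                   # log10 period (days)
    mag = 17.0 - 2.8 * logP + rng.normal(0, 0.15, 300)  # assumed PL relation
    mag[:10] -= 1.5                                     # blended, too-bright outliers

    X = sm.add_constant(logP)
    ols = sm.OLS(mag, X).fit()
    rob = sm.RLM(mag, X, M=sm.robust.norms.HuberT()).fit()
    print("OLS slope:", round(ols.params[1], 3),
          "robust slope:", round(rob.params[1], 3))
    ```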

  7. Influence of the surface magnetic field of a cylindrical permanent magnet on the maximum levitation force in high-Tc superconductors

    NASA Astrophysics Data System (ADS)

    Zhao, Xian-Feng; Liu, Yuan

    2006-06-01

    In this paper we present the dependence of the maximum levitation force (FzMax) of a high-Tc superconductor on the surface magnetic field (Bs) of a cylindrical permanent magnet, based on the Bean critical state model and Ampère's law. A transition point of Bs is found at which the relation between FzMax and Bs changes: while the surface magnetic field is less than the transition point the dependence follows a nonlinear function; above it, the dependence is linear. The two regimes are estimated to correspond to partial penetration of the shielding currents into the interior of the superconductor below the transition point and complete penetration above it, respectively. Furthermore, the influence of the geometrical properties of superconductors on the transition point of Bs is discussed; the transition point is found to depend on the radius and thickness of the superconductor through a quadratic polynomial function. Some optimum contours of the transition point of Bs are presented in order to achieve large levitation forces.

  8. Overgeneral autobiographical memory predicts changes in depression in a community sample.

    PubMed

    Van Daele, Tom; Griffith, James W; Van den Bergh, Omer; Hermans, Dirk

    2014-01-01

    This study investigated whether overgeneral autobiographical memory (OGM) predicts the course of symptoms of depression and anxiety in a community sample, after 5, 6, 12 and 18 months. Participants (N=156) completed the Autobiographical Memory Test and the Depression Anxiety Stress Scales-21 (DASS-21) at baseline and were subsequently reassessed using the DASS-21 at four time points over a period of 18 months. Using latent growth curve modelling, we found that OGM was associated with a linear increase in depression. We were unable to detect changes over time in anxiety. OGM may be an important marker to identify people at risk for depression in the future, but more research is needed with anxiety.

  9. Three-dimensional computer-assisted study model analysis of long-term oral-appliance wear. Part 1: Methodology.

    PubMed

    Chen, Hui; Lowe, Alan A; de Almeida, Fernanda Riberiro; Wong, Mary; Fleetham, John A; Wang, Bangkang

    2008-09-01

    The aim of this study was to test a 3-dimensional (3D) computer-assisted dental model analysis system that uses selected landmarks to describe tooth movement during treatment with an oral appliance. Dental casts of 70 patients diagnosed with obstructive sleep apnea and treated with oral appliances for a mean time of 7 years 4 months were evaluated with a 3D digitizer (MicroScribe-3DX, Immersion, San Jose, Calif) compatible with the Rhinoceros modeling program (version 3.0 SR3c, Robert McNeel & Associates, Seattle, Wash). A total of 86 landmarks on each model were digitized, and 156 variables were calculated as either the linear distance between points or the distance from points to reference planes. Four study models for each patient (maxillary baseline, mandibular baseline, maxillary follow-up, and mandibular follow-up) were superimposed on 2 sets of reference points: 3 points on the palatal rugae for maxillary model superimposition, and 3 occlusal contact points for the same set of maxillary and mandibular model superimpositions. The patients were divided into 3 evaluation groups by 5 orthodontists based on the changes between baseline and follow-up study models. Digital dental measurements could be analyzed, including arch width, arch length, curve of Spee, overbite, overjet, and the anteroposterior relationship between the maxillary and mandibular arches. A method error within 0.23 mm in 14 selected variables was found for the 3D system. The statistical differences in the 3 evaluation groups verified the division criteria determined by the orthodontists. The system provides a method to record 3D measurements of study models that permits computer visualization of tooth position and movement from various perspectives.

  10. A megahertz-frequency tunable piecewise-linear electromechanical resonator realized via nonlinear feedback

    NASA Astrophysics Data System (ADS)

    Bajaj, Nikhil; Chiu, George T.-C.; Rhoads, Jeffrey F.

    2018-07-01

    Vibration-based sensing modalities traditionally have relied upon monitoring small shifts in natural frequency in order to detect structural changes (such as those in mass or stiffness). In contrast, bifurcation-based sensing schemes rely on the detection of a qualitative change in the behavior of a system as a parameter is varied. This can produce easy-to-detect changes in response amplitude with high sensitivity to structural change, but requires resonant devices with specific dynamic behavior which is not always easily reproduced. Desirable behavior for such devices can be produced reliably via nonlinear feedback circuitry, but has in past efforts been largely limited to sub-MHz operation, partially due to the time delay limitations present in certain nonlinear feedback circuits, such as multipliers. This work demonstrates the design and implementation of a piecewise-linear resonator realized via diode- and integrated circuit-based feedback electronics and a quartz crystal resonator. The proposed system is fabricated and characterized, and the creation and selective placement of the bifurcation points of the overall electromechanical system is demonstrated by tuning the circuit gains. The demonstrated circuit operates at 16 MHz. Preliminary modeling and analysis is presented that qualitatively agrees with the experimentally-observed behavior.

  11. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.

  12. Longitudinal Study of the Transition From Healthy Aging to Alzheimer Disease

    PubMed Central

    Johnson, David K.; Storandt, Martha; Morris, John C.; Galvin, James E.

    2009-01-01

    Background: Detection of the earliest cognitive changes signifying Alzheimer disease is difficult. Objective: To model the cognitive decline in preclinical Alzheimer disease. Design: Longitudinal archival study comparing individuals who became demented during follow-up and people who remained nondemented on each of 4 cognitive factors: global, verbal memory, visuospatial, and working memory. Setting: Alzheimer Disease Research Center, Washington University School of Medicine, St Louis, Missouri. Participants: One hundred thirty-four individuals who became demented during follow-up and 310 who remained nondemented. Main Outcome Measures: Inflection point in longitudinal cognitive performance. Results: The best-fitting model for each of the 4 factors in the stable group was linear, with a very slight downward trend on all but the Visuospatial factor. In contrast, a piecewise model with accelerated slope after a sharp inflection point provided the best fit for the group that progressed. The optimal inflection point for all 4 factors was prior to diagnosis of dementia: Global, 2 years; Verbal and Working Memory, 1 year; and Visuospatial, 3 years. These results were also obtained when data were limited to the subset (n=44) with autopsy-confirmed Alzheimer disease. Conclusions: There is a sharp inflection point followed by accelerating decline in multiple domains of cognition, not just memory, in the preclinical period in Alzheimer disease when there is insufficient cognitive decline to warrant clinical diagnosis using conventional criteria. Early change was seen in tests of visuospatial ability, most of which were speeded. Research into early detection of cognitive disorders using only episodic memory tasks may not be sensitive to all of the early manifestations of disease. PMID:19822781

  13. Rate and State Friction Relation for Nanoscale Contacts: Thermally Activated Prandtl-Tomlinson Model with Chemical Aging

    NASA Astrophysics Data System (ADS)

    Tian, Kaiwen; Goldsby, David L.; Carpick, Robert W.

    2018-05-01

    Rate and state friction (RSF) laws are widely used empirical relationships that describe macroscale to microscale frictional behavior. They entail a linear combination of the direct effect (the increase of friction with sliding velocity due to the reduced influence of thermal excitations) and the evolution effect (the change in friction with changes in contact "state," such as the real contact area or the degree of interfacial chemical bonds). Recent atomic force microscope (AFM) experiments and simulations found that nanoscale single-asperity amorphous silica-silica contacts exhibit logarithmic aging (increasing friction with time) over several decades of contact time, due to the formation of interfacial chemical bonds. Here we establish a physically based RSF relation for such contacts by combining the thermally activated Prandtl-Tomlinson (PTT) model with an evolution effect based on the physics of chemical aging. This thermally activated Prandtl-Tomlinson model with chemical aging (PTTCA), like the PTT model, uses the loading point velocity for describing the direct effect, not the tip velocity (as in conventional RSF laws). Also, in the PTTCA model, the combination of the evolution and direct effects may be nonlinear. We present AFM data consistent with the PTTCA model whereby in aging tests, for a given hold time, static friction increases with the logarithm of the loading point velocity. Kinetic friction also increases with the logarithm of the loading point velocity at sufficiently high velocities, but at a different increasing rate. The discrepancy between the rates of increase of static and kinetic friction with velocity arises from the fact that appreciable aging during static contact changes the energy landscape. Our approach extends the PTT model, originally used for crystalline substrates, to amorphous materials. It also establishes how conventional RSF laws can be modified for nanoscale single-asperity contacts to provide a physically based friction relation for nanoscale contacts that exhibit chemical bond-induced aging, as well as other aging mechanisms with similar physical characteristics.

  14. Factors influencing the ablative efficiency of high intensity focused ultrasound (HIFU) treatment for adenomyosis: A retrospective study.

    PubMed

    Gong, Chunmei; Yang, Bin; Shi, Yarong; Liu, Zhongqiong; Wan, Lili; Zhang, Hong; Jiang, Denghua; Zhang, Lian

    2016-08-01

    Objectives: The aim of this study was to investigate factors affecting the ablative efficiency of high intensity focused ultrasound (HIFU) for adenomyosis. Materials and methods: In all, 245 patients with adenomyosis who underwent ultrasound-guided HIFU (USgHIFU) were retrospectively reviewed. All patients underwent dynamic contrast-enhanced magnetic resonance imaging (MRI) before and after HIFU treatment. The non-perfused volume (NPV) ratio, energy efficiency factor (EEF) and greyscale change were set as dependent variables, while the factors possibly affecting ablation efficiency were set as independent variables. These variables were used to build multiple regression models. Results: A total of 245 patients with adenomyosis successfully completed HIFU treatment. Enhancement type on T1-weighted images (T1WI), abdominal wall thickness, volume of the adenomyotic lesion, number of hyperintense points, location of the uterus, and location of adenomyosis all had a linear relationship with the NPV ratio. Distance from the skin to the adenomyotic lesion's ventral side, enhancement type on T1WI, volume of the adenomyotic lesion, abdominal wall thickness, and signal intensity on T2WI all had a linear relationship with EEF. Location of the uterus and abdominal wall thickness also both had a linear relationship with greyscale change. Conclusion: The enhancement type on T1WI, signal intensity on T2WI, volume of adenomyosis, location of the uterus and adenomyosis, number of hyperintense points, abdominal wall thickness, and distance from the skin to the adenomyotic lesion's ventral side can all be used as predictors of the ablative efficiency of HIFU for adenomyosis.

  15. Environmentally Dependent Density-Distance Relationship of Dispersing Culex tarsalis in a Southern California Desert Region.

    PubMed

    Antonić, Oleg; Sudarić-Bogojević, Mirta; Lothrop, Hugh; Merdić, Enrih

    2014-09-01

    The direct inclusion of environmental factors into an empirical model that describes a density-distance relationship (DDR) is demonstrated on dispersal data obtained in a capture-mark-release-recapture (CMRR) experiment with Culex tarsalis conducted around the community of Mecca, CA. Empirical parameters of the standard (environmentally independent) DDR were expressed as linear functions of environmental variables: relative orientation (azimuthal deviation from north) of the release point (relative to each recapture point) and the proportions of habitat types surrounding each recapture point. The resulting regression model (R^2 = 0.5373, after optimization on the best subset of linear terms) suggests that the spatial density of recaptured individuals after 12 days of the CMRR experiment depended significantly on 1) distance from the release point, 2) orientation of recapture points in relation to the release point (favoring dispersal toward the south, probably due to wind drift and the position of periodically flooded habitats suitable for the species' egg clutches), and 3) the habitat spectrum in the surroundings of recapture points (with population density increasing in desert and decreasing in urban environments).

  16. Relationships Between Changes in Patient-Reported Health Status and Functional Capacity in Outpatients With Heart Failure

    PubMed Central

    Flynn, Kathryn E.; Lin, Li; Moe, Gordon W.; Howlett, Jonathan G.; Fine, Lawrence J.; Spertus, John A.; McConnell, Timothy R.; Piña, Ileana L.; Weinfurt, Kevin P.

    2011-01-01

    Background: Heart failure trials use a variety of measures of functional capacity and quality of life. Lack of formal assessments of the relationships between changes in multiple aspects of patient-reported health status and measures of functional capacity over time limits the ability to compare results across studies. Methods: Using data from HF-ACTION (N = 2331), we used Pearson correlation coefficients and predicted change scores from linear mixed-effects modeling to demonstrate associations between changes in patient-reported health status measured with the EQ-5D visual analog scale (VAS) and the Kansas City Cardiomyopathy Questionnaire (KCCQ) and changes in peak VO2 and 6-minute walk distance at 3 and 12 months. We examined a 5-point change in KCCQ within individuals to provide a framework for interpreting changes in these measures. Results: After adjustment for baseline characteristics, correlations between changes in the VAS and changes in peak VO2 and 6-minute walk distance ranged from 0.13 to 0.28, and correlations between changes in the KCCQ overall and subscale scores and changes in peak VO2 and 6-minute walk distance ranged from 0.18 to 0.34. A 5-point change in KCCQ was associated with a 2.50 ml/kg/min change in peak VO2 (95% confidence interval, 2.21-2.86) and a 112-meter change in 6-minute walk distance (95% confidence interval, 96-134). Conclusions: Changes in patient-reported health status are not highly correlated with changes in functional capacity. Our findings generally support the current practice of considering a 5-point change in the KCCQ within individuals to be clinically meaningful. Trial Registration: clinicaltrials.gov Identifier: NCT00047437 PMID:22172441

  17. Influence of different base thicknesses on maxillary complete denture processing: linear and angular graphic analysis on the movement of artificial teeth.

    PubMed

    Mazaro, José Vitor Quinelli; Gennari Filho, Humberto; Vedovatto, Eduardo; Amoroso, Andressa Paschoal; Pellizzer, Eduardo Piza; Zavanelli, Adriana Cristina

    2011-09-01

    The purpose of this study was to compare the dental movement that occurs during the processing of maxillary complete dentures with 3 different base thicknesses, using 2 investment methods and microwave polymerization. A sample of 42 denture models was randomly divided into 6 groups (n = 7), with base thicknesses of 1.25, 2.50, and 3.75 mm and gypsum or silicone flask investment. Points were demarcated on the distal surface of the second molars and on the back of the gypsum cast at the alveolar ridge level to allow linear and angular measurement using AutoCAD software. The data were subjected to two-factor analysis of variance with Tukey and Fisher post hoc tests. Angular analysis of the varying methods and their interactions showed a statistical difference (P = 0.023) when the magnitudes of molar inclination were compared. Tooth movement was greater for thin-based prostheses, 1.25 mm (-0.234), than for thick ones, 3.75 mm (0.2395), with antagonistic behavior. Prosthesis investment with silicone (0.053) showed greater vertical change than gypsum investment (0.032). There were differences between the points of analysis, demonstrating that the changes were not symmetric. All groups evaluated showed changes in the position of artificial teeth after processing. The complete denture with a thin base (1.25 mm) and silicone investment showed the worst results, whereas the intermediate thickness (2.50 mm) was demonstrated to be ideal for the denture base.

  18. Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease

    PubMed Central

    Jie, Biao; Liu, Mingxia; Liu, Jun

    2016-01-01

    Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
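
    A hedged reconstruction of the objective's shape, in notation assumed here rather than the paper's exact formulation, with the three regularizers named above:

    ```latex
    % Temporally-constrained group sparse regression (assumed notation):
    % y_t: targets at time-point t, X_t: imaging features, w_t: weights,
    % W_g: weights of brain region g collected across all time-points.
    \begin{align*}
    \min_{W}\;& \sum_{t=1}^{T} \lVert \mathbf{y}_t - X_t \mathbf{w}_t \rVert_2^2
      + \lambda_1 \sum_{g=1}^{G} \lVert W_g \rVert_2
        && \text{(data fit + group term across time-points)} \\
    &+ \lambda_2 \sum_{t=1}^{T-1} \lVert \mathbf{w}_{t+1} - \mathbf{w}_t \rVert_2^2
        && \text{(fused smoothness)} \\
    &+ \lambda_3 \sum_{t=1}^{T-1} \lVert X_{t+1}\mathbf{w}_{t+1} - X_t\mathbf{w}_t \rVert_2^2
        && \text{(output smoothness)}
    \end{align*}
    ```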

  19. The relevance of restrained eating behavior for circadian eating patterns in adolescents

    PubMed Central

    Alexy, Ute; Diederichs, Tanja; Buyken, Anette E.; Roßbach, Sarah

    2018-01-01

    Background Restrained Eating, i.e. the tendency to restrict dietary intake to control body-weight, often emerges during adolescence and may result in changes in circadian eating patterns. Objective The objective of the present investigation was to determine the cross-sectional relevance of restrained eating for characteristics of circadian eating pattern in adolescents and whether changes in restrained eating are accompanied by concurrent changes in circadian eating pattern over the course of adolescence. Methods Two questionnaires assessing restrained eating (Score 0–30) with parallel 3-day weighed dietary records from two different time points were available from 209 (♂:101, ♀:108) 11–18 year old adolescents of the DONALD study. Mixed linear regression models were used to analyze whether restrained eating was associated with eating occasion frequency, snack frequency and morning and evening energy intake [in % of daily energy intake, %E]. Linear regression models were used to examine whether changes in restrained eating were associated with changes in the mentioned variables. Results Among girls, greater restrained eating was cross-sectionally associated with higher morning energy intake (p = 0.03). Further, there was a tendency towards lower evening energy intake with higher levels of restrained eating for the whole sample (p = 0.06). No cross-sectional associations were found with eating occasion or snack frequency. Each one-point increase in restrained eating during adolescence was related to a concurrent decrease in eating occasion frequency by 0.04 (95% CI -0.08; -0.01, p = 0.02) and in evening energy intake by 0.36%E (95% CI -0.70; -0.03, p = 0.04). A tendency towards decreasing snack frequency with increasing restrained eating was observed (β = -0.03, 95% CI -0.07; 0.00, p = 0.07). No association was found between changes in restrained eating and concurrent changes in morning energy intake. Conclusion We found indications for cross-sectional and prospective associations between restrained eating and chronobiological aspects of food intake in adolescents. Our results suggest that restrained eating should be considered a relevant determinant of circadian eating patterns. PMID:29791516

  20. Vegetation physiology controls continental water cycle responses to climate change

    NASA Astrophysics Data System (ADS)

    Lemordant, L. A.; Swann, A. L. S.; Cook, B.; Scheff, J.; Gentine, P.

    2017-12-01

    Predicting how climate change will affect the hydrologic cycle is of utmost importance for ecological systems and for human life and activities. A typical perspective is that global warming will cause an intensification of the mean state, the so-called "dry gets drier, wet gets wetter" paradigm. While this result is robust over the oceans, recent works suggest it may be less appropriate for terrestrial regions. Using Earth System Models (ESMs) with decoupled surface (vegetation physiology, PHYS) and atmospheric (radiative, ATMO) CO2 responses, we show that the CO2 physiological response dominates the change in the continental hydrologic cycle compared to radiative and precipitation changes due to increased atmospheric CO2, counter to previous assumptions. Using multiple linear regression analysis, we estimate the individual contribution of each of the three main drivers: precipitation, radiation and physiological CO2 forcing (see attached figure). Our analysis reveals that physiological effects dominate changes for 3 key indicators of dryness and/or vegetation stress (namely LAI, P-ET and EF) over the largest fraction of the globe, except for soil moisture, which exhibits a more complex response. This highlights the key role of vegetation in controlling future terrestrial hydrologic response. Legend of the attached figure: Decomposition along the three main drivers of LAI (a), P-ET (b), EF (c) in the control run. Green quantifies the effect of the vegetation physiology based on the run PHYS; red and blue quantify the contribution of, respectively, net radiation and precipitation, based on multiple linear regression in ATMO. Pie charts show for each variable the fraction (labelled in %) of land under the main influence (more than 50% of the changes is attributed to this driver) of one of the three main drivers (green for grid points dominated by vegetation physiology, red for grid points dominated by net radiation, and blue for grid points dominated by precipitation), and under no single driver influence (grey). Based on an article in review at Nature Climate Change as of Aug 2, 2017.
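
    A minimal sketch of the attribution step the abstract describes: regress the change in a dryness indicator on the changes in its candidate drivers and compare standardized contributions. All variables below are invented stand-ins for the ESM fields, not the study's data:

```python
# Attribution by multiple linear regression: which driver dominates dLAI?
import numpy as np

rng = np.random.default_rng(5)
n = 500                                    # grid points
dP = rng.standard_normal(n)                # change in precipitation
dRn = rng.standard_normal(n)               # change in net radiation
dPhys = rng.standard_normal(n)             # physiological CO2 forcing proxy
# Synthetic indicator change with a dominant physiological contribution
dLAI = 0.2 * dP + 0.1 * dRn + 0.6 * dPhys + 0.1 * rng.standard_normal(n)

X = np.c_[np.ones(n), dP, dRn, dPhys]
beta, *_ = np.linalg.lstsq(X, dLAI, rcond=None)
# Standardized contribution of each driver (share of explained change)
contrib = np.abs(beta[1:]) * np.std(np.c_[dP, dRn, dPhys], axis=0)
print(dict(zip(["precip", "radiation", "physiology"], contrib / contrib.sum())))
```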

  1. Single toxin dose-response models revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demidenko, Eugene, E-mail: eugened@dartmouth.edu

    The goal of this paper is to offer a rigorous analysis of the sigmoid shape single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models, Hill, logit, probit, and Weibull, is provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with a nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO{sub 4} toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
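
    A minimal sketch of the estimation approach the abstract advocates: a generalized linear model on binomial mortality counts rather than nonlinear regression on rates. Doses and counts below are invented; the probit link gives one of the four sigmoid models discussed:

```python
# Dose-response as a GLM with binomial response (deaths out of n exposed)
import numpy as np
import statsmodels.api as sm

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # hypothetical concentrations
deaths = np.array([1, 3, 9, 15, 19, 20])            # deaths out of n exposed
n = np.full_like(deaths, 20)

X = sm.add_constant(np.log(dose))                   # log-dose as linear predictor
# Probit link; swap in sm.families.links.Logit() for the logit model
model = sm.GLM(np.column_stack([deaths, n - deaths]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()
print(fit.params)    # intercept and slope of the probit dose-response line
```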

  2. A network model of successive partitioning-limited solute diffusion through the stratum corneum.

    PubMed

    Schumm, Phillip; Scoglio, Caterina M; van der Merwe, Deon

    2010-02-07

    As the most exposed point of contact with the external environment, the skin is an important barrier to many chemical exposures, including medications, potentially toxic chemicals and cosmetics. Traditional dermal absorption models treat the stratum corneum lipids as a homogeneous medium through which solutes diffuse according to Fick's first law of diffusion. This approach does not explain non-linear absorption and irregular distribution patterns within the stratum corneum lipids as observed in experimental data. A network model, based on successive partitioning-limited solute diffusion through the stratum corneum, in which the lipid structure is represented by a large, sparse, and regular network whose nodes have variable characteristics, offers an alternative, efficient, and flexible approach to dermal absorption modeling that simulates non-linear absorption data patterns. Four model versions are presented: two linear models, which have unlimited node capacities, and two non-linear models, which have limited node capacities. The non-linear model outputs produce absorption-to-dose relationships that are best characterized quantitatively by power equations, similar to the equations used to describe non-linear experimental data.
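
    The linear/non-linear distinction can be illustrated with a toy partitioning-limited diffusion on a 1-D chain of nodes. The capacity rule, parameters, and update scheme below are illustrative assumptions, not the authors' published formulation; setting capacity to np.inf recovers the unlimited-capacity (linear) behavior:

```python
# Toy partitioning-limited diffusion: transfer is capped by the receiving
# node's free capacity, which produces non-linear absorption-to-dose curves.
import numpy as np

n_nodes, steps = 50, 2000
k = 0.1                       # transfer coefficient per step (assumed)
capacity = 1.0                # per-node capacity; np.inf gives the linear model
c = np.zeros(n_nodes)         # solute content of each node
dose_reservoir = 10.0         # solute available at the skin surface
absorbed = 0.0

for _ in range(steps):
    # flux in from the reservoir, limited by the first node's free capacity
    inflow = min(k * dose_reservoir, capacity - c[0])
    dose_reservoir -= inflow
    c[0] += inflow
    # node-to-node transfer, each limited by the receiving node's free capacity
    for i in range(n_nodes - 1):
        flux = min(k * c[i], capacity - c[i + 1])
        c[i] -= flux
        c[i + 1] += flux
    out = k * c[-1]           # exit into deeper tissue
    c[-1] -= out
    absorbed += out

print(absorbed / (absorbed + dose_reservoir + c.sum()))  # fraction absorbed
```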

  3. Sensory processing and world modeling for an active ranging device

    NASA Technical Reports Server (NTRS)

    Hong, Tsai-Hong; Wu, Angela Y.

    1991-01-01

    In this project, we studied world modeling and sensory processing for laser range data. World Model data representation and operation were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing in the Servo and Primitive levels was investigated and implemented. In the Primitive level, linear feature detectors for edges were also implemented, analyzed and compared. The existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as well as the linear feature detectors that were designed and implemented.

  4. Robust Classification and Segmentation of Planar and Linear Features for Construction Site Progress Monitoring and Structural Dimension Compliance Control

    NASA Astrophysics Data System (ADS)

    Maalek, R.; Lichti, D. D.; Ruwanpura, J.

    2015-08-01

    The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained by the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
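
    A sketch of the classification idea only: eigenvalues of a local covariance matrix separate collinear from coplanar neighborhoods. The robust PCA and clustering machinery of the paper is omitted, and the threshold below is an assumption:

```python
# Classify a point neighborhood as linear/planar/volumetric from the
# sorted eigenvalues of its covariance matrix.
import numpy as np

def classify_neighborhood(pts, tol=0.05):
    """pts: (n, 3) array of a point's neighbors. Returns 'linear',
    'planar', or 'volumetric'."""
    centered = pts - pts.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]  # l1>=l2>=l3
    l1, l2, l3 = evals / evals.sum()
    if l2 < tol * l1:
        return "linear"        # one dominant direction: collinear points
    if l3 < tol * l1:
        return "planar"        # two dominant directions: coplanar points
    return "volumetric"

rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(0, 1, (200, 2)), 0.01 * rng.standard_normal(200)]
print(classify_neighborhood(plane))    # -> 'planar'
```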

  5. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    NASA Astrophysics Data System (ADS)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  6. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.

  7. Is the pain visual analogue scale linear and responsive to change? An exploration using Rasch analysis.

    PubMed

    Kersten, Paula; White, Peter J; Tennant, Alan

    2014-01-01

    Pain visual analogue scales (VAS) are commonly used in clinical trials and are often treated as an interval level scale without evidence that this is appropriate. This paper examines the internal construct validity and responsiveness of the pain VAS using Rasch analysis. Patients (n = 221, mean age 67, 58% female) with chronic stable joint pain (hip 40% or knee 60%) of mechanical origin waiting for joint replacement were included. Pain was scored on seven daily VASs. Rasch analysis was used to examine fit to the Rasch model. Responsiveness (Standardized Response Means, SRM) was examined on the raw ordinal data and the interval data generated from the Rasch analysis. Baseline pain VAS scores fitted the Rasch model, although 15 aberrant cases impacted on unidimensionality. There was some local dependency between items but this did not significantly affect the person estimates of pain. Daily pain (item difficulty) was stable, suggesting that single measures can be used. Overall, the SRMs derived from ordinal data overestimated the true responsiveness by 59%. Changes over time at the lower and higher end of the scale were represented by large jumps in interval equivalent data points; in the middle of the scale the reverse was seen. The pain VAS is a valid tool for measuring pain at one point in time. However, the pain VAS does not behave linearly and SRMs vary along the trait of pain. Consequently, Minimum Clinically Important Differences using raw data, or change scores in general, are invalid as these will either under- or overestimate true change; raw pain VAS data should not be used as a primary outcome measure or to inform parametric-based Randomised Controlled Trial power calculations in research studies; and Rasch analysis should be used to convert ordinal data to interval data prior to data interpretation.
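
    For reference, the Standardized Response Mean used above is simply the mean change divided by the standard deviation of change; a minimal sketch with invented scores. The paper's point is that SRMs on raw ordinal VAS data can overstate responsiveness relative to Rasch-transformed interval scores:

```python
# Standardized Response Mean (SRM) = mean(change) / sd(change)
import numpy as np

baseline = np.array([72.0, 65.0, 80.0, 55.0, 60.0, 75.0])   # invented VAS scores
followup = np.array([50.0, 48.0, 66.0, 40.0, 52.0, 58.0])
change = followup - baseline
srm = change.mean() / change.std(ddof=1)
print(f"SRM = {srm:.2f}")
```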

  8. Effects of laser in situ keratomileusis on mental health-related quality of life

    PubMed Central

    Tounaka-Fujii, Kaoru; Yuki, Kenya; Negishi, Kazuno; Toda, Ikuko; Abe, Takayuki; Kouyama, Keisuke; Tsubota, Kazuo

    2016-01-01

    Purpose The aims of our study were to investigate whether laser in situ keratomileusis (LASIK) improves health-related quality of life (HRQoL) and to identify factors that affect postoperative HRQoL. Materials and methods A total of 213 Japanese patients who underwent primary LASIK were analyzed in this study. The average age of patients was 35.0±9.4 years. The subjects were asked to answer questions regarding subjective quality of vision, satisfaction, and quality of life (using the Japanese version of 36-Item Short Form Health Survey Version 2) at three time points: before LASIK, 1 month after LASIK, and 6 months after LASIK. Longitudinal changes over 6 months in the outputs of mental component summary (MCS) score and the physical component summary (PCS) score from the 36-Item Short Form Health Survey Version 2 questionnaire were compared between time points using a linear mixed-effects model. Delta MCS and PCS were calculated by subtracting the postoperative score (1 month after LASIK) from the preoperative score. Preoperative and postoperative factors associated with a change in the MCS score or PCS score were evaluated via a linear regression model. Results The preoperative MCS score was 51.0±9.4 and increased to 52.0±9.8 and 51.5±9.6 at 1 month and 6 months after LASIK, respectively, and the trend for the change from baseline in MCS through 6 months was significant (P=0.03). PCS score did not change following LASIK. Delta MCS was significantly negatively associated with preoperative spherical equivalent, axial length, and postoperative quality of vision, after adjusting for potential confounding factors. Conclusion Mental HRQoL is not lost with LASIK, and LASIK may improve mental HRQoL. Preoperative axial length may predict postoperative mental HRQoL. PMID:27713617

  9. Floquet band structure of a semi-Dirac system

    NASA Astrophysics Data System (ADS)

    Chen, Qi; Du, Liang; Fiete, Gregory A.

    2018-01-01

    In this work we use Floquet-Bloch theory to study the influence of circularly and linearly polarized light on two-dimensional band structures with semi-Dirac band touching points, taking the anisotropic nearest neighbor hopping model on the honeycomb lattice as an example. We find that circularly polarized light opens a gap and induces a band inversion to create a finite Chern number in the two-band model. By contrast, linearly polarized light can either open up a gap (polarized in the quadratically dispersing direction) or split the semi-Dirac band touching point into two Dirac points (polarized in the linearly dispersing direction) by an amount that depends on the amplitude of the light. Motivated by recent pump-probe experiments, we investigated the nonequilibrium spectral properties and momentum-dependent spin texture of our model in the Floquet state following a quench in the absence of phonons, and in the presence of phonon dissipation that leads to a steady state independently of the pump protocol. Finally, we make connections to optical measurements by computing the frequency dependence of the longitudinal and transverse optical conductivity for this two-band model. We analyze the various contributions from interband transitions and different Floquet modes. Our results suggest strategies for optically controlling band structures and experimentally measuring topological Floquet systems.

  10. A position-aware linear solid constitutive model for peridynamics

    DOE PAGES

    Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.

    2015-11-06

    A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.

  11. A position-aware linear solid constitutive model for peridynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.

    A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.

  12. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
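
    A minimal sketch of the technique the abstract reviews: least-squares fitting of a single predictor, using the closed-form estimates. Data are invented for illustration:

```python
# Simple linear regression by the method of least squares
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Closed-form estimates: slope = cov(x, y) / var(x)
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(intercept, slope)    # fitted line: y ≈ intercept + slope * x
```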

  13. Cosmological Constraints from Fourier Phase Statistics

    NASA Astrophysics Data System (ADS)

    Ali, Kamran; Obreschkow, Danail; Howlett, Cullan; Bonvin, Camille; Llinares, Claudio; Oliveira Franco, Felipe; Power, Chris

    2018-06-01

    Most statistical inference from cosmic large-scale structure relies on two-point statistics, i.e. on the galaxy-galaxy correlation function (2PCF) or the power spectrum. These statistics capture the full information encoded in the Fourier amplitudes of the galaxy density field but do not describe the Fourier phases of the field. Here, we quantify the information contained in the line correlation function (LCF), a three-point Fourier phase correlation function. Using cosmological simulations, we estimate the Fisher information (at redshift z = 0) of the 2PCF, LCF and their combination, regarding the cosmological parameters of the standard ΛCDM model, as well as a Warm Dark Matter (WDM) model and the f(R) and Symmetron modified gravity models. The galaxy bias is accounted for at the level of a linear bias. The relative information of the 2PCF and the LCF depends on the survey volume, sampling density (shot noise) and the bias uncertainty. For a volume of 1 h^{-3} Gpc^3, sampled with points of mean density n̄ = 2 × 10^{-3} h^3 Mpc^{-3} and a bias uncertainty of 13%, the LCF improves the parameter constraints by about 20% in the ΛCDM cosmology and potentially even more in alternative models. Finally, since a linear bias only affects the Fourier amplitudes (2PCF), but not the phases (LCF), the combination of the 2PCF and the LCF can be used to break the degeneracy between the linear bias and σ8, present in 2-point statistics.

  14. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    ERIC Educational Resources Information Center

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory.

  15. [Visual field progression in glaucoma: cluster analysis].

    PubMed

    Bresson-Dumont, H; Hatton, J; Foucher, J; Fonteneau, M

    2012-11-01

    Visual field progression analysis is one of the key points in glaucoma monitoring, but distinguishing true progression from random fluctuation is sometimes difficult. There are several different algorithms but no real consensus for detecting visual field progression. The trend analysis of global indices (MD, sLV) may miss localized deficits or be affected by media opacities. Conversely, point-by-point analysis makes progression difficult to differentiate from physiological variability, particularly when the sensitivity of a point is already low. The goal of our study was to analyze visual field progression with the EyeSuite™ Octopus Perimetry Clusters algorithm in patients with no significant changes in global indices or worsening of the analysis of pointwise linear regression. We analyzed the visual fields of 162 eyes (100 patients - 58 women, 42 men, average age 66.8 ± 10.91) with ocular hypertension or glaucoma. For inclusion, at least six reliable visual fields per eye were required, and the trend analysis (EyeSuite™ Perimetry) of visual field global indices (MD and sLV) could show no significant progression. The analysis of changes in cluster mode was then performed. In a second step, eyes with statistically significant worsening of at least one of their clusters were analyzed point-by-point with the Octopus Field Analysis (OFA). Fifty-four eyes (33.33%) had a significant worsening in some clusters, while their global indices remained stable over time. In this group of patients, more advanced glaucoma was present than in the stable group (MD 6.41 dB vs. 2.87); 64.82% (35/54) of those eyes in which the clusters progressed, however, had no statistically significant change in the trend analysis by pointwise linear regression. Most software algorithms for analyzing visual field progression are essentially trend analyses of global indices, or point-by-point linear regression. This study shows the potential value of cluster trend analysis. However, for best results, it is preferable to compare the analyses of several tests in combination with a morphologic exam. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  16. Relationship between changes in vasomotor symptoms and changes in menopause-specific quality of life and sleep parameters.

    PubMed

    Pinkerton, JoAnn V; Abraham, Lucy; Bushmakin, Andrew G; Cappelleri, Joseph C; Komm, Barry S

    2016-10-01

    This study characterizes and quantifies the relationship of vasomotor symptoms (VMS) of menopause with menopause-specific quality of life (MSQOL) and sleep parameters to help predict treatment outcomes and inform treatment decision-making. Data were derived from a 12-week randomized, double-blind, placebo-controlled phase 3 trial that evaluated effects of two doses of conjugated estrogens/bazedoxifene on VMS in nonhysterectomized postmenopausal women (N = 318, mean age = 53.39) experiencing at least seven moderate to severe hot flushes (HFs) per day or at least 50 per week. Repeated measures models were used to determine relationships between HF frequency and severity and outcomes on the Menopause-Specific Quality of Life questionnaire and the Medical Outcomes Study sleep scale. Sensitivity analyses were performed to check assumptions of linearity between VMS and outcomes. Frequency and severity of HFs showed approximately linear relationships with MSQOL and sleep parameters. Sensitivity analyses supported assumptions of linearity. The largest changes associated with a reduction of five HFs and a 0.5-point decrease in severity occurred in the Menopause-Specific Quality of Life vasomotor functioning domain (0.78 for number of HFs and 0.98 for severity) and the Medical Outcomes Study sleep disturbance (7.38 and 4.86) and sleep adequacy (-5.60 and -4.66) domains and the two overall sleep problems indices (SPI: 5.17 and 3.63; SPII: 5.82 and 3.83). Frequency and severity of HFs have an approximately linear relationship with MSQOL and sleep parameters-that is, improvements in HFs are associated with improvements in MSQOL and sleep. Such relationships may enable clinicians to predict changes in sleep and MSQOL expected from various VMS treatments.

  17. Point-by-point model calculation of the prompt neutron multiplicity distribution ν(A) in the incident neutron energy range of multi-chance fission

    NASA Astrophysics Data System (ADS)

    Tudora, Anabella; Hambsch, Franz-Josef; Tobosaru, Viorel

    2017-09-01

    Prompt neutron multiplicity distributions ν(A) are required for prompt emission correction of double energy (2E) measurements of fission fragments to determine pre-neutron fragment properties. The lack of experimental ν(A) data, especially at incident neutron energies (En) where multi-chance fission occurs, imposes the use of ν(A) predicted by models. The Point-by-Point (PbP) model of prompt emission is able to provide the individual ν(A) of the compound nuclei of the main and secondary nucleus chains undergoing fission at a given En. The total ν(A) is obtained by averaging these individual ν(A) over the probabilities of fission chances (expressed as total and partial fission cross-section ratios). An indirect validation of the total ν(A) results is proposed. At high En, above 70 MeV, the PbP results for the individual ν(A) of the first few nuclei of the main and secondary nucleus chains exhibit an almost linear increase. This shape is explained by the damping of shell effects entering the super-fluid expression of the level density parameters, which tend to approach their asymptotic values for most of the fragments. This fact leads to a smooth and almost linear increase of fragment excitation energy with the mass number, which is reflected in a smooth and almost linear behaviour of ν(A).

  18. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
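
    A toy sketch of the scheduling idea described above: state-space matrices stored at trim points are interpolated by operating point before each Kalman predict/update step. The matrices, trim schedule, and noise covariances are invented stand-ins, not the NASA engine model:

```python
# Piecewise linear Kalman filtering with operating-point-scheduled matrices
import numpy as np

trim_pts = np.array([0.0, 0.5, 1.0])                  # operating-point parameter
A_db = np.stack([np.eye(2) * a for a in (0.90, 0.80, 0.70)])  # A at each trim point

def interp_matrix(db, pts, op):
    """Element-wise linear interpolation of a matrix database at op."""
    i = np.clip(np.searchsorted(pts, op) - 1, 0, len(pts) - 2)
    w = (op - pts[i]) / (pts[i + 1] - pts[i])
    return (1 - w) * db[i] + w * db[i + 1]

H = np.eye(2)                                         # all states sensed (assumed)
Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(2)             # assumed noise covariances
x, P = np.zeros(2), np.eye(2)
for op, z in [(0.2, np.array([0.1, 0.0])), (0.7, np.array([0.2, 0.1]))]:
    A = interp_matrix(A_db, trim_pts, op)             # scheduled dynamics
    x, P = A @ x, A @ P @ A.T + Q                     # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P
print(x)
```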

  19. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of the statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method doesn't use the number of parameters as complexity measure of the approximating functions but their Vapnik-Chervonenkis-dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
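
    A minimal sketch of the information-criterion route discussed above: fit cubic B-spline curves with increasing numbers of control points and pick the AIC minimizer (BIC computed alongside). The Gaussian-error AIC/BIC forms and the test curve are assumptions:

```python
# Choosing the number of B-spline control points by AIC/BIC
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

best = None
for n_interior in range(1, 20):
    t = np.linspace(0, 1, n_interior + 2)[1:-1]       # interior knots
    spl = LSQUnivariateSpline(x, y, t, k=3)
    n_ctrl = n_interior + 4                           # control points of a cubic
    rss = float(np.sum((spl(x) - y) ** 2))
    aic = x.size * np.log(rss / x.size) + 2 * n_ctrl
    bic = x.size * np.log(rss / x.size) + n_ctrl * np.log(x.size)
    if best is None or aic < best[0]:
        best = (aic, bic, n_ctrl)
print("AIC-optimal number of control points:", best[2])
```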

  20. Addressing the unemployment-mortality conundrum: non-linearity is the answer.

    PubMed

    Bonamore, Giorgio; Carmignani, Fabrizio; Colombo, Emilio

    2015-02-01

    The effect of unemployment on mortality is the object of a lively literature. However, this literature is characterized by sharply conflicting results. We revisit this issue and suggest that the relationship might be non-linear. We use data for 265 territorial units (regions) within 23 European countries over the period 2000-2012 to estimate a multivariate regression of mortality. The estimating equation allows for a quadratic relationship between unemployment and mortality. We control for various other determinants of mortality at regional and national level and we include region-specific and time-specific fixed effects. The model is also extended to account for the dynamic adjustment of mortality and possible lagged effects of unemployment. We find that the relationship between mortality and unemployment is U shaped. In the benchmark regression, when the unemployment rate is low, at 3%, an increase by one percentage point decreases average mortality by 0.7%. As unemployment increases, the effect decays: when the unemployment rate is 8% (sample average) a further increase by one percentage point decreases average mortality by 0.4%. The effect changes sign, turning from negative to positive, when unemployment is around 17%. When the unemployment rate is 25%, a further increase by one percentage point raises average mortality by 0.4%. Results hold for different causes of death and across different specifications of the estimating equation. We argue that the non-linearity arises because the level of unemployment affects the psychological and behavioural response of individuals to worsening economic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
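
    The reported sign change follows directly from the quadratic specification; a sketch of the arithmetic, with m mortality, u the unemployment rate, and β1 < 0 < β2 the estimated linear and quadratic coefficients (notation assumed here, not taken from the paper):

```latex
\ln m = \beta_1 u + \beta_2 u^2 + \text{controls}
\quad\Longrightarrow\quad
\frac{\partial \ln m}{\partial u} = \beta_1 + 2\beta_2 u ,
\qquad
u^* = -\frac{\beta_1}{2\beta_2} \approx 17\%
```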

  1. Procedures for generation and reduction of linear models of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Cwynar, D. S.

    1978-01-01

    A real-time hybrid simulation of the Pratt & Whitney F100-PW-100 turbofan engine was used for linear-model generation. The linear models were used to analyze the effect of disturbances about an operating point on the dynamic performance of the engine. A procedure that disturbs, samples, and records the state and control variables was developed. For large systems, such as the F100 engine, the state vector is large and may contain high-frequency information not required for control. Thus, reducing the full-state model to a reduced-order model may be a practical approach to simplifying the control design. A reduction technique was developed to generate reduced-order models. Selected linear and nonlinear output responses to exhaust-nozzle area and main-burner fuel flow disturbances are presented for comparison.

  2. The component slope linear model for calculating intensive partial molar properties /application to waste glasses and aluminate solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, Jacob G.

    2013-01-11

    Partial molar properties are the changes occurring when the fraction of one component is varied while the fractions of all other component mole fractions change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in fraction of one component must be offset with a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on a graph of this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. This model is demonstrated by applying it to the constant pressure heat capacity data from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system, a system that simplifies Hanford nuclear waste. The partial molar properties of H{sub 2}O, NaOH, and NaAl(OH){sub 4} are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM model has been previously used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperature of the glass components.
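
    For reference, the thermodynamic quantity the CSLM coefficients are being identified with is the standard partial molar property. For a molar property P of a mixture with n_i moles of component i and n = Σ n_i total moles:

```latex
\bar{P}_i \;\equiv\; \left(\frac{\partial (nP)}{\partial n_i}\right)_{T,\,p,\,n_{j\neq i}}
```

    Varying n_i while holding the other mole numbers fixed traces exactly the mole-fraction path described above: x_i increases while the remaining fractions shrink proportionally.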

  3. The Component Slope Linear Model for Calculating Intensive Partial Molar Properties: Application to Waste Glasses and Aluminate Solutions - 13099

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, Jacob G.

    2013-07-01

    Partial molar properties are the changes occurring when the fraction of one component is varied while the fractions of all other component mole fractions change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in fraction of one component must be offset with a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on a graph of this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. This model is demonstrated by applying it to the constant pressure heat capacity data from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system, a system that simplifies Hanford nuclear waste. The partial molar properties of H{sub 2}O, NaOH, and NaAl(OH){sub 4} are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM model has been previously used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperature of the glass components. (authors)

  4. Factors relating to windblown dust in associations between ...

    EPA Pesticide Factsheets

    Introduction: In effect estimates of city-specific PM2.5-mortality associations across United States (US), there exists a substantial amount of spatial heterogeneity. Some of this heterogeneity may be due to mass distribution of PM; areas where PM2.5 is likely to be dominated by large size fractions (above 1 micron; e.g., the contribution of windblown dust), may have a weaker association with mortality. Methods: Log rate ratios (betas) for the PM2.5-mortality association—derived from a model adjusting for time, an interaction with age-group, day of week, and natural splines of current temperature, current dew point, and unconstrained temperature at lags 1, 2, and 3, for 313 core-based statistical areas (CBSA) and their metropolitan divisions (MD) over 1999-2005—were used as the outcome. Using inverse variance weighted linear regression, we examined change in log rate ratios in association with PM10-PM2.5 correlation as a marker of windblown dust/higher PM size fraction; linearity of associations was assessed in models using splines with knots at quintile values. Results: Weighted mean PM2.5 association (0.96 percent increase in total non-accidental mortality for a 10 ug/m3 increment in PM2.5) increased by 0.34 (95% confidence interval: 0.20, 0.48) per interquartile change (0.25) in the PM10-PM2.5 correlation, and explained approximately 8% of the observed heterogeneity; the association was linear based on spline analysis. Conclusions: Preliminary results pro

  5. Modeling the dynamics of a phreatic eruption based on a tilt observation: Barrier breakage leading to the 2014 eruption of Mount Ontake, Japan

    NASA Astrophysics Data System (ADS)

    Maeda, Yuta; Kato, Aitaro; Yamanaka, Yoshiko

    2017-02-01

    Although phreatic eruptions are common volcanic phenomena that sometimes result in significant disasters, their dynamics are poorly understood. In this study, we address the dynamics of the phreatic eruption of Mount Ontake, Japan, in 2014 based on analyses of a tilt change observed immediately (450 s) before the eruption onset. We conducted two sets of analysis: a waveform inversion and a modified phase-space analysis. Our waveform inversion of the tilt signal points to a vertical tensile crack at a depth of 1100 m. Our modified phase-space analysis suggests that the tilt change was at first a linear function in time that then switched to exponential growth. We constructed simple analytical models to explain these temporal functions. The linear function was explained by the boiling of underground water controlled by a constant heat supply from a greater depth. The exponential function was explained by the decompression-induced boiling of water and the upward Darcy flow of the water vapor through a permeable region of small cracks that were newly created in response to ongoing boiling. We interpret that this region was intact prior to the start of the tilt change, and thus, it has acted as a permeability barrier for the upward migration of fluids; it was a breakage of this barrier that led to the eruption.

  6. Quadratic band touching points and flat bands in two-dimensional topological Floquet systems

    NASA Astrophysics Data System (ADS)

    Du, Liang; Zhou, Xiaoting; Fiete, Gregory A.

    2017-01-01

    In this paper we theoretically study, using Floquet-Bloch theory, the influence of circularly and linearly polarized light on two-dimensional band structures with Dirac and quadratic band touching points, and flat bands, taking the nearest neighbor hopping model on the kagome lattice as an example. We find circularly polarized light can invert the ordering of this three-band model, while leaving the flat band dispersionless. We find a small gap is also opened at the quadratic band touching point by two-photon and higher order processes. By contrast, linearly polarized light splits the quadratic band touching point (into two Dirac points) by an amount that depends only on the amplitude and polarization direction of the light, independent of the frequency, and generally renders dispersion to the flat band. The splitting is perpendicular to the direction of the polarization of the light. We derive an effective low-energy theory that captures these key results. Finally, we compute the frequency dependence of the optical conductivity for this three-band model and analyze the various interband contributions of the Floquet modes. Our results suggest strategies for optically controlling band structure and interaction strength in real systems.

  7. On 3D minimal massive gravity

    NASA Astrophysics Data System (ADS)

    Alishahiha, Mohsen; Qaemmaqami, Mohammad M.; Naseh, Ali; Shirzad, Ahmad

    2014-12-01

    We study linearized equations of motion of the newly proposed three dimensional gravity, known as minimal massive gravity, using its metric formulation. By making use of a redefinition of the parameters of the model, we observe that the resulting linearized equations are exactly the same as that of TMG. In particular the model admits logarithmic modes at critical points. We also study several vacuum solutions of the model, specially at a certain limit where the contribution of Chern-Simons term vanishes.

  8. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For a particular case of a linear accumulation model and absolutely dated tie points an analytical solution is found suggesting the Beta-distributed probability density on age estimates along the length of a proxy archive. In a general situation of uncertainties in the ages of the tie points the proposed method employs MCMC simulations of age-depth profiles yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to a more general case of a time-varying expected accumulation between the tie points. The approach is illustrated by using two ice and two lake/marine sediment cores representing the typical examples of paleoproxy archives with age models based on tie points of mixed origin.
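
    A minimal sketch of the simulation idea for the analytically tractable case mentioned above (linear expected accumulation, absolutely dated tie points): Gamma-process increments normalized to the tie points give Dirichlet cumulative fractions, hence Beta-distributed ages at any fixed depth. The shape parameter and layer count are illustrative assumptions:

```python
# Monte Carlo age uncertainty between two exactly dated tie points
import numpy as np

rng = np.random.default_rng(2)
n_layers, n_sim = 100, 5000
t_top, t_bottom = 0.0, 1000.0            # tie-point ages (years), assumed exact

# Gamma increments per layer; normalizing makes cumulative fractions
# Dirichlet, so the age at each depth is Beta-distributed.
inc = rng.gamma(shape=2.0, scale=1.0, size=(n_sim, n_layers))
frac = np.cumsum(inc, axis=1) / inc.sum(axis=1, keepdims=True)
ages = t_top + frac * (t_bottom - t_top)

mid = ages[:, n_layers // 2]             # age ensemble at mid-depth
lo, hi = np.percentile(mid, [2.5, 97.5])
print(f"mid-depth age 95% interval: [{lo:.0f}, {hi:.0f}] years")
```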

  9. Using operations research to plan the British Columbia registered nurses' workforce.

    PubMed

    Lavieri, Mariel S; Regan, Sandra; Puterman, Martin L; Ratner, Pamela A

    2008-11-01

    The authors explore the power and flexibility of using an operations research methodology known as linear programming to support health human resources (HHR) planning. The model takes as input estimates of the future need for healthcare providers and, in contrast to simulation, compares all feasible strategies to identify a long-term plan for achieving a balance between supply and demand at the least cost to the system. The approach is illustrated by using it to plan the British Columbia registered nurse (RN) workforce over a 20-year horizon. The authors show how the model can be used for scenario analysis by investigating the impact of decreasing attrition from educational programs, changing RN-to-manager ratios in direct care and exploring how other changes might alter planning recommendations. In addition to HHR policy recommendations, their analysis also points to new research opportunities. Copyright © 2008 Longwoods Publishing.
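
    A toy sketch of the kind of linear program described: choose yearly training intakes to meet projected RN demand at minimum cost, with attrition applied to the existing workforce. All numbers, costs, and the simplified dynamics are invented for illustration, not drawn from the British Columbia model:

```python
# Workforce planning as a linear program (scipy.optimize.linprog)
import numpy as np
from scipy.optimize import linprog

T = 5                                    # planning years
demand = np.array([1000, 1050, 1100, 1180, 1250])
attrition = 0.05                         # fraction of workforce lost per year (assumed)
w0 = 980.0                               # current workforce
cost_train, cost_surplus = 10.0, 1.0     # relative unit costs (assumed)

# Variables: x = [intake_1..intake_T, surplus_1..surplus_T]
# Workforce_t = w0*(1-a)^t + sum_{s<=t} intake_s*(1-a)^(t-s)
# Constraint: Workforce_t - surplus_t = demand_t (meet demand, count any excess)
A_eq = np.zeros((T, 2 * T))
b_eq = np.zeros(T)
for t in range(T):
    for s in range(t + 1):
        A_eq[t, s] = (1 - attrition) ** (t - s)
    A_eq[t, T + t] = -1.0
    b_eq[t] = demand[t] - w0 * (1 - attrition) ** (t + 1)

c = np.r_[np.full(T, cost_train), np.full(T, cost_surplus)]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * T), method="highs")
print(np.round(res.x[:T]))               # recommended yearly intakes
```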

  10. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    NASA Astrophysics Data System (ADS)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson (LNP) process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point-process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
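
    A minimal sketch of the linear-nonlinear-Poisson (LNP) turn model named above: the stimulus is passed through a linear filter, a static nonlinearity sets an instantaneous turn rate, and turns are drawn (approximately) as a Poisson process. The filter shape and exponential nonlinearity are assumptions:

```python
# LNP model of larval turning: linear filter -> static nonlinearity -> Poisson
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.1, 600.0                        # seconds
t = np.arange(0, T, dt)
stim = rng.standard_normal(t.size)        # white-noise stimulus (e.g. light level)

kern = np.exp(-np.arange(0, 5, dt) / 1.0)         # exponential linear filter
kern /= kern.sum()
drive = np.convolve(stim, kern, mode="full")[: t.size]

rate = 0.1 * np.exp(1.5 * drive)          # exponential static nonlinearity (turns/s)
turns = rng.random(t.size) < rate * dt    # Bernoulli approximation to Poisson
print(f"{turns.sum()} turns in {T:.0f} s, mean rate {turns.sum()/T:.3f}/s")
```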

  11. Note: Wide-operating-range control for thermoelectric coolers.

    PubMed

    Peronio, P; Labanca, I; Ghioni, M; Rech, I

    2017-11-01

    A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.

  12. Note: Wide-operating-range control for thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Peronio, P.; Labanca, I.; Ghioni, M.; Rech, I.

    2017-11-01

    A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.

  13. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    PubMed

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur in MMSE scores among AD patients. Our finding supports the clinical belief of the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
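
    A sketch of the test logic for a single series: fit a constant-slope model and a broken-stick model with a grid search over the change point, form a likelihood-ratio-type statistic, and calibrate it by parametric bootstrap under the null, since the change point is undefined under H0. The random-effects structure and MMSE specifics of the paper are omitted:

```python
# Parametric-bootstrap test: linear decline vs. broken-stick with change point
import numpy as np

def rss_linear(t, y):
    X = np.c_[np.ones_like(t), t]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2)), X, beta

def rss_broken(t, y):
    best = np.inf
    for tau in t[2:-2]:                               # candidate change points
        X = np.c_[np.ones_like(t), t, np.maximum(t - tau, 0.0)]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        best = min(best, float(np.sum((y - X @ beta) ** 2)))
    return best

def changepoint_test(t, y, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    rss0, X, beta = rss_linear(t, y)
    stat = len(t) * np.log(rss0 / rss_broken(t, y))   # LR-type statistic
    sigma = np.sqrt(rss0 / (len(t) - 2))
    null = np.empty(n_boot)
    for b in range(n_boot):                           # simulate under H0
        yb = X @ beta + sigma * rng.standard_normal(len(t))
        null[b] = len(t) * np.log(rss_linear(t, yb)[0] / rss_broken(t, yb))
    return stat, float(np.mean(null >= stat))         # statistic and p-value

t = np.arange(10.0)
y = 28 - 0.3 * t - 1.5 * np.maximum(t - 6, 0) + np.random.default_rng(4).normal(0, 0.4, 10)
print(changepoint_test(t, y))
```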

  14. Change detection of riverbed movements using river cross-sections and LiDAR data

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Höfle, Bernhard; Mandlburger, Gottfried; Rutzinger, Martin

    2010-05-01

    Today, airborne LiDAR-derived digital terrain models (DTMs) are used for several purposes in different scientific disciplines, such as hydrology, geomorphology or archaeology. In the field of river geomorphology, LiDAR data sets can provide information on the riverine vegetation, the level and boundary of the water body, the elevation of the riparian foreland and their roughness. The LiDAR systems in use for topographic data acquisition mainly operate with wavelengths of at least 1064 nm and, thus, are not able to penetrate water. LiDAR sensors with two wavelengths are available (bathymetric LiDAR), but they can only provide elevation information on riverbeds or lakes if the water is clear and the minimum water depth exceeds 1.5 m. In small and shallow rivers it is impossible to collect information on the riverbed, regardless of the LiDAR sensor used. In this article, we present a method to derive a high-resolution DTM of the riverbed and to combine it with the LiDAR DTM, resulting in a watercourse DTM (DTM-W), as a basis for calculating the changes in the riverbed over several years. To obtain such a DTM-W we use river cross-sections acquired by terrestrial survey or echo-sounding. First, water and land have to be differentiated. A highly accurate water surface can be derived by using a water surface delineation algorithm, which incorporates the amplitude information of the LiDAR point cloud and additional geometrical features (e.g. local surface roughness). The second step is to calculate a thalweg line, which is the lowest flow path in the riverbed. This is achieved by extracting the lowest point of each river cross-section and fitting a B-spline curve through those points. In the next step, the centerline of the river is calculated by applying a shrinking algorithm to the water boundary polygon. By averaging the thalweg line and the centerline, a main flow path line can be computed. Subsequently, a dense array of 2D-profiles perpendicular to the flow path line is defined and the heights are computed by linear interpolation of the original cross-sections. Thus, a very dense 3D point cloud of the riverbed is obtained, from which a grid model of the riverbed can be calculated by applying any suitable interpolation technique such as triangulation, linear prediction or inverse distance mapping. In a final step, the riverbed model and the LiDAR DTM are combined, resulting in a watercourse DTM. By computing different DTM-Ws from multiple cross-section data sets, the volume and the magnitude of changes in the riverbed can be estimated. Hence, the erosion or accumulation areas and their volume changes over several years can be quantified.

  15. A genetic algorithm approach to estimate glacier mass variations from GRACE data

    NASA Astrophysics Data System (ADS)

    Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten

    2017-04-01

    The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method as an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides the mass-change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules such as the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization into the overall optimization problem. Based on this novel approach, estimates of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared with existing results and alternative inversion methods.
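
    A toy version of the idea can be sketched as follows: point-mass depth and magnitude are treated as genes, and a plain generational GA minimizes a Tikhonov-regularized misfit. The 1-D gravity forward model, the regularization weight, and all numeric constants are illustrative assumptions standing in for the actual K-band range-rate functional.

```python
import numpy as np

rng = np.random.default_rng(1)
G = 6.674e-11                                  # gravitational constant
obs_x = np.linspace(-50e3, 50e3, 40)           # along-track positions [m]

def forward(depth, mass):
    # Vertical attraction of a single buried point mass (toy forward model).
    r2 = obs_x ** 2 + depth ** 2
    return G * mass * depth / r2 ** 1.5

truth = forward(12e3, 5e12)                    # "true" glacier mass change
data = truth + rng.normal(0, 5e-8, obs_x.size)

def fitness(genes, lam=1e-40):
    # Negative Tikhonov-regularized misfit; lam balances the toy scales.
    depth, mass = genes
    resid = data - forward(depth, mass)
    return -(np.sum(resid ** 2) + lam * mass ** 2)

# Plain generational GA: tournament selection, blend crossover, mutation.
pop = np.column_stack([rng.uniform(2e3, 40e3, 60), rng.uniform(1e11, 1e13, 60)])
for _ in range(200):
    fit = np.array([fitness(g) for g in pop])
    idx = [max(rng.integers(0, 60, 3), key=lambda i: fit[i]) for _ in range(60)]
    parents = pop[idx]
    w = rng.uniform(0, 1, (60, 1))
    children = w * parents + (1 - w) * parents[rng.permutation(60)]
    children *= rng.normal(1.0, 0.02, children.shape)   # Gaussian mutation
    pop = children

best = pop[np.argmax([fitness(g) for g in pop])]
print(f"estimated depth {best[0] / 1e3:.1f} km, mass {best[1]:.2e} kg")
```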

  16. Linear control of a boiler-turbine unit: analysis and design.

    PubMed

    Tan, Wen; Fang, Fang; Tian, Liang; Fu, Caifen; Liu, Jizhen

    2008-04-01

    Linear control of a boiler-turbine unit is discussed in this paper. Based on the nonlinear model of the unit, this paper analyzes the nonlinearity of the unit, and selects the appropriate operating points so that the linear controller can achieve wide-range performance. Simulation and experimental results at the No. 4 Unit at the Dalate Power Plant show that the linear controller can achieve the desired performance under a specific range of load variations.

  17. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, generating TINs, and mapping texture, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper focuses mainly on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm is presented, which performs per-pixel linear interpolation along a loop-line buffer. The experimental data derive from a point cloud of a stone lion situated in front of the west gate of Henan Polytechnic University. The modeling workflow consists of three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, an obvious fissure line appears in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels in the two adjacent textures with various algorithms; however, these algorithms have limited effect and the fissure line persists. The algorithm presented here corrects the uneven gray levels of the two adjacent textures: the fissure line in the overlapping textures is eliminated, and the gray transition in the overlapping section becomes smoother.
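
    The per-pixel linear interpolation itself reduces to a weighted blend across the seam buffer. The sketch below, assuming two textures that share a fixed-width column overlap, ramps the blend weights linearly from one image to the other; the loop-line geometry of the actual algorithm is simplified to a straight vertical seam.

```python
import numpy as np

def blend_overlap(tex_a, tex_b, overlap_cols):
    """Feather two textures that share `overlap_cols` columns at their seam."""
    h, w = tex_a.shape[:2]
    out = np.concatenate([tex_a[:, :-overlap_cols],
                          np.zeros((h, overlap_cols) + tex_a.shape[2:]),
                          tex_b[:, overlap_cols:]], axis=1)
    # Per-pixel linear weights across the buffer: 1 -> 0 for A, 0 -> 1 for B.
    alpha = np.linspace(1.0, 0.0, overlap_cols)[None, :, None]
    a = tex_a[:, -overlap_cols:].astype(float)
    b = tex_b[:, :overlap_cols].astype(float)
    out[:, w - overlap_cols:w] = alpha * a + (1 - alpha) * b
    return out.astype(tex_a.dtype)

# Two synthetic gray-shifted textures sharing a 32-pixel overlap.
rng = np.random.default_rng(2)
base = rng.integers(80, 120, (64, 128, 3))
tex_a = np.clip(base + 15, 0, 255).astype(np.uint8)    # brighter exposure
tex_b = np.clip(base - 15, 0, 255).astype(np.uint8)    # darker exposure
mosaic = blend_overlap(tex_a, tex_b, 32)
print(mosaic.shape)  # (64, 224, 3): fissure line replaced by a smooth ramp
```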

  18. Vegetation anomalies caused by antecedent precipitation in most of the world

    NASA Astrophysics Data System (ADS)

    Papagiannopoulou, C.; Miralles, D. G.; Dorigo, W. A.; Verhoest, N. E. C.; Depoorter, M.; Waegeman, W.

    2017-07-01

    Quantifying environmental controls on vegetation is critical to predict the net effect of climate change on global ecosystems and the subsequent feedback on climate. Following a non-linear Granger causality framework based on a random forest predictive model, we exploit the current wealth of multi-decadal satellite data records to uncover the main drivers of monthly vegetation variability at the global scale. Results indicate that water availability is the dominant factor driving vegetation globally: about 61% of the vegetated surface was primarily water-limited during 1981-2010. This included semiarid climates but also transitional ecoregions. Intra-annually, temperature controls Northern Hemisphere deciduous forests during the growing season, while antecedent precipitation largely dominates vegetation dynamics during the senescence period. The uncovered dependency of global vegetation on water availability is substantially larger than previously reported. This is due to the ability of the framework to (1) disentangle the co-linearities between radiation/temperature and precipitation, and (2) quantify non-linear impacts of climate on vegetation. Our results reveal a prolonged effect of precipitation anomalies in dry regions: due to the long memory of soil moisture and the cumulative, non-linear response of vegetation, water-limited regions show sensitivity to the values of precipitation occurring three months earlier. Meanwhile, the impacts of temperature and radiation anomalies are more immediate and dissipate quickly, pointing to a higher resilience of vegetation to these anomalies. Despite being infrequent by definition, hydro-climatic extremes are responsible for up to 10% of the vegetation variability during the 1981-2010 period in certain areas, particularly in water-limited ecosystems. Our approach is a first step towards a quantitative comparison of the resistance and resilience signature of different ecosystems, and can be used to benchmark Earth system models in their representations of past vegetation sensitivity to changes in climate.
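
    The core test of such a framework can be illustrated with a small sketch: a random forest predicts a vegetation index from lagged climate predictors, and precipitation is flagged as (non-linearly) Granger-causal if removing its lags degrades out-of-sample skill. The synthetic data, the three-month lag window, and the single train/test split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 360                                    # 30 years of monthly data
precip = rng.gamma(2.0, 30.0, n)
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
# Vegetation responds non-linearly to precipitation up to 3 months earlier.
ndvi = np.tanh(0.01 * (precip + np.roll(precip, 1) + np.roll(precip, 3))) \
       + 0.01 * temp + rng.normal(0, 0.02, n)

def lagged(x, lags):
    # Lag-k predictor columns; the first max(lag) rows are dropped below
    # to discard the wrap-around introduced by np.roll.
    return np.column_stack([np.roll(x, k) for k in lags])

lags = [1, 2, 3]
X_full = np.column_stack([lagged(precip, lags), lagged(temp, lags)])[3:]
X_base = lagged(temp, lags)[3:]            # baseline: no precipitation lags
y = ndvi[3:]
split = int(0.7 * y.size)

def oos_r2(X):
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X[:split], y[:split])
    return r2_score(y[split:], rf.predict(X[split:]))

print(f"baseline R2 {oos_r2(X_base):.3f}, with precip lags {oos_r2(X_full):.3f}")
```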

  19. Estimating population trends with a linear model

    USGS Publications Warehouse

    Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.

    2003-01-01

    We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design-based, rather than model-based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, its flexibility, and the fact that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
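
    In the design-based spirit described above, a trend estimate can be sketched as a per-site regression slope averaged across permanent locations, with a t-interval taken over sites. The equal site weighting, the log-linear trend, and the simulated missing surveys below are assumptions for illustration, not the authors' exact estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
years = np.arange(2000, 2011)
n_sites = 25
# Simulated counts: a -2%/yr trend with lognormal site-by-year variation.
counts = np.exp(np.log(50) - 0.02 * (years - 2000))[None, :] \
         * rng.lognormal(0, 0.3, (n_sites, years.size))

slopes = []
for site in counts:
    keep = rng.random(years.size) > 0.25       # surveys may be missed
    b = np.polyfit(years[keep], np.log(site[keep]), 1)[0]
    slopes.append(b)
slopes = np.array(slopes)

# Design-based interval: t-distribution over the per-site slopes.
mean, sem = slopes.mean(), stats.sem(slopes)
ci = stats.t.interval(0.95, df=n_sites - 1, loc=mean, scale=sem)
print(f"trend {100 * mean:.2f} %/yr, 95% CI [{100 * ci[0]:.2f}, {100 * ci[1]:.2f}]")
```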

  20. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch, as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
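
    A per-pixel third-order polynomial NUC can be sketched directly. The fragment below, assuming a synthetic focal plane imaged at several known calibration flux levels, fits one cubic per pixel mapping raw response back to the reference flux and reports the residual spatial non-uniformity.

```python
import numpy as np

rng = np.random.default_rng(5)
h, w, levels = 64, 64, 6
flux = np.linspace(0.1, 1.0, levels)                 # reference radiance levels

# Simulate per-pixel gain/offset mismatch plus a mild global nonlinearity.
gain = rng.normal(1.0, 0.05, (h, w))
offset = rng.normal(0.0, 0.02, (h, w))
raw = gain[..., None] * flux + offset[..., None] + 0.08 * flux ** 2

# Fit a 3rd-order polynomial per pixel: raw response -> reference flux.
coeffs = np.empty((h, w, 4))
for i in range(h):
    for j in range(w):
        coeffs[i, j] = np.polyfit(raw[i, j], flux, 3)

# Apply the correction and measure the remaining spatial non-uniformity.
corrected = np.stack([np.polyval(coeffs[i, j], raw[i, j])
                      for i in range(h) for j in range(w)]).reshape(h, w, levels)
residual_nu = corrected.std(axis=(0, 1)).max()
print(f"max residual spatial non-uniformity: {residual_nu:.2e}")
```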

  1. Modeling of time trends and interactions in vital rates using restricted regression splines.

    PubMed

    Heuer, C

    1997-03-01

    For the analysis of time trends in incidence and mortality rates, the age-period-cohort (apc) model has become a widely accepted method. The considered data are arranged in a two-way table by age group and calendar period, which are mostly subdivided into 5- or 10-year intervals. The disadvantage of this approach is the loss of information through data aggregation and the problem of estimating interactions in the two-way layout without replications. In this article we show how splines can be useful when yearly data, i.e., 1-year age groups and 1-year periods, are given. The estimated spline curves are still smooth and represent yearly changes in the time trends. Further, it is straightforward to include interaction terms via the tensor product of the spline functions. If the data are given in a nonrectangular table, e.g., 5-year age groups and 1-year periods, the period and cohort variables can be parameterized by splines, while the age variable is parameterized as fixed effect levels, which leads to a semiparametric apc model. An important methodological issue in developing the nonparametric and semiparametric models is the stability of the estimated spline curve at the boundaries. Here cubic regression splines will be used, which are constrained to be linear in the tails. Another point of importance is the nonidentifiability problem due to the linear dependency of the three time variables. This will be handled by decomposing the basis of each spline by orthogonal projection into constant, linear, and nonlinear terms, as suggested by Holford (1983, Biometrics 39, 311-324) for the traditional apc model. The advantage of using splines for yearly data compared to the traditional approach for aggregated data is the more accurate curve estimation for nonlinear trend changes and the simple way of modeling interactions between the time variables. The method will be demonstrated with hypothetical data as well as with cancer mortality data.
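
    The boundary-constrained basis is standard enough to sketch. The following implements the usual restricted (natural) cubic spline basis in the truncated-power parameterization, which is cubic between the boundary knots and linear beyond them; the knot placement and the toy yearly data are assumptions for illustration.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: linear in the tails beyond the
    boundary knots (truncated-power / Harrell-style parameterization)."""
    x = np.asarray(x, float)
    t = np.asarray(knots, float)
    k = len(t)
    p = lambda u: np.clip(u, 0.0, None) ** 3          # truncated cubic
    cols = [np.ones_like(x), x]
    for j in range(k - 2):
        cols.append(p(x - t[j])
                    - p(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + p(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
    return np.column_stack(cols)

# Fit yearly log rates with a trend that is curved inside, linear in the tails.
rng = np.random.default_rng(6)
years = np.arange(1950, 2001)
log_rate = 0.02 * (years - 1975) - 0.0002 * (years - 1975) ** 2 \
           + rng.normal(0, 0.05, years.size)
X = rcs_basis(years, np.quantile(years, [0.05, 0.35, 0.65, 0.95]))
beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
print(f"basis shape {X.shape}, fitted coefficients {np.round(beta, 4)}")
```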

  2. EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter

    PubMed Central

    Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.

    2012-01-01

    A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons' own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and those of the Wiener cascade filter were comparable. PMID:21659018

  3. Variations of cosmic large-scale structure covariance matrices across parameter space

    NASA Astrophysics Data System (ADS)

    Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte

    2017-03-01

    The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.

  4. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.

  5. The 3D modeling of high numerical aperture imaging in thin films

    NASA Technical Reports Server (NTRS)

    Flagello, D. G.; Milster, Tom

    1992-01-01

    A modelling technique is described which is used to explore three dimensional (3D) image irradiance distributions formed by high numerical aperture (NA is greater than 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach that is based on a plane-wave decomposition in the exit pupil. Each plane wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves. Then the total irradiance is calculated. The model is used to show how asymmetries present in the polarized image change with the influence of a thin film through varying degrees of focus.

  6. SU-G-BRA-07: An Innovative Fiducial-Less Tracking Method for Radiation Treatment of Abdominal Tumors by Diaphragm Disparity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dick, D; Zhao, W; Wu, X

    2016-06-15

    Purpose: To investigate the feasibility of tracking abdominal tumors without the use of gold fiducial markers. Methods: In this simulation study, an abdominal 4DCT dataset, acquired previously and containing 8 phases of the breathing cycle, was used as the testing data. Two sets of digitally reconstructed radiograph (DRR) images (45 and 135 degrees) were generated for each phase. Three anatomical points along the lung-diaphragm interface on each of the DRR images were identified by cross-correlation. The gallbladder, which simulates the tumor, was contoured for each phase of the breathing cycle, and the corresponding centroid values serve as the measured center of the tumor. A linear model was created to correlate the diaphragm's disparity at the three identified anatomical points with the center of the tumor. To verify the established linear model, we sequentially removed one phase of the data (i.e., 3 anatomical points and the corresponding tumor center) and created new linear models with the remaining 7 phases. Then we substituted the eliminated phase data (disparities of the 3 anatomical points) into the corresponding model to compare the model-generated tumor center with the measured tumor center. Results: The maximum differences between the modeled and the measured centroid values across the 8 phases were 0.72, 0.29, and 0.30 pixels in the x, y, and z directions, respectively, which yielded a maximum mean-squared-error value of 0.75 pixels. The verification process, eliminating each phase in turn, produced mean-squared errors ranging from 0.41 to 1.28 pixels. Conclusion: Gold fiducial markers, which require surgical procedures to implant, are conventionally used in radiation therapy. The present work shows the feasibility of a fiducial-less tracking method for localizing abdominal tumors. Through the developed diaphragm disparity analysis, the established linear model was verified with clinically acceptable errors. The tracking method will be further investigated in real time under different radiation therapy platforms.

  7. Age-Related Changes in Corneal Astigmatism.

    PubMed

    Shao, Xu; Zhou, Kai-Jing; Pan, An-Peng; Cheng, Xue-Ying; Cai, He-Xie; Huang, Jin-Hai; Yu, A-Yong

    2017-10-01

    To analyze the changes in corneal astigmatism as a function of age and to develop a novel model estimating corneal astigmatic change according to age. This was a cross-sectional study of right eyes of 3,769 individuals. Total corneal astigmatism, keratometric astigmatism, anterior corneal astigmatism, and posterior corneal astigmatism were measured by a Scheimpflug tomographer. Smoothing fitting curves of polar values of corneal astigmatism as a function of age were drawn, and average changes in corneal astigmatism at different ages were calculated. Two turning points in the age trend of total corneal astigmatism were identified, at 36 and 69 years. The average change of total corneal astigmatism toward against-the-rule astigmatism was 0.13 diopters (D)/10 years from 18 to 35 years and 0.45 D/10 years from 36 to 68 years, and it decreased after 69 years, mainly caused by anterior corneal astigmatism. The mean magnitude of posterior corneal astigmatism was -0.33 D and exceeded 0.50 D in 14.27% of eyes. The vectorial difference between total corneal astigmatism and keratometric astigmatism was correlated with posterior corneal astigmatism, polar value of anterior corneal astigmatism, age, and corneal higher-order aberrations (r = 0.636; standard partial regression coefficients were 0.479, -0.466, 0.282, and 0.196, respectively; all P < .001). Based on the non-linear model estimating corneal astigmatic change with age, a formula was developed to calculate the recommended correction of astigmatism according to age and astigmatic type. The rate of change of total corneal astigmatism showed a non-linear trend toward against-the-rule astigmatism, which was low at young and old ages and high at middle age, and it should be taken into account when performing surgery to correct astigmatism. [J Refract Surg. 2017;33(10):696-703.]. Copyright 2017, SLACK Incorporated.

  8. A simplified dynamic model of the T700 turboshaft engine

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.

    1992-01-01

    A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.
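
    The linking of point models can be illustrated with a gain-scheduling sketch: state-space matrices identified at two operating points are interpolated by a scheduling variable such as power setting. The two toy (A, B) pairs and the forward-Euler simulation are assumptions for illustration, not the identified T700 matrices.

```python
import numpy as np

# Linear models (A, B) identified at 50% and 100% power (toy values).
ops = np.array([0.5, 1.0])
A_tab = np.array([[[-1.0, 0.2], [0.0, -0.5]],
                  [[-1.6, 0.3], [0.1, -0.9]]])
B_tab = np.array([[[0.5], [0.1]],
                  [[0.8], [0.2]]])

def scheduled(A_tab, B_tab, ops, p):
    """Linearly interpolate the identified matrices at power setting p."""
    w = np.interp(p, ops, [0.0, 1.0])
    return ((1 - w) * A_tab[0] + w * A_tab[1],
            (1 - w) * B_tab[0] + w * B_tab[1])

# Simulate a power sweep with simple forward-Euler integration.
x, dt = np.zeros(2), 0.01
for k in range(1000):
    p = 0.5 + 0.5 * k / 1000                   # ramp 50% -> 100% power
    A, B = scheduled(A_tab, B_tab, ops, p)
    x = x + dt * (A @ x + B @ np.array([1.0])) # unit fuel-flow step input
print(f"state at end of ramp: {np.round(x, 3)}")
```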

  9. A characterization of linearly repetitive cut and project sets

    NASA Astrophysics Data System (ADS)

    Haynes, Alan; Koivusalo, Henna; Walton, James

    2018-02-01

    For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.

  10. Applying the Principles of Specific Objectivity and of Generalizability to the Measurement of Change.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1987-01-01

    A natural parameterization and formalization of the problem of measuring change in dichotomous data is developed. Mathematically-exact definitions of specific objectivity are presented, and the basic structures of the linear logistic test model and the linear logistic model with relaxed assumptions are clarified. (SLD)

  11. Longitudinal associations between sibling relationship quality, parental differential treatment, and children's adjustment.

    PubMed

    Richmond, Melissa K; Stocker, Clare M; Rienks, Shauna L

    2005-12-01

    This study examined associations between changes in sibling relationships and changes in parental differential treatment and corresponding changes in children's adjustment. One hundred thirty-three families were assessed at 3 time points. Parents rated children's externalizing problems, and children reported on sibling relationship quality, parental differential treatment, and depressive symptoms. On average, older siblings were 10, 12, and 16 years old, and younger siblings were 8, 10, and 14 years old at Waves 1, 2, and 3, respectively. Results from hierarchical linear modeling indicated that as sibling relationships improved over time, children's depressive symptoms decreased over time. In addition, as children were less favored over their siblings over time, children's externalizing problems increased over time. Findings highlight the developmental interplay between the sibling context and children's adjustment. Copyright 2006 APA, all rights reserved.

  12. On determining the point of no return in climate change

    NASA Astrophysics Data System (ADS)

    van Zalinge, Brenda C.; Feng, Qing Yi; Aengenheyster, Matthias; Dijkstra, Henk A.

    2017-08-01

    Earth's global mean surface temperature has increased by about 1.0 °C over the period 1880-2015. One of the main causes is thought to be the increase in atmospheric greenhouse gases. If greenhouse gas emissions are not substantially decreased, several studies indicate that there will be a dangerous anthropogenic interference with climate by the end of this century. However, there is no good quantitative measure to determine when it is too late to start reducing greenhouse gas emissions in order to avoid such dangerous interference. In this study, we develop a method for determining a so-called point of no return for several greenhouse gas emission scenarios. The method is based on a combination of aspects of stochastic viability theory and linear response theory; the latter is used to estimate the probability density function of the global mean surface temperature. The innovative element in this approach is the applicability to high-dimensional climate models as demonstrated by the results obtained with the PlaSim model.

  13. Single axis control of ball position in magnetic levitation system using fuzzy logic control

    NASA Astrophysics Data System (ADS)

    Sahoo, Narayan; Tripathy, Ashis; Sharma, Priyaranjan

    2018-03-01

    This paper presents the design and real-time implementation of fuzzy logic control (FLC) for controlling the position of a ferromagnetic ball by manipulating the current flowing in an electromagnet, which changes the magnetic field acting on the ball. This system is highly nonlinear and open-loop unstable. Many unmeasurable disturbances also act on the system, making its control highly complex but interesting for any researcher in the control systems domain. First, the system is modeled using fundamental laws, which yields a nonlinear equation. The nonlinear model is then linearized at an operating point. The fuzzy logic controller is designed after studying the system in closed loop under PID control action. The controller is then implemented in real time in the Simulink real-time environment and tuned manually to obtain stable and robust performance. The set-point tracking performance of the FLC and PID controllers was compared and analyzed.
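
    The linearization step mentioned above is a standard exercise. The sketch below, assuming the textbook force law F = C i^2 / x^2 and illustrative constants rather than the authors' rig parameters, computes the equilibrium current and the open-loop A and B matrices, whose right-half-plane pole confirms the open-loop instability.

```python
import numpy as np

m, g, C = 0.05, 9.81, 1e-4       # ball mass [kg], gravity, magnet constant

# Equilibrium: m g = C i0^2 / x0^2 at the chosen air gap x0.
x0 = 0.01
i0 = x0 * np.sqrt(m * g / C)

# Nonlinear model: x'' = g - (C/m) i^2 / x^2. Linearize in (dx, dx', di)
# about (x0, 0, i0): A = df/dstate, B = df/dinput.
a = 2 * C * i0 ** 2 / (m * x0 ** 3)    # d(x'')/dx, positive (destabilizing)
b = -2 * C * i0 / (m * x0 ** 2)        # d(x'')/di
A = np.array([[0.0, 1.0], [a, 0.0]])
B = np.array([[0.0], [b]])

eig = np.linalg.eigvals(A)
print(f"i0 = {i0:.3f} A, open-loop poles {np.round(eig, 2)} "
      "(one in the right half-plane -> unstable)")
```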

  14. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multi linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. The dew point temperature is the temperature at which water vapor in the air condenses into liquid. It can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration, and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity, and pressure) which could be used to predict dew point temperature motivated the modeling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as further input variables. Three standard quantitative statistical performance measures, i.e., the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performance of the developed models. The results showed that applying the wind vector and weather condition as inputs along with the meteorological variables could slightly increase the predictive accuracy of the ANN and MLR models. The results also revealed that the LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.
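
    The MLR baseline is easy to reproduce in outline. The sketch below, assuming synthetic hourly records generated from the Magnus dew-point approximation, fits a multiple linear regression on temperature, humidity, and pressure and scores it with RMSE and MAE; the wind-vector and weather-condition inputs of the study are omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
T = rng.uniform(-5, 35, n)                 # air temperature [deg C]
RH = rng.uniform(20, 100, n)               # relative humidity [%]
P = rng.normal(1013, 8, n)                 # pressure [hPa]

# "True" dew point from the Magnus formula, plus sensor noise.
gamma = np.log(RH / 100) + 17.62 * T / (243.12 + T)
Td = 243.12 * gamma / (17.62 - gamma) + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(
    np.column_stack([T, RH, P]), Td, test_size=0.3, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
pred = mlr.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE {rmse:.2f} degC, MAE {mean_absolute_error(y_te, pred):.2f} degC")
```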

  15. A model for prediction of color change after tooth bleaching based on CIELAB color space

    NASA Astrophysics Data System (ADS)

    Herrera, Luis J.; Santana, Janiley; Yebra, Ana; Rivas, María. José; Pulgar, Rosa; Pérez, María. M.

    2017-08-01

    An experimental study aiming to develop a model based on the CIELAB color space for predicting color change after a tooth bleaching procedure is presented. Multivariate linear regression models were obtained to predict the post-bleaching L*, a*, b*, and W* values using the pre-bleaching L*, a*, and b* values. Moreover, univariate linear regression models were obtained to predict the variation in chroma (C*), hue angle (h°), and W*. The results demonstrate that it is possible to estimate color change when using a carbamide peroxide tooth-bleaching system. The models obtained can be applied in the clinic to predict the color change after bleaching.

  16. On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN

    NASA Astrophysics Data System (ADS)

    Patriarchi, P.; Perinotto, M.

    The Complete Linearization Method (Mihalas, 1978) consists in the determination of the radiation field (at a set of frequency points), atomic level populations, temperature, electron density, etc., by solving the system of radiative transfer, thermal equilibrium, and statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method starting from an initial guess solution. Of course, the Complete Linearization Method is more time-consuming than the previous one. But how great can this disadvantage be in the age of supercomputers? The CPU time needed to run a model can be evaluated approximately by counting the number of multiplications necessary to solve the system.

  17. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
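
    The power-fit idea behind model (2) can be sketched as a per-day log-log regression of daily volume on initial volume. The synthetic shrinkage cohort below, in which larger tumors shrink proportionally faster, is an illustrative assumption, as is the exact regression form.

```python
import numpy as np

rng = np.random.default_rng(8)
n_tumors, n_days = 35, 30
V0 = rng.lognormal(np.log(20), 0.6, n_tumors)        # initial volumes [cm3]
days = np.arange(n_days)
# Toy cohort: larger tumors shrink proportionally faster.
rate = 0.01 + 0.005 * np.log(V0)[:, None]
V = V0[:, None] * np.exp(-rate * days) * rng.lognormal(0, 0.03, (n_tumors, n_days))

# Per-day power fit: log V_d = alpha_d + beta_d * log V_0.
X = np.column_stack([np.ones(n_tumors), np.log(V0)])
coef = np.array([np.linalg.lstsq(X, np.log(V[:, d]), rcond=None)[0]
                 for d in days])                     # shape (n_days, 2)

# Predict a held-out tumor's trajectory from its initial volume alone.
v0_new = 35.0
pred = np.exp(coef[:, 0] + coef[:, 1] * np.log(v0_new))
print(f"day 0/15/29 predicted volumes: "
      f"{pred[0]:.1f}, {pred[15]:.1f}, {pred[29]:.1f} cm3")
```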

  18. Comparison of linear and nonlinear models for coherent hemodynamics spectroscopy (CHS)

    NASA Astrophysics Data System (ADS)

    Sassaroli, Angelo; Kainerstorfer, Jana; Fantini, Sergio

    2015-03-01

    A recently proposed linear time-invariant hemodynamic model for coherent hemodynamics spectroscopy (CHS) [1] relates the tissue concentrations of oxy- and deoxy-hemoglobin (outputs of the system) to given dynamics of the tissue blood volume, blood flow, and rate constant of oxygen diffusion (inputs of the system). This linear model was derived in the limit of "small" perturbations in blood flow velocity. We have extended this model to a more general model (referred to as the nonlinear extension of the original model) that yields the time-dependent changes of oxy- and deoxy-hemoglobin concentrations in response to arbitrary dynamic changes in capillary blood flow velocity. The nonlinear extension of the model relies on a general solution of the partial differential equation that governs the spatio-temporal behavior of the oxygen saturation of hemoglobin in capillaries and venules, on the basis of a dynamic (or time-resolved) blood transit time. We show preliminary results in which the CHS spectra obtained from the linear and nonlinear models are compared to quantify the limits of applicability of the linear model.

  19. [Dental arch form reverting by four-point method].

    PubMed

    Pan, Xiao-Gang; Qian, Yu-Fen; Weng, Si-En; Feng, Qi-Ping; Yu, Quan

    2008-04-01

    To explore a simple method of deriving an individual dental arch form template for wire bending. The individual dental arch form was derived by a four-point method: by defining the central points of the brackets on the bilateral lower second premolars and first molars, an individual dental arch form could be generated. The arch-form-generating procedure was then developed into computer software for printing the arch form. The four-point-method arch form was evaluated by comparison with direct model measurements on linear and angular parameters. Accuracy and reproducibility were assessed by the paired t test and the concordance correlation coefficient with the Medcalc 9.3 software package. The arch form produced by the four-point method showed good accuracy and reproducibility (the linear concordance correlation coefficient was 0.9909 and the angular concordance correlation coefficient was 0.8419). The dental arch form derived by the four-point method can reproduce the individual dental arch form.

  20. Short-term longitudinal changes in adult dental fear.

    PubMed

    Hagqvist, Outi; Tolvanen, Mimmi; Rantavuori, Kari; Karlsson, Linnea; Karlsson, Hasse; Lahti, Satu

    2018-06-26

    This study aimed to evaluate (i) longitudinal fluctuations and considerable changes in adult dental fear at five data-collection points during a 2.5-yr period and (ii) the stability of symptoms of depression in dental fear-change groups. Pilot data from the FinnBrain Birth Cohort Study, comprising 254 families expecting a baby, were used. Data-collection points (DCPs) were: 18-20 and 32-34 gestational weeks; and 3, 12, and 24 months after delivery. At baseline, 119 women and 85 men completed the Modified Dental Anxiety Scale (MDAS) questionnaire. At all DCPs, 57 (48%) women and 35 (41%) men completed the MDAS. Depression was measured using the Edinburgh Postnatal Depression Scale. Changes in MDAS were analyzed using general linear modelling for repeated measures. Stability of dental fear was assessed using dichotomized MDAS scores. Dental fear among women decreased statistically significantly in late pregnancy and increased thereafter. Among men, dental fear tended to increase in late pregnancy and decreased afterwards. Depression scores varied in the high and fluctuating fear groups, but the differences diminished towards the last DCP. Dental fear among adults experiencing a major life event does not seem to be stable. Clinicians should take this into account. The mechanisms behind these changes need further research. © 2018 Eur J Oral Sci.

  1. Do changes in residents' fear of crime impact their walking? Longitudinal results from RESIDE.

    PubMed

    Foster, Sarah; Knuiman, Matthew; Hooper, Paula; Christian, Hayley; Giles-Corti, Billie

    2014-05-01

    To examine the influence of fear of crime on walking for participants in a longitudinal study of residents in new suburbs. Participants (n=485) in Perth, Australia, completed a questionnaire about three years after moving to their neighbourhood (2007-2008), and again four years later (2011-2012). Measures included fear of crime, neighbourhood perceptions and walking (min/week). Objective environmental measures were generated for each participant's neighbourhood, defined as the 1600 m road network distance from home, at each time-point. Linear regression models examined the impact of changes in fear of crime on changes in walking, with progressive adjustment for other changes in the built environment, neighbourhood perceptions and demographics. An increase in fear of crime was associated with a decrease in residents' walking inside the local neighbourhood. For each increase in fear of crime (i.e., one level on a five-point Likert scale) total walking decreased by 22 min/week (p=0.002), recreational walking by 13 min/week (p=0.031) and transport walking by 7 min/week (p=0.064). This study provides longitudinal evidence that changes in residents' fear of crime influence their walking behaviours. Interventions that reduce fear of crime are likely to increase walking and produce public health gains. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Docosahexaenoic Acid Supplementation and Cognitive Decline in Alzheimer Disease

    PubMed Central

    Quinn, Joseph F.; Raman, Rema; Thomas, Ronald G.; Yurko-Mauro, Karin; Nelson, Edward B.; Van Dyck, Christopher; Galvin, James E.; Emond, Jennifer; Jack, Clifford R.; Weiner, Michael; Shinto, Lynne; Aisen, Paul S.

    2011-01-01

    Context Docosahexaenoic acid (DHA) is the most abundant long-chain polyunsaturated fatty acid in the brain. Epidemiological studies suggest that consumption of DHA is associated with a reduced incidence of Alzheimer disease. Animal studies demonstrate that oral intake of DHA reduces Alzheimer-like brain pathology. Objective To determine if supplementation with DHA slows cognitive and functional decline in individuals with Alzheimer disease. Design, Setting, and Patients A randomized, double-blind, placebo-controlled trial of DHA supplementation in individuals with mild to moderate Alzheimer disease (Mini-Mental State Examination scores, 14–26) was conducted between November 2007 and May 2009 at 51 US clinical research sites of the Alzheimer's Disease Cooperative Study. Intervention Participants were randomly assigned to algal DHA at a dose of 2 g/d or to identical placebo (60% were assigned to DHA and 40% were assigned to placebo). Duration of treatment was 18 months. Main Outcome Measures Change in the cognitive subscale of the Alzheimer's Disease Assessment Scale (ADAS-cog) and change in the Clinical Dementia Rating (CDR) sum of boxes. Rate of brain atrophy was also determined by volumetric magnetic resonance imaging in a subsample of participants (n = 102). Results A total of 402 individuals were randomized and a total of 295 participants completed the trial while taking study medication (DHA: 171; placebo: 124). Supplementation with DHA had no beneficial effect on rate of change on ADAS-cog score, which increased by a mean of 7.98 points (95% confidence interval [CI], 6.51–9.45 points) for the DHA group during 18 months vs 8.27 points (95% CI, 6.72–9.82 points) for the placebo group (linear mixed-effects model: P = .41). The CDR sum of boxes score increased by 2.87 points (95% CI, 2.44–3.30 points) for the DHA group during 18 months compared with 2.93 points (95% CI, 2.44–3.42 points) for the placebo group (linear mixed-effects model: P = .68). In the subpopulation of participants (DHA: 53; placebo: 49), the rate of brain atrophy was not affected by treatment with DHA. Individuals in the DHA group had a mean decline in total brain volume of 24.7 cm3 (95% CI, 21.4–28.0 cm3) during 18 months and a 1.32% (95% CI, 1.14%–1.50%) volume decline per year compared with 24.0 cm3 (95% CI, 20–28 cm3) for the placebo group during 18 months and a 1.29% (95% CI, 1.07%–1.51%) volume decline per year (P = .79). Conclusion Supplementation with DHA compared with placebo did not slow the rate of cognitive and functional decline in patients with mild to moderate Alzheimer disease. PMID:21045096

  3. Environmental factors and flow paths related to Escherichia coli concentrations at two beaches on Lake St. Clair, Michigan, 2002–2005

    USGS Publications Warehouse

    Holtschlag, David J.; Shively, Dawn; Whitman, Richard L.; Haack, Sheridan K.; Fogarty, Lisa R.

    2008-01-01

    Regression analyses and hydrodynamic modeling were used to identify environmental factors and flow paths associated with Escherichia coli (E. coli) concentrations at Memorial and Metropolitan Beaches on Lake St. Clair in Macomb County, Mich. Lake St. Clair is part of the binational waterway between the United States and Canada that connects Lake Huron with Lake Erie in the Great Lakes Basin. Linear regression, regression-tree, and logistic regression models were developed from E. coli concentration and ancillary environmental data. Linear regression models on log10 E. coli concentrations indicated that rainfall prior to sampling, water temperature, and turbidity were positively associated with bacteria concentrations at both beaches. Flow from Clinton River, changes in water levels, wind conditions, and log10 E. coli concentrations 2 days before or after the target bacteria concentrations were statistically significant at one or both beaches. In addition, various interaction terms were significant at Memorial Beach. Linear regression models for both beaches explained only about 30 percent of the variability in log10 E. coli concentrations. Regression-tree models were developed from data from both Memorial and Metropolitan Beaches but were found to have limited predictive capability in this study. The results indicate that too few observations were available to develop reliable regression-tree models. Linear logistic models were developed to estimate the probability of E. coli concentrations exceeding 300 most probable number (MPN) per 100 milliliters (mL). Rainfall amounts before bacteria sampling were positively associated with exceedance probabilities at both beaches. Flow of Clinton River, turbidity, and log10 E. coli concentrations measured before or after the target E. coli measurements were related to exceedances at one or both beaches. The linear logistic models were effective in estimating bacteria exceedances at both beaches. A receiver operating characteristic (ROC) analysis was used to determine cut points for maximizing the true positive rate prediction while minimizing the false positive rate. A two-dimensional hydrodynamic model was developed to simulate horizontal current patterns on Lake St. Clair in response to wind, flow, and water-level conditions at model boundaries. Simulated velocity fields were used to track hypothetical massless particles backward in time from the beaches along flow paths toward source areas. Reverse particle tracking for idealized steady-state conditions shows changes in expected flow paths and traveltimes with wind speeds and directions from 24 sectors. The results indicate that three to four sets of contiguous wind sectors have similar effects on flow paths in the vicinity of the beaches. In addition, reverse particle tracking was used for transient conditions to identify expected flow paths for 10 E. coli sampling events in 2004. These results demonstrate the ability to track hypothetical particles from the beaches, backward in time, to likely source areas. This ability, coupled with a greater frequency of bacteria sampling, may provide insight into changes in bacteria concentrations between source and sink areas.

  4. Information Fusion from the Point of View of Communication Theory; Fusing Information to Trade-Off the Resolution of Assessments Against the Probability of Mis-Assessment

    DTIC Science & Technology

    2013-08-19

    excellence in linear models, 2010. She successfully defended her dissertation, Linear System Design for Fusion and Compression, on Aug 13, 2013. Her work was ... measurements into canonical coordinates, scaling, and rotation; there is a water-filling interpretation; (3) the optimum design of a linear secondary channel of ... measurements to fuse with a primary linear channel of measurements maximizes a generalized Rayleigh quotient; (4) the asymptotically optimum

  5. An Airlift Hub-and-Spoke Location-Routing Model with Time Windows: Case Study of the CONUS-to-Korea Airlift Problem

    DTIC Science & Technology

    1998-03-01

    a point of embarkation to a point of debarkation. This study develops an alternative hub-and-spoke combined location-routing integer linear ... programming prototype model, and uses this model to determine what advantages a hub-and-spoke system offers, and in which scenarios it is better-suited than the ... extension on the following works: the hierarchical model of Perl and Daskin (1983), time windows features of Chan (1991), combining subtour-breaking and range

  6. Risky decision making from childhood through adulthood: Contributions of learning and sensitivity to negative feedback.

    PubMed

    Humphreys, Kathryn L; Telzer, Eva H; Flannery, Jessica; Goff, Bonnie; Gabard-Durnam, Laurel; Gee, Dylan G; Lee, Steve S; Tottenham, Nim

    2016-02-01

    Decision making in the context of risk is a complex and dynamic process that changes across development. Here, we assessed the influence of sensitivity to negative feedback (e.g., loss) and learning on age-related changes in risky decision making, both of which show unique developmental trajectories. In the present study, we examined risky decision making in 216 individuals, ranging in age from 3-26 years, using the Balloon Emotional Learning Task (BELT), a computerized task in which participants pump up a series of virtual balloons to earn points, but risk balloon explosion on each trial, which results in no points. It is important to note that there were 3 balloon conditions, signified by different balloon colors, ranging from quick- to slow-to-explode, and participants could learn the color-condition pairings through task experience. Overall, we found age-related increases in pumps made and points earned. However, in the quick-to-explode condition, there was a nonlinear adolescent peak for points earned. Follow-up analyses indicated that this adolescent phenotype occurred at the developmental intersection of linear age-related increases in learning and decreases in sensitivity to negative feedback. Adolescence was marked by intermediate values on both these processes. These findings show that a combination of linearly changing processes can result in nonlinear changes in risky decision making, the adolescent-specific nature of which is associated with developmental improvements in learning and reduced sensitivity to negative feedback. (c) 2016 APA, all rights reserved.

  7. Fecal Markers of Environmental Enteropathy and Subsequent Growth in Bangladeshi Children.

    PubMed

    Arndt, Michael B; Richardson, Barbra A; Ahmed, Tahmeed; Mahfuz, Mustafa; Haque, Rashidul; John-Stewart, Grace C; Denno, Donna M; Petri, William A; Kosek, Margaret; Walson, Judd L

    2016-09-07

    Environmental enteropathy (EE), a subclinical intestinal disorder characterized by mucosal inflammation, reduced barrier integrity, and malabsorption, appears to be associated with increased risk of stunting in children in low- and middle-income countries. Fecal biomarkers indicative of EE (neopterin [NEO], myeloperoxidase [MPO], and alpha-1-antitrypsin [AAT]) have been negatively associated with 6-month linear growth. Associations between fecal markers (NEO, MPO, and AAT) and short-term linear growth were examined in a birth cohort of 246 children in Bangladesh. Marker concentrations were categorized in stool samples based on their distribution (< first quartile, interquartile range, > third quartile), and a 10-point composite EE score was calculated. Piecewise linear mixed-effects models were used to examine the association between markers measured quarterly (in months 3-21, 3-9, and 12-21) and 3-month change in length-for-age z-score (ΔLAZ). Children with high MPO levels at quarterly time points lost significantly more LAZ per 3-month period during the second year of life than those with low MPO (ΔLAZ = -0.100; 95% confidence interval = -0.167 to -0.032). AAT and NEO were not associated with growth; however, composite EE score was negatively associated with subsequent 3-month growth. In this cohort of children from an urban setting in Bangladesh, elevated MPO levels, but not NEO or AAT levels, were associated with decreases in short-term linear growth during the second year of life, supporting previous data suggesting the relevance of MPO as a marker of EE. © The American Society of Tropical Medicine and Hygiene.

  8. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    ERIC Educational Resources Information Center

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  9. Simulated linear test applied to quantitative proteomics.

    PubMed

    Pham, T V; Jimenez, C R

    2016-09-01

    Omics studies aim to find significant changes due to biological or functional perturbation. However, gene and protein expression profiling experiments contain inherent technical variation. In discovery proteomics studies where the number of samples is typically small, technical variation plays an important role because it contributes considerably to the observed variation. Previous methods place both technical and biological variations in tightly integrated mathematical models that are difficult to adapt for different technological platforms. Our aim is to derive a statistical framework that allows the inclusion of a wide range of technical variability. We introduce a new method called the simulated linear test, or the s-test, that is easy to implement and easy to adapt for different models of technical variation. It generates virtual data points from the observed values according to a pre-defined technical distribution and subsequently employs linear modeling for significance analysis. We demonstrate the flexibility of the proposed approach by deriving a new significance test for quantitative discovery proteomics for which missing values have been a major issue for traditional methods such as the t-test. We evaluate the result on two label-free (phospho) proteomics datasets based on ion-intensity quantitation. Availability: http://www.oncoproteomics.nl/software/stest.html. Contact: t.pham@vumc.nl. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
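
    The generation of virtual data points can be sketched in a few lines. Below is an illustrative two-group version, assuming Gaussian technical noise on log intensities and anchoring the degrees of freedom to the real sample size; the published s-test supports richer noise models and missing-value handling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

def s_test(group_a, group_b, tech_sd=0.2, n_virtual=200):
    """Two-group test on log intensities with simulated technical noise."""
    # Expand each observed value into virtual data points.
    sim_a = rng.normal(group_a[:, None], tech_sd, (group_a.size, n_virtual))
    sim_b = rng.normal(group_b[:, None], tech_sd, (group_b.size, n_virtual))
    y = np.concatenate([sim_a.ravel(), sim_b.ravel()])
    x = np.concatenate([np.zeros(sim_a.size), np.ones(sim_b.size)])
    # Linear model y = b0 + b1*x; test b1 = 0 (a t-test in this simple case),
    # with degrees of freedom anchored to the real sample size.
    slope = y[x == 1].mean() - y[x == 0].mean()
    pooled = np.sqrt(0.5 * (y[x == 0].var(ddof=1) + y[x == 1].var(ddof=1)))
    se = pooled * np.sqrt(1 / group_a.size + 1 / group_b.size)
    df = group_a.size + group_b.size - 2
    return slope, 2 * stats.t.sf(abs(slope / se), df)

a = np.log2(np.array([1100.0, 950.0, 1300.0]))   # protein intensities, 3 vs 3
b = np.log2(np.array([2100.0, 2400.0, 1800.0]))
slope, p = s_test(a, b)
print(f"log2 fold change {slope:.2f}, p = {p:.4f}")
```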

  10. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.

  11. Effects of non-steroidal anti-inflammatory drug treatments on cognitive decline vary by phase of pre-clinical Alzheimer disease: findings from the randomized controlled Alzheimer's Disease Anti-inflammatory Prevention Trial.

    PubMed

    Leoutsakos, Jeannie-Marie S; Muthen, Bengt O; Breitner, John C S; Lyketsos, Constantine G

    2012-04-01

    We examined the effects of non-steroidal anti-inflammatory drugs on cognitive decline as a function of phase of pre-clinical Alzheimer disease. Given recent findings that cognitive decline accelerates as clinical diagnosis is approached, we used rate of decline as a proxy for phase of pre-clinical Alzheimer disease. We fit growth mixture models of Modified Mini-Mental State (3MS) Examination trajectories with data from 2388 participants in the Alzheimer's Disease Anti-inflammatory Prevention Trial and included class-specific effects of naproxen and celecoxib. We identified three classes: "no decline", "slow decline", and "fast decline", and examined the effects of celecoxib and naproxen on linear slope and rate of change by class. Inclusion of quadratic terms improved fit of the model (-2 log likelihood difference: 369.23; p < 0.001) but resulted in reversal of effects over time. Over 4 years, participants in the slow-decline class on placebo typically lost 6.6 3MS points, whereas those on naproxen lost 3.1 points (p-value for difference: 0.19). Participants in the fast-decline class on placebo typically lost 11.2 points, but those on celecoxib first declined and then gained points (p-value for difference from placebo: 0.04), whereas those on naproxen showed a typical decline of 24.9 points (p-value for difference from placebo: <0.0001). Our results appeared statistically robust but provided some unexpected contrasts in effects of different treatments at different times. Naproxen may attenuate cognitive decline in slow decliners while accelerating decline in fast decliners. Celecoxib appeared to have similar effects at first but then attenuated change in fast decliners. Copyright © 2011 John Wiley & Sons, Ltd.

  12. On 2- and 3-person games on polyhedral sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belenky, A.S.

    1994-12-31

    Special classes of 3-person games are considered in which the sets of players' allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of sums of linear and bilinear functions of a certain kind. Necessary and sufficient conditions for a Nash point in these games, which are easy to verify, are established, and a finite method based on these conditions for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible to reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is also considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.

  13. Population age and initial density in a patchy environment affect the occurrence of abrupt transitions in a birth-and-death model of Taylor's law

    USGS Publications Warehouse

    Jiang, Jiang; DeAngelis, Donald L.; Zhang, B.; Cohen, J.E.

    2014-01-01

    Taylor's power law describes an empirical relationship between the mean and variance of population densities in field data, in which the variance varies as a power, b, of the mean. Most studies report values of b varying between 1 and 2. However, Cohen (2014a) showed recently that smooth changes in environmental conditions in a model can lead to an abrupt, infinite change in b. To understand what factors can influence the occurrence of an abrupt change in b, we used both mathematical analysis and Monte Carlo samples from a model in which populations of the same species settled on patches, and each population followed independently a stochastic linear birth-and-death process. We investigated how the power relationship responds to a smooth change of population growth rate, under different sampling strategies, initial population density, and population age. We showed analytically that, if the initial populations differ only in density, and samples are taken from all patches after the same time period following a major invasion event, Taylor's law holds with exponent b=1, regardless of the population growth rate. If samples are taken at different times from patches that have the same initial population densities, we calculate an abrupt shift of b, as predicted by Cohen (2014a). The loss of linearity between log variance and log mean is a leading indicator of the abrupt shift. If both initial population densities and population ages vary among patches, estimates of b lie between 1 and 2, as in most empirical studies. But the value of b declines to ~1 as the system approaches a critical point. Our results can inform empirical studies that might be designed to demonstrate an abrupt shift in Taylor's law.
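
    For illustration, the exponent b in Taylor's law can be estimated by regressing log variance on log mean across patch populations; the sketch below uses hypothetical patch-level means and variances rather than the paper's birth-and-death simulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    means = rng.uniform(5, 500, size=40)                             # hypothetical patch means
    variances = 2.0 * means**1.5 * rng.lognormal(0.0, 0.1, size=40)  # true b = 1.5 plus noise

    # Taylor's law: variance = a * mean^b, i.e. log(var) = log(a) + b * log(mean)
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    print(f"estimated b = {b:.2f}")  # slope of the log-log fit is the exponent b
    ```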

  14. [The highest proportion of tobacco materials in the blend analysis using PPF projection method for the near-infrared spectrum and Monte Carlo method].

    PubMed

    Mi, Jin-Rui; Ma, Xiang; Zhang, Ya-Juan; Wang, Yi; Wen, Ya-Dong; Zhao, Long-Lian; Li, Jun-Hui; Zhang, Lu-Da

    2011-04-01

    This paper builds a model, based on the Monte Carlo method, of the projection of a tobacco blend. The model has two parts: the projection points of the tobacco materials, whose coordinates are calculated with the PPF (projection based on principal component and Fisher criterion) projection method applied to the tobacco near-infrared spectra; and the point of the tobacco blend, which is produced as a linear additive combination of the projection coordinates of the materials. To analyze how far the projection points deviate from their initial levels, the Monte Carlo method is used to simulate the differences and changes in the raw-material projections. The results indicate two major factors affecting the relative deviation: the highest proportion of any single tobacco material in the blend (when too high, the deviation cannot be kept under control) and the number of materials (when too small, the deviation is likewise difficult to control). This conclusion agrees with the principle of practical formula design: the more materials in a blend, the lower the proportion of each. Finally, the paper derives theoretical upper limits on the highest proportion for different numbers of materials. The result also has reference value for blends of other agricultural products.

  15. Performance of linear and nonlinear texture measures in 2D and 3D for monitoring architectural changes in osteoporosis using computer-generated models of trabecular bone

    NASA Astrophysics Data System (ADS)

    Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.

    2005-04-01

    Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in-vivo) or correlated with biomechanical strength as derived from destructive testing (in-vitro). Linear structural measures in 2D, originally adapted from standard histo-morphometry, are fairly well established. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling index method.

  16. Riemannian multi-manifold modeling and clustering in brain networks

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.

    2017-08-01

    This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points on the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.

  17. Linear fully dry polymer actuators

    NASA Astrophysics Data System (ADS)

    De Rossi, Danilo; Mazzoldi, Alberto

    1999-05-01

    In recent years, interest in developing devices that emulate the properties of the biological actuator par excellence, the human muscle, has grown considerably. Recent advances in the field of conducting polymers open interesting new prospects in this direction: from this point of view, polyaniline (PANi), since it is easily produced in fiber form, represents an interesting material. Here we report the development of a linear actuator prototype that makes use of PANi fibers. All fabrication steps (fiber extrusion, solid polymer electrolyte preparation, compound realization) and the experimental set-up for electromechanical characterization are described. Quantitative measurements of isotonic length changes and isometric stress generation during electrochemical stimulation are reported. An overall assessment of the actuation properties of PANi fibers in wet and dry conditions is given and possible future developments are proposed. Finally, continuum and lumped-parameter models formulated to describe the passive and active contractile properties of conducting polymer actuators are briefly outlined.

  18. Job strain and changes in the body mass index among working women: A prospective study

    PubMed Central

    Fujishiro, Kaori; Lawson, Christina C.; Hibert, Eileen Lividoti; Chavarro, Jorge E.; Rich-Edwards, Janet W.

    2015-01-01

    Objectives: The relationship between job strain and weight gain has been unclear, especially for women. Using data from over 52 000 working women, we compare the association between change in job strain and change in BMI across different levels of baseline BMI. Subjects/Methods: We used data from participants in the Nurses’ Health Study II (n=52 656, mean age = 38.4), an ongoing prospective cohort study. Using linear regression, we modeled the change in BMI over 4 years as a function of the change in job strain, baseline BMI, and the interaction between the two. Change in job strain was characterized in four categories combining baseline and follow-up levels: consistently low strain [low at both points], decreased strain [high strain at baseline only], increased strain [high strain at follow-up only], and consistently high strain [high at both points]. Age, race/ethnicity, pregnancy history, job types, and health behaviors at baseline were controlled for in the model. Results: In adjusted models, women who reported high job strain at least once during the four-year period had a greater increase in BMI (ΔBMI=0.06–0.12, p<0.05) than those who never reported high job strain. The association between the change in job strain exposure and the change in BMI depended on the baseline BMI level (p=0.015 for the interaction): the greater the baseline BMI, the greater the BMI gain associated with consistently high job strain. The BMI gain associated with increased or decreased job strain was uniform across the range of baseline BMI. Conclusions: Women with higher BMI may be more vulnerable to BMI gain when exposed to constant work stress. Future research focusing on mediating mechanisms between job strain and BMI change should explore the possibility of differential responses to job strain by initial BMI. PMID:25986779
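
    A sketch (not the authors' code) of the kind of interaction model described above: 4-year BMI change regressed on a job-strain-change category, baseline BMI, and their interaction, here fitted to simulated data with hypothetical variable names.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 400
    strain = rng.choice(["low_low", "decreased", "increased", "high_high"], size=n)
    baseline = rng.normal(26, 4, size=n)
    # Simulate a larger BMI gain for consistently high strain, growing with baseline BMI
    gain = np.where(strain == "high_high", 0.12 + 0.02 * (baseline - 26), 0.0)
    bmi_change = 0.3 + gain + rng.normal(0, 0.5, size=n)

    df = pd.DataFrame({"bmi_change": bmi_change, "strain": strain, "baseline_bmi": baseline})
    fit = smf.ols("bmi_change ~ C(strain, Treatment('low_low')) * baseline_bmi", data=df).fit()
    print(fit.params.filter(like="baseline_bmi"))  # interaction terms of interest
    ```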

  19. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

    Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (of order n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of ‘intermediate’ limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
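
    As a toy illustration of the least-squares estimation discussed above, the sketch below fits a one-jump (piecewise-constant) model by scanning all candidate change points; it is purely illustrative and not the paper's simulation code.

    ```python
    import numpy as np

    def fit_change_point(x, y):
        """Return (d_hat, left_mean, right_mean) minimizing the residual sum of squares
        over all candidate split locations."""
        order = np.argsort(x)
        x, y = x[order], y[order]
        best_i, best_rss = None, np.inf
        for i in range(1, len(x)):                  # candidate split between i-1 and i
            rss = y[:i].var() * i + y[i:].var() * (len(y) - i)
            if rss < best_rss:
                best_i, best_rss = i, rss
        return x[best_i - 1], y[:best_i].mean(), y[best_i:].mean()

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 300)
    y = np.where(x < 0.4, 0.0, 1.0) + rng.normal(0, 0.3, 300)
    print(fit_change_point(x, y))                   # estimated change point near 0.4
    ```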

  20. Three-Dimensional Aerodynamic Instabilities In Multi-Stage Axial Compressors

    NASA Technical Reports Server (NTRS)

    Tan, Choon S.; Gong, Yifang; Suder, Kenneth L. (Technical Monitor)

    2001-01-01

    This thesis presents the conceptualization and development of a computational model for describing three-dimensional non-linear disturbances associated with instability and inlet distortion in multistage compressors. Specifically, the model is aimed at simulating the non-linear aspects of short wavelength stall inception, part span stall cells, and compressor response to three-dimensional inlet distortions. The computed results demonstrated the first-of-a-kind capability for simulating short wavelength stall inception in multistage compressors. The adequacy of the model is demonstrated by its application to reproduce the following phenomena: (1) response of a compressor to a square-wave total pressure inlet distortion; (2) behavior of long wavelength small amplitude disturbances in compressors; (3) short wavelength stall inception in a multistage compressor and the occurrence of rotating stall inception on the negatively sloped portion of the compressor characteristic; (4) progressive stalling behavior in the first stage in a mismatched multistage compressor; (5) change of stall inception type (from modal to spike and vice versa) due to IGV stagger angle variation, and "unique rotor tip incidence" at the points where the compressor stalls through short wavelength disturbances. The model has been applied to determine the parametric dependence of instability inception behavior in terms of amplitude and spatial distribution of initial disturbance, and inter-blade-row gaps. It is found that reducing the inter-blade-row gaps suppresses the growth of short wavelength disturbances. It is also concluded from these parametric investigations that each local component group (rotor and its two adjacent stators) has its own instability point (i.e. conditions at which disturbances are sustained) for short wavelength disturbances, with the instability point for the compressor set by the most unstable component group. For completeness, the methodology has been extended to describe finite amplitude disturbances in high-speed compressors. Results are presented for the response of a transonic compressor subjected to inlet distortions.

  1. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    NASA Astrophysics Data System (ADS)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  2. Protein linear indices of the 'macromolecular pseudograph alpha-carbon atom adjacency matrix' in bioinformatics. Part 1: prediction of protein stability effects of a complete set of alanine substitutions in Arc repressor.

    PubMed

    Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A

    2005-04-15

    A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid level biochemical descriptors are based on the calculation of linear maps on R^n [f_k(x_mi): R^n --> R^n] in the canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on R^n; that is, the kth total linear indices are linear maps from R^n to the scalar R [f_k(x_m): R^n --> R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test sets, respectively. It shows a high Matthews correlation coefficient (MCC=0.952) for the training set and an MCC=0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc=0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the linear piecewise regression model compared favorably with the linear regression model in predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R=0.90 and s=4.29), and the LOO press statistics evidenced its predictive ability (q2=0.72 and scv=4.79). Moreover, the TOMOCOMD-CAMPS method produced a linear piecewise regression (R=0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such a folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.

  3. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
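
    A small numerical illustration of major-axis (orthogonal) regression: the fitted line is the first principal axis of the centered data, which minimizes the sum of squared orthogonal distances to the line. The data below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(0, 2, 100)
    y = 1.5 * x + rng.normal(0, 0.5, 100)

    X = np.column_stack([x, y]) - [x.mean(), y.mean()]  # center the data
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    direction = eigvecs[:, -1]                          # eigenvector of the largest eigenvalue
    slope = direction[1] / direction[0]                 # major-axis slope
    intercept = y.mean() - slope * x.mean()
    print(f"major axis: y = {slope:.2f} x + {intercept:.2f}")
    ```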

  4. Modeling daily soil temperature over diverse climate conditions in Iran—a comparison of multiple linear regression and support vector regression techniques

    NASA Astrophysics Data System (ADS)

    Delbari, Masoomeh; Sharifazari, Salman; Mohammadi, Ehsan

    2018-02-01

    Knowledge of soil temperature at different depths is important for the agricultural industry and for understanding climate change. The aim of this study is to evaluate the performance of a support vector regression (SVR)-based model in estimating daily soil temperature at 10, 30 and 100 cm depth under different climate conditions over Iran. The results were compared to those obtained from a more classical multiple linear regression (MLR) model. The sensitivity to the input combinations and the effect of periodicity were also investigated. Climatic data used as inputs to the models were minimum and maximum air temperature, solar radiation, relative humidity, dew point, and atmospheric pressure (reduced to sea level), collected from five synoptic stations (Kerman, Ahvaz, Tabriz, Saghez, and Rasht) located in the hyper-arid, arid, semi-arid, Mediterranean, and hyper-humid climate conditions, respectively. According to the results, the performance of both the MLR and SVR models was quite good at the surface layer, i.e., 10-cm depth. However, SVR performed better than MLR in estimating soil temperature at deeper layers, especially 100 cm depth. Moreover, both models performed better in the humid climate condition than in arid and hyper-arid areas. Further, adding a periodicity component into the modeling process considerably improved the models' performance, especially in the case of SVR.
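
    A hedged sketch of the MLR-versus-SVR comparison, using synthetic stand-ins for the meteorological inputs; the study's actual data, preprocessing, and hyperparameter tuning are not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(5)
    n = 500
    X = rng.normal(size=(n, 5))  # stand-ins for tmin, tmax, radiation, humidity, pressure
    y = 2 * X[:, 0] + 1.5 * X[:, 1] + np.sin(X[:, 2]) + rng.normal(0, 0.3, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("MLR", LinearRegression()), ("SVR", SVR(kernel="rbf", C=10))]:
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(name, round(rmse, 3))  # SVR should capture the non-linear term better
    ```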

  5. Enumeration of Extended m-Regular Linear Stacks.

    PubMed

    Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2016-12-01

    The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended m-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closer to real protein contact maps. We show that the generating function of the extended m-regular linear stacks can be written as a rational function of the generating function of the m-regular linear stacks. For a given m, by eliminating the latter, we obtain an equation satisfied by the former and derive the asymptotic formula of the numbers of extended m-regular linear stacks of length n.

  6. Linear model describing three components of flow in karst aquifers using 18O data

    USGS Publications Warehouse

    Long, Andrew J.; Putnam, L.D.

    2004-01-01

    The stable isotope of oxygen, 18O, is used as a naturally occurring ground-water tracer. Time-series data for δ18O are analyzed to model the distinct responses and relative proportions of the conduit, intermediate, and diffuse flow components in karst aquifers. This analysis also describes mathematically the dynamics of the transient fluid interchange between conduits and diffusive networks. Conduit and intermediate flow are described by linear-systems methods, whereas diffuse flow is described by mass-balance methods. An automated optimization process estimates parameters of lognormal, Pearson type III, and gamma distributions, which are used as transfer functions in linear-systems analysis. Diffuse flow and mixing parameters also are estimated by these optimization methods. Results indicate the relative proximity of a well to a main conduit flowpath and can help to predict the movement and residence times of potential contaminants. The three-component linear model is applied to five wells, which respond to changes in the isotopic composition of point recharge water from a sinking stream in the Madison aquifer in the Black Hills of South Dakota. Flow velocities as much as 540 m/d and system memories of as much as 71 years are estimated by this method. Also, the mean, median, and standard deviation of traveltimes; time to peak response; and the relative fraction of flow for each of the three components are determined for these wells. This analysis infers that flow may branch apart and rejoin as a result of an anastomotic (or channeled) karst network.
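
    To illustrate the linear-systems component described above, the sketch below convolves a hypothetical δ18O input series with a gamma-distribution transfer function; the distribution parameters are assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.stats import gamma

    t = np.arange(365)                   # time base in days
    h = gamma.pdf(t, a=3.0, scale=20.0)  # assumed transfer function
    h /= h.sum()                         # normalize to unit area

    rng = np.random.default_rng(6)
    d18o_in = -14.0 + rng.normal(0, 0.5, t.size)  # input signal at the sinking stream
    d18o_well = np.convolve(d18o_in, h)[:t.size]  # response at the well (ignoring edge effects)
    mean_traveltime = (t * h).sum()               # mean traveltime implied by the transfer function
    print(f"mean traveltime ≈ {mean_traveltime:.0f} days")
    ```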

  7. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is then revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurements across tilt values show no significant correlation (p > 0.05). However, significant correlations appear between the real values and the DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.

  8. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
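
    A simplified sketch of the idea, assuming a single fixed knot and a random intercept per subject (statsmodels MixedLM rather than the SAS/S-plus implementations mentioned above): the truncated term t_plus lets the slope change at the knot.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n_subj, n_obs = 30, 8
    subj = np.repeat(np.arange(n_subj), n_obs)
    t = np.tile(np.linspace(0, 10, n_obs), n_subj)
    knot = 5.0
    # True model: slope 1.0 before the knot, slope -0.5 after, plus subject intercepts
    y = (1.0 * t - 1.5 * np.clip(t - knot, 0, None)
         + np.repeat(rng.normal(0, 1, n_subj), n_obs)
         + rng.normal(0, 0.5, t.size))

    df = pd.DataFrame({"y": y, "t": t, "t_plus": np.clip(t - knot, 0, None), "subj": subj})
    fit = smf.mixedlm("y ~ t + t_plus", df, groups=df["subj"]).fit()
    print(fit.params)  # the t_plus coefficient is the change in slope at the knot
    ```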

  9. Examining Differential Resilience Mechanisms by Comparing 'Tipping Points' of the Effects of Neighborhood Conditions on Anxiety by Race/Ethnicity.

    PubMed

    Coman, Emil Nicolae; Wu, Helen Zhao

    2018-02-20

    Exposure to adverse environmental and social conditions affects physical and mental health through complex mechanisms. Different racial/ethnic (R/E) groups may be more or less vulnerable to the same conditions, and the resilience mechanisms that can protect them likely operate differently in each population. We investigate how adverse neighborhood conditions (neighborhood disorder, NDis) differentially impact mental health (anxiety, Anx) in a sample of white and Black (African American) young women from Southeast Texas, USA. We illustrate a simple yet underutilized segmented regression model where linearity is relaxed to allow for a shift in the strength of the effect with the levels of the predictor. We compare how these effects change within R/E groups with the level of the predictor, but also how the "tipping points," where the effects change in strength, may differ by R/E. We find with classic linear regression that neighborhood disorder adversely affects Black women's anxiety, while in white women the effect seems negligible. Segmented regressions show that the Ndis → Anx effects in both groups of women appear to shift at similar levels, about one-fifth of a standard deviation below the mean of NDis, but the effect for Black women appears to start out as negative, then shifts in sign, i.e., to increase anxiety, while for white women, the opposite pattern emerges. Our findings can aid in devising better strategies for reducing health disparities that take into account different coping or resilience mechanisms operating differentially at distinct levels of adversity. We recommend that researchers investigate when adversity becomes exceedingly harmful and whether this happens differentially in distinct populations, so that intervention policies can be planned to reverse conditions that are more amenable to change, in effect pushing back the overall social risk factors below such tipping points.
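
    A minimal segmented-regression sketch in the spirit of the analysis above: scan candidate tipping points, fit a broken-line model at each, and keep the breakpoint with the smallest residual sum of squares. The data and search grid are illustrative assumptions.

    ```python
    import numpy as np

    def segmented_fit(x, y, candidates):
        """Broken-line least squares: return (breakpoint, rss, coefficients)."""
        best = (None, np.inf, None)
        for c in candidates:
            X = np.column_stack([np.ones_like(x), x, np.clip(x - c, 0, None)])
            beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = rss[0] if rss.size else np.sum((y - X @ beta) ** 2)
            if rss < best[1]:
                best = (c, rss, beta)
        return best

    rng = np.random.default_rng(8)
    x = rng.normal(0, 1, 400)
    y = -0.3 * x + 0.8 * np.clip(x + 0.2, 0, None) + rng.normal(0, 0.4, 400)
    c, rss, beta = segmented_fit(x, y, np.linspace(-1, 1, 41))
    print(f"tipping point ~ {c:.2f}; slopes {beta[1]:.2f} then {beta[1] + beta[2]:.2f}")
    ```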

  10. Criterion for estimation of stress-deformed state of SD-materials

    NASA Astrophysics Data System (ADS)

    Orekhov, Andrey V.

    2018-05-01

    A criterion is proposed that determines the moment at which the growth pattern of a monotonic numerical sequence changes from linear to parabolic. The criterion is based on comparing the squared errors of the linear and the incomplete quadratic approximations. The approximating functions are constructed locally, only at those points that are located near a possible change in the nature of the sequence's growth.
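
    One possible reading of the criterion, as a sketch: locally fit both a linear model and an incomplete quadratic (no linear term) to the last few points, and flag the change when the quadratic's squared error drops below the linear one's. The window length is an assumption.

    ```python
    import numpy as np

    def growth_is_parabolic(seq, window=5):
        """Compare squared errors of a linear fit (a + b*t) and an incomplete
        quadratic fit (a + c*t^2) over the most recent `window` points."""
        y = np.asarray(seq[-window:], dtype=float)
        t = np.arange(window, dtype=float)
        lin = np.column_stack([np.ones(window), t])      # linear approximation
        quad = np.column_stack([np.ones(window), t**2])  # incomplete quadratic
        rss = []
        for X in (lin, quad):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss.append(np.sum((y - X @ beta) ** 2))
        return rss[1] < rss[0]

    print(growth_is_parabolic([1, 2, 3, 4, 5]))    # linear growth -> False
    print(growth_is_parabolic([1, 4, 9, 16, 25]))  # parabolic growth -> True
    ```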

  11. Quadratic band touching points and flat bands in two-dimensional topological Floquet systems

    NASA Astrophysics Data System (ADS)

    Du, Liang; Zhou, Xiaoting; Fiete, Gregory; The Center for Complex Quantum Systems Team

    In this work we theoretically study, using Floquet-Bloch theory, the influence of circularly and linearly polarized light on two-dimensional band structures with Dirac and quadratic band touching points, and flat bands, taking the nearest-neighbor hopping model on the kagome lattice as an example. We find circularly polarized light can invert the ordering of this three-band model, while leaving the flat band dispersionless. We find a small gap is also opened at the quadratic band touching point by 2-photon and higher-order processes. By contrast, linearly polarized light splits the quadratic band touching point (into two Dirac points) by an amount that depends only on the amplitude and polarization direction of the light, independent of the frequency, and generally renders dispersion to the flat band. The splitting is perpendicular to the direction of the polarization of the light. We derive an effective low-energy theory that captures these key results. Finally, we compute the frequency dependence of the optical conductivity for this three-band model and analyze the various interband contributions of the Floquet modes. Our results suggest strategies for optically controlling band structure and interaction strength in real systems. We gratefully acknowledge funding from ARO Grant W911NF-14-1-0579 and NSF DMR-1507621.

  12. 3D change detection in staggered voxels model for robotic sensing and navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.

    2016-05-01

    3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects. A change detection method that can support various applications under varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information, and change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preceding stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model that labels each voxel as surface or free space, and a color model that assigns each voxel the mean color of all points it contains. A preliminary change map is obtained by an XOR subtraction on the point voxel model. Next, the eight neighbors of each center voxel are examined; if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results, indicating all changing objects with a very low false-alarm rate.

  13. Projected land photosynthesis constrained by changes in the seasonal cycle of atmospheric CO2.

    PubMed

    Wenzel, Sabrina; Cox, Peter M; Eyring, Veronika; Friedlingstein, Pierre

    2016-10-27

    Uncertainties in the response of vegetation to rising atmospheric CO2 concentrations contribute to the large spread in projections of future climate change. Climate-carbon cycle models generally agree that elevated atmospheric CO2 concentrations will enhance terrestrial gross primary productivity (GPP). However, the magnitude of this CO2 fertilization effect varies from a 20 per cent to a 60 per cent increase in GPP for a doubling of atmospheric CO2 concentrations in model studies. Here we demonstrate emergent constraints on large-scale CO2 fertilization using observed changes in the amplitude of the atmospheric CO2 seasonal cycle that are thought to be the result of increasing terrestrial GPP. Our comparison of atmospheric CO2 measurements from Point Barrow in Alaska and Cape Kumukahi in Hawaii with historical simulations of the latest climate-carbon cycle models demonstrates that the increase in the amplitude of the CO2 seasonal cycle at both measurement sites is consistent with increasing annual mean GPP, driven in part by climate warming, but with differences in CO2 fertilization controlling the spread among the model trends. As a result, the relationship between the amplitude of the CO2 seasonal cycle and the magnitude of CO2 fertilization of GPP is almost linear across the entire ensemble of models. When combined with the observed trends in the seasonal CO2 amplitude, these relationships lead to consistent emergent constraints on the CO2 fertilization of GPP. Overall, we estimate a GPP increase of 37 ± 9 per cent for high-latitude ecosystems and 32 ± 9 per cent for extratropical ecosystems under a doubling of atmospheric CO2 concentrations on the basis of the Point Barrow and Cape Kumukahi records, respectively.

  14. Automatic control of cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Balakrishna, S.

    1989-01-01

    Inadequate Reynolds number similarity in testing of scaled models affects the quality of aerodynamic data from wind tunnels. This is due to scale effects of boundary-layer shock-wave interaction, which are likely to be severe at transonic speeds. The idea of operating wind tunnels with test gas cooled to cryogenic temperatures has yielded a quantum jump in the ability to realize full scale Reynolds number flow similarity in small transonic tunnels. In such tunnels, the basic flow control problem consists of obtaining and maintaining the desired test section flow parameters. Mach number, Reynolds number, and dynamic pressure are the three flow parameters that are usually required to be kept constant during the period of model aerodynamic data acquisition. The activities involved in modeling, control law development, mechanization of the control laws on a microcomputer, and the performance of a globally stable automatic control system for the 0.3-m Transonic Cryogenic Tunnel (TCT) are discussed. A lumped multi-variable nonlinear dynamic model of the cryogenic tunnel, generation of a set of linear control laws for small perturbations, and a nonlinear control strategy for large set point changes including tunnel trajectory control are described. The details of mechanization of the control laws on a 16 bit microcomputer system, the software features, operator interface, the display and safety are discussed. The controller is shown to provide globally stable and reliable temperature control to within ±0.2 K, pressure to within ±0.07 psi, and Mach number to within ±0.002 of the set point value. This performance is obtained both during large set point commands as for a tunnel cooldown, and during aerodynamic data acquisition with intrusive activity like geometrical changes in the test section such as angle of attack changes, drag rake movements, wall adaptation and sidewall boundary-layer removal. Feasibility of the use of an automatic Reynolds number control mode with fixed Mach number control is demonstrated.

  15. Linking point scale process non-linearity, catchment organization and linear system dynamics in a thermodynamic state space

    NASA Astrophysics Data System (ADS)

    Zehe, Erwin; Loritz, Ralf; Ehret, Uwe; Westhoff, Martijn; Kleidon, Axel; Savenije, Hubert

    2017-04-01

    It is striking that catchment systems often behave almost linearly, despite the strong non-linearity of point-scale soil water characteristics. In the present study we provide evidence that a thermodynamic treatment of environmental system dynamics is the key to understanding how, in particular, a stronger spatial organization of catchments leads to more linear rainfall-runoff behavior. Our starting point is that water fluxes in a catchment are associated with fluxes of kinetic and potential energy, while changes in subsurface water stocks go along with changes in potential energy and chemical energy of subsurface water. Steady state/local equilibrium of the entire system can be defined as a state of minimum free energy, reflecting an equilibrium subsurface water storage, which is determined by catchment topography, soil water characteristics, and water levels in the stream. Dynamics of the entire system, i.e. deviations from equilibrium storage, are 'pseudo' oscillations in a thermodynamic state space: either toward an excess of potential energy in the case of wetting, where relaxation back to equilibrium requires drainage/water export, or toward an excess of capillary binding energy in the case of drying, where relaxation back to equilibrium requires recharge of the subsurface water stock. While system dynamics is highly non-linear on the 'too dry' branch, it is essentially linear on the 'too wet' branch, in the case of a potential energy excess. A steeper topography, which reflects a stronger spatial organization, reduces the equilibrium storage of the catchment system to smaller values, thereby increasing the range of states where the system behaves linearly due to an excess in potential energy. By contrast, a shift to finer-textured soils increases the equilibrium storage, which implies that the range of states where the system behaves linearly is reduced. In this context it is important to note that an increased internal organization of the system, due to an elevated density of preferential flow paths, implies less non-linear system behavior, because the flow paths prevent very dry system states from persisting by facilitating recharge of the soil moisture stock. Based on the proposed approach we compare the dynamics of four distinctly different catchments in their respective state spaces and demonstrate the feasibility of the approach for explaining differences and similarities in their rainfall-runoff regimes.

  16. Role of Sink Density in Nonequilibrium Chemical Redistribution in Alloys

    DOE PAGES

    Martinez, Enrique Saez; Senninger, Oriane; Caro, Alfredo; ...

    2018-03-08

    Nonequilibrium chemical redistribution in open systems submitted to external forces, such as particle irradiation, leads to changes in the structural properties of the material, potentially driving the system to failure. Such redistribution is controlled by the complex interplay between the production of point defects, atomic transport rates, and the sink character of the microstructure. In this work, we analyze this interplay by means of a kinetic Monte Carlo (KMC) framework with an underlying atomistic model for the Fe-Cr model alloy to study the effect of ideal defect sinks on Cr concentration profiles, with a particular focus on the role of interface density. We observe that the amount of segregation decreases linearly with decreasing interface spacing. Within the framework of the thermodynamics of irreversible processes, a general analytical model is derived and assessed against the KMC simulations to elucidate the structure-property relationship of this system. Interestingly, in the kinetic regime where elimination of point defects at sinks is dominant over bulk recombination, the solute segregation does not directly depend on the dose rate but only on the density of sinks. Furthermore, this model provides new insight into the design of microstructures that mitigate chemical redistribution and improve radiation tolerance.

  17. Role of Sink Density in Nonequilibrium Chemical Redistribution in Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez, Enrique Saez; Senninger, Oriane; Caro, Alfredo

    Nonequilibrium chemical redistribution in open systems submitted to external forces, such as particle irradiation, leads to changes in the structural properties of the material, potentially driving the system to failure. Such redistribution is controlled by the complex interplay between the production of point defects, atomic transport rates, and the sink character of the microstructure. In this work, we analyze this interplay by means of a kinetic Monte Carlo (KMC) framework with an underlying atomistic model for the Fe-Cr model alloy to study the effect of ideal defect sinks on Cr concentration profiles, with a particular focus on the role of interface density. We observe that the amount of segregation decreases linearly with decreasing interface spacing. Within the framework of the thermodynamics of irreversible processes, a general analytical model is derived and assessed against the KMC simulations to elucidate the structure-property relationship of this system. Interestingly, in the kinetic regime where elimination of point defects at sinks is dominant over bulk recombination, the solute segregation does not directly depend on the dose rate but only on the density of sinks. Furthermore, this model provides new insight into the design of microstructures that mitigate chemical redistribution and improve radiation tolerance.

  18. Role of Sink Density in Nonequilibrium Chemical Redistribution in Alloys

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Senninger, Oriane; Caro, Alfredo; Soisson, Frédéric; Nastar, Maylise; Uberuaga, Blas P.

    2018-03-01

    Nonequilibrium chemical redistribution in open systems submitted to external forces, such as particle irradiation, leads to changes in the structural properties of the material, potentially driving the system to failure. Such redistribution is controlled by the complex interplay between the production of point defects, atomic transport rates, and the sink character of the microstructure. In this work, we analyze this interplay by means of a kinetic Monte Carlo (KMC) framework with an underlying atomistic model for the Fe-Cr model alloy to study the effect of ideal defect sinks on Cr concentration profiles, with a particular focus on the role of interface density. We observe that the amount of segregation decreases linearly with decreasing interface spacing. Within the framework of the thermodynamics of irreversible processes, a general analytical model is derived and assessed against the KMC simulations to elucidate the structure-property relationship of this system. Interestingly, in the kinetic regime where elimination of point defects at sinks is dominant over bulk recombination, the solute segregation does not directly depend on the dose rate but only on the density of sinks. This model provides new insight into the design of microstructures that mitigate chemical redistribution and improve radiation tolerance.

  19. Non-linearities in Theory-of-Mind Development.

    PubMed

    Blijd-Hoogewys, Els M A; van Geert, Paul L C

    2016-01-01

    Research on Theory-of-Mind (ToM) has mainly focused on ages of core ToM development. This article follows a quantitative approach focusing on the level of ToM understanding on a measurement scale, the ToM Storybooks, in 324 typically developing children between 3 and 11 years of age. It deals with the eventual occurrence of developmental non-linearities in ToM functioning, using smoothing techniques, dynamic growth model building and additional indicators, namely moving skewness, moving growth rate changes and moving variability. The ToM sum-scores showed an overall developmental trend that leveled off toward the age of 10 years. Within this overall trend two non-linearities in the group-based change pattern were found: a plateau at the age of around 56 months and a dip at the age of 72-78 months. These temporary regressions in ToM sum-score were accompanied by a decrease in growth rate and variability, and a change in skewness of the ToM data, all suggesting a developmental shift in ToM understanding. The temporary decreases also occurred in the different ToM sub-scores and most clearly so in the core ToM component of beliefs. It was also found that girls had an earlier growth spurt than boys and that the underlying developmental path was more salient in girls than in boys. The consequences of these findings are discussed from various theoretical points of view, with an emphasis on a dynamic systems interpretation of the underlying developmental paths.

  20. Non-linearities in Theory-of-Mind Development

    PubMed Central

    Blijd-Hoogewys, Els M. A.; van Geert, Paul L. C.

    2017-01-01

    Research on Theory-of-Mind (ToM) has mainly focused on ages of core ToM development. This article follows a quantitative approach focusing on the level of ToM understanding on a measurement scale, the ToM Storybooks, in 324 typically developing children between 3 and 11 years of age. It deals with the eventual occurrence of developmental non-linearities in ToM functioning, using smoothing techniques, dynamic growth model building and additional indicators, namely moving skewness, moving growth rate changes and moving variability. The ToM sum-scores showed an overall developmental trend that leveled off toward the age of 10 years. Within this overall trend two non-linearities in the group-based change pattern were found: a plateau at the age of around 56 months and a dip at the age of 72–78 months. These temporary regressions in ToM sum-score were accompanied by a decrease in growth rate and variability, and a change in skewness of the ToM data, all suggesting a developmental shift in ToM understanding. The temporary decreases also occurred in the different ToM sub-scores and most clearly so in the core ToM component of beliefs. It was also found that girls had an earlier growth spurt than boys and that the underlying developmental path was more salient in girls than in boys. The consequences of these findings are discussed from various theoretical points of view, with an emphasis on a dynamic systems interpretation of the underlying developmental paths. PMID:28101065

  1. Physical lumping methods for developing linear reduced models for high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Immel, S. M.; Hartley, Tom T.; Deabreu-Garcia, J. Alex

    1991-01-01

    In gasdynamic systems, information travels in one direction for supersonic flow and in both directions for subsonic flow. A shock occurs at the transition from supersonic to subsonic flow. Thus, to simulate these systems, any simulation method implemented for the quasi-one-dimensional Euler equations must have the ability to capture the shock. In this paper, a technique combining both backward and central differencing is presented. The equations are subsequently linearized about an operating point and formulated into a linear state space model. After proper implementation of the boundary conditions, the model order is reduced from 123 to less than 10 using the Schur method of balancing. Simulations comparing frequency and step response of the reduced order model and the original system models are presented.
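
    The paper's reduction uses the Schur method of balancing; as a generic stand-in, the sketch below applies balanced truncation from the python-control package to a random stable model, by analogy with reducing the 123-state model to fewer than 10 states. The model size and target order here are arbitrary placeholders.

    ```python
    import control

    full = control.rss(20, outputs=1, inputs=1)  # hypothetical 20-state stable model
    reduced = control.balred(full, 8)            # balanced truncation to 8 states
    print(full.nstates, "->", reduced.nstates)   # compare model orders
    ```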

  2. An asymptotic Reissner-Mindlin plate model

    NASA Astrophysics Data System (ADS)

    Licht, Christian; Weller, Thibaut

    2018-06-01

    A mathematical study via variational convergence of a periodic distribution of classical linearly elastic thin plates softly abutted together shows that it is not necessary to use a different continuum model, nor to make a constitutive symmetry hypothesis, as a starting point to deduce the Reissner-Mindlin plate model.

  3. Lifespan development of pro- and anti-saccades: multiple regression models for point estimates.

    PubMed

    Klein, Christoph; Foerster, Friedrich; Hartnegg, Klaus; Fischer, Burkhart

    2005-12-07

    The comparative study of anti- and pro-saccade task performance contributes to our functional understanding of the frontal lobes, their alterations in psychiatric or neurological populations, and their changes during the life span. In the present study, we apply regression analysis to model life span developmental effects on various pro- and anti-saccade task parameters, using data from a non-representative sample of 327 participants aged 9 to 88 years. Development up to the age of about 27 years was dominated by curvilinear rather than linear effects of age. Furthermore, the largest developmental differences were found for intra-subject variability measures and the anti-saccade task parameters. Ageing, by contrast, took the shape of a global linear decline of the investigated saccade functions, lacking the differential effects of age observed during development. While these results do support the assumption that frontal lobe functions can be distinguished from other functions by their strong and protracted development, they do not confirm the assumption of disproportionate deterioration of frontal lobe functions with ageing. We finally show that the regression models applied here to quantify life span developmental effects can also be used for individual predictions in applied research contexts or clinical practice.

  4. Geometry of the scalar sector

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2016-08-17

    The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question, since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has a SU(2)_L x U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ^2, where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.

  5. The Prediction of Scattered Broadband Shock-Associated Noise

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2015-01-01

    A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. The presented predictions demonstrate relatively good agreement with a wide variety of measurements.

  6. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
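
    As an illustration of the advocated approach, a minimal Python sketch of a mixed-effects regression with random intercepts and slopes on synthetic longitudinal data; statsmodels is assumed available, and all numbers are hypothetical:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_subj, n_visits = 50, 4
        subj = np.repeat(np.arange(n_subj), n_visits)
        time = np.tile(np.arange(n_visits), n_subj)
        u0 = rng.normal(0, 1.0, n_subj)   # random intercept per subject
        u1 = rng.normal(0, 0.3, n_subj)   # random slope per subject
        score = (10 + u0[subj] + (-0.5 + u1[subj]) * time
                 + rng.normal(0, 0.5, subj.size))
        df = pd.DataFrame({"subject": subj, "time": time, "score": score})

        # Fixed effect of time; random intercept and slope per subject
        fit = smf.mixedlm("score ~ time", df, groups=df["subject"],
                          re_formula="~time").fit()
        print(fit.summary())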

  7. Impact of comorbidities on stroke rehabilitation outcomes: does the method matter?

    PubMed

    Berlowitz, Dan R; Hoenig, Helen; Cowper, Diane C; Duncan, Pamela W; Vogel, W Bruce

    2008-10-01

    To examine the impact of comorbidities in predicting stroke rehabilitation outcomes and to examine differences among 3 commonly used comorbidity measures--the Charlson Index, adjusted clinical groups (ACGs), and diagnosis cost groups (DCGs)--in how well they predict these outcomes. Inception cohort of patients followed for 6 months. Department of Veterans Affairs (VA) hospitals. A total of 2402 patients beginning stroke rehabilitation at a VA facility in 2001 and included in the Integrated Stroke Outcomes Database. Not applicable. Three outcomes were evaluated: 6-month mortality, 6-month rehospitalization, and change in FIM score. During 6 months of follow-up, 27.6% of patients were rehospitalized and 8.6% died. The mean FIM score increased an average of 20 points during rehabilitation. Addition of comorbidities to the age and sex models improved their performance in predicting these outcomes based on changes in c statistics for logistic and R(2) values for linear regression models. While ACG and DCG models performed similarly, the best models, based on DCGs, had a c statistic of .74 for 6-month mortality and .63 for 6-month rehospitalization, and an R(2) of .111 for change in FIM score. Comorbidities are important predictors of stroke rehabilitation outcomes. How they are classified has important implications for models that may be used in assessing quality of care.
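
    A hedged Python sketch of the kind of model comparison described, contrasting the c statistic (ROC AUC) of an age-and-sex logistic model with one that adds a comorbidity score; the outcome, effect sizes and score are synthetic and hypothetical, and scikit-learn is assumed available:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 2000
        age = rng.normal(70, 8, n)
        sex = rng.integers(0, 2, n)
        comorb = rng.gamma(2.0, 1.0, n)   # hypothetical comorbidity score
        logit = -6 + 0.05 * age + 0.2 * sex + 0.5 * comorb
        died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        base = LogisticRegression(max_iter=1000).fit(np.c_[age, sex], died)
        full = LogisticRegression(max_iter=1000).fit(np.c_[age, sex, comorb], died)
        # The c-statistic gain measures the added predictive value of comorbidity
        print(roc_auc_score(died, base.predict_proba(np.c_[age, sex])[:, 1]),
              roc_auc_score(died, full.predict_proba(np.c_[age, sex, comorb])[:, 1]))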

  8. Probabilistic structural analysis by extremum methods

    NASA Technical Reports Server (NTRS)

    Nafday, Avinash M.

    1990-01-01

    The objective is to demonstrate discrete extremum methods of structural analysis as a tool for structural system reliability evaluation. Specifically, linear and multiobjective linear programming models for analysis of rigid plastic frames under proportional and multiparametric loadings, respectively, are considered. Kinematic and static approaches for analysis form a primal-dual pair in each of these models and have a polyhedral format. Duality relations link extreme points and hyperplanes of these polyhedra and lead naturally to dual methods for system reliability evaluation.
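
    A toy Python sketch of the static (lower-bound) formulation as a linear program, using SciPy's linprog; the frame numbers are hypothetical, and the dual multiplier of the equilibrium constraint hints at the kinematic side of the primal-dual pair:

        from scipy.optimize import linprog

        # Variables x = [lam, m1, m2]: load factor and two section moments.
        # Maximize lam subject to equilibrium and yield limits |m_i| <= m_p.
        c = [-1.0, 0.0, 0.0]                 # linprog minimizes, so negate
        A_eq = [[4.0, -1.0, -1.0]]           # equilibrium: 4*lam = m1 + m2
        b_eq = [0.0]
        bounds = [(0, None), (-1.5, 1.5), (-1.5, 1.5)]   # m_p = 1.5
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        print("collapse load factor:", res.x[0])     # 0.75 for these numbers
        # Dual of the equilibrium constraint (kinematic-side quantity)
        print("equilibrium dual:", res.eqlin.marginals)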

  9. ACTOMP - AUTOCAD TO MASS PROPERTIES

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1994-01-01

    AutoCAD to Mass Properties was developed to facilitate quick mass properties calculations of structures having many simple elements in a complex configuration such as trusses or metal sheet containers. Calculating the mass properties of structures of this type can be a tedious and repetitive process, but ACTOMP helps automate the calculations. The structure can be modelled in AutoCAD or a compatible CAD system in a matter of minutes using the 3-Dimensional elements. This model provides all the geometric data necessary to make a mass properties calculation of the structure. ACTOMP reads the geometric data of a drawing from the Drawing Interchange File (DXF) used in AutoCAD. The geometric entities recognized by ACTOMP include POINTs, 3DLINEs, and 3DFACEs. ACTOMP requests mass, linear density, or area density of the elements for each layer, sums all the elements and calculates the total mass, center of mass (CM) and the mass moments of inertia (MOI). AutoCAD utilizes layers to define separate drawing planes. ACTOMP uses layers to differentiate between multiple types of similar elements. For example if a structure is made of various types of beams, modeled as 3DLINEs, each with a different linear density, the beams can be grouped by linear density and each group placed on a separate layer. The program will request the linear density of 3DLINEs for each new layer it finds as it processes the drawing information. The same is true with POINTs and 3DFACEs. By using layers this way a very complex model can be created. POINTs are used for point masses such as bolts, small machine parts, or small electronic boxes. 3DLINEs are used for beams, bars, rods, cables, and other similarly slender elements. 3DFACEs are used for planar elements. 3DFACEs may be created as 3 or 4 Point faces. Some examples of elements that might be modelled using 3DFACEs are plates, sheet metal, fabric, boxes, large diameter hollow cylinders and evenly distributed masses. ACTOMP was written in Microsoft QuickBasic (Version 2.0). It was developed for the IBM PC microcomputer and has been implemented on an IBM PC compatible under DOS 3.21. ACTOMP was developed in 1988 and requires approximately 5K bytes to operate.

  10. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  11. Testing the consistency of three-point halo clustering in Fourier and configuration space

    NASA Astrophysics Data System (ADS)

    Hoffmann, K.; Gaztañaga, E.; Scoccimarro, R.; Crocce, M.

    2018-05-01

    We compare reduced three-point correlations Q of matter, haloes (as proxies for galaxies) and their cross-correlations, measured in a total simulated volume of ~100 (h^-1 Gpc)^3, to predictions from leading-order perturbation theory on a large range of scales in configuration space. Predictions for haloes are based on the non-local bias model, employing linear (b1) and non-linear (c2, g2) bias parameters, which have been constrained previously from the bispectrum in Fourier space. We also study predictions from two other bias models, one local (g2 = 0) and one in which c2 and g2 are determined by b1 via approximately universal relations. Overall, measurements and predictions agree when Q is derived for triangles with (r1 r2 r3)^(1/3) ≳ 60 h^-1 Mpc, where r1, r2, r3 are the sizes of the triangle legs. Predictions for Q_matter, based on the linear power spectrum, show significant deviations from the measurements at the BAO scale (given our small measurement errors), which strongly decrease when adding a damping term or using the non-linear power spectrum, as expected. Predictions for Q_halo agree best with measurements at large scales when considering non-local contributions. The universal bias model works well for haloes and might therefore also be useful for tightening constraints on b1 from Q in galaxy surveys. Such constraints are independent of the amplitude of matter density fluctuations (σ8) and hence break the degeneracy between b1 and σ8 present in galaxy two-point correlations.
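
    For orientation, the reduced three-point correlation Q used above is conventionally defined as (a standard definition, not quoted from this record):

        Q(r_1, r_2, r_3) = \frac{\zeta(r_1, r_2, r_3)}{\xi(r_1)\,\xi(r_2) + \xi(r_2)\,\xi(r_3) + \xi(r_3)\,\xi(r_1)},

    where ζ is the three-point and ξ the two-point correlation function; at leading order Q is insensitive to σ8, which is why it can break the b1-σ8 degeneracy mentioned above.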

  12. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.
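
    A schematic Python sketch of a fixed-point iteration of this general type (not the estimator derived in the paper): it alternates a MAP signal estimate for the current covariances with inverse-Wishart-style mode updates for the covariances; all priors, dimensions and update forms here are hypothetical simplifications:

        import numpy as np

        def map_fixed_point(y, H, Psi_x, nu_x, Psi_n, nu_n, iters=100):
            """Alternate a MAP/Wiener estimate of x (zero-mean Gaussian prior)
            with inverse-Wishart mode updates for the two covariances."""
            m, p = H.shape
            Q = Psi_x / (nu_x + p + 1)       # prior modes as initial values
            R = Psi_n / (nu_n + m + 1)
            x = np.zeros(p)
            for _ in range(iters):
                Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
                x = np.linalg.solve(H.T @ Ri @ H + Qi, H.T @ Ri @ y)
                r = y - H @ x                # residual under current estimate
                R = (Psi_n + np.outer(r, r)) / (nu_n + m + 2)
                Q = (Psi_x + np.outer(x, x)) / (nu_x + p + 2)
            return x, Q, R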

  13. Magnetic ordering induced giant optical property change in tetragonal BiFeO3

    NASA Astrophysics Data System (ADS)

    Tong, Wen-Yi; Ding, Hang-Chen; Gong, Shi Jing; Wan, Xiangang; Duan, Chun-Gang

    2015-12-01

    Magnetic ordering could have significant influence on band structures, spin-dependent transport, and other important properties of materials. Its measurement, especially in the case of antiferromagnetic (AFM) ordering, however, is generally difficult to achieve. Here we demonstrate the feasibility of magnetic ordering detection using a noncontact and nondestructive optical method. Taking tetragonal BiFeO3 (BFO) as an example and combining density functional theory calculations with tight-binding models, we find that when BFO changes from the C1-type to the G-type AFM phase, the top of the valence band shifts from the Z point to the Γ point, which makes the original direct band gap become indirect. This can be explained by Slater-Koster parameters using the Harrison approach. The impact of magnetic ordering on band dispersion dramatically changes the optical properties. For the linear ones, the energy shift of the optical band gap could be as large as 0.4 eV. As for the nonlinear ones, the change is even larger. The second-harmonic generation coefficient d33 of the G-AFM phase becomes more than 13 times smaller than that of the C1-AFM case. Finally, we propose a practical way to distinguish the two AFM phases of BFO using the optical method, which is of great importance in next-generation information storage technologies.

  14. Cardiac troponin I for the prediction of functional recovery and left ventricular remodelling following primary percutaneous coronary intervention for ST-elevation myocardial infarction.

    PubMed

    Hallén, Jonas; Jensen, Jesper K; Fagerland, Morten W; Jaffe, Allan S; Atar, Dan

    2010-12-01

    To investigate the ability of cardiac troponin I (cTnI) to predict functional recovery and left ventricular remodelling following primary percutaneous coronary intervention (pPCI) in ST-elevation myocardial infarction (STEMI). Post hoc study extending from a randomised controlled trial. 132 patients with STEMI receiving pPCI. Left ventricular ejection fraction (LVEF), end-diastolic and end-systolic volume index (EDVI and ESVI) and changes in these parameters from day 5 to 4 months after the index event. Cardiac magnetic resonance examination performed at 5 days and 4 months for evaluation of LVEF, EDVI and ESVI. cTnI was sampled at 24 and 48 h. In linear regression models adjusted for early (5 days) assessment of LVEF, ESVI and EDVI, single-point cTnI at either 24 or 48 h was an independent and strong predictor of changes in LVEF (p<0.01), EDVI (p<0.01) and ESVI (p<0.01) during the follow-up period. In a logistic regression analysis for prediction of an LVEF below 40% at 4 months, single-point cTnI significantly improved the prognostic strength of the model (area under the curve = 0.94, p<0.01) in comparison with the combination of clinical variables and LVEF at 5 days. Single-point sampling of cTnI after pPCI for STEMI provides important prognostic information on the time-dependent evolution of left ventricular function and volumes.

  15. Influence of multiple scattering and absorption on the full scattering profile and the isobaric point in tissue

    NASA Astrophysics Data System (ADS)

    Duadi, Hamootal; Fixler, Dror

    2015-05-01

    Light reflectance and transmission from soft tissue has been utilized in noninvasive clinical measurement devices such as the photoplethysmograph (PPG) and reflectance pulse oximeter. Incident light on the skin travels into the underlying layers and is in part reflected back to the surface, in part transferred and in part absorbed. Most methods of near infrared (NIR) spectroscopy focus on the volume reflectance from a semi-infinite sample, while very few measure transmission. We have previously shown that examining the full scattering profile (angular distribution of exiting photons) provides more comprehensive information when measuring from a cylindrical tissue. Furthermore, an isobaric point was found which is not dependent on changes in the reduced scattering coefficient. The angle corresponding to this isobaric point depends on the tissue diameter. We investigated the role of multiple scattering and absorption on the full scattering profile of a cylindrical tissue. First, we define the range in which multiple scattering occurs for different tissue diameters. Next, we examine the role of the absorption coefficient in the attenuation of the full scattering profile. We demonstrate that the absorption linearly influences the intensity at each angle of the full scattering profile and, more importantly, the absorption does not change the position of the isobaric point. The findings of this work demonstrate a realistic model for optical tissue measurements such as NIR spectroscopy, PPG, and pulse oximetry.

  16. Data-Derived Modeling Characterizes Plasticity of MAPK Signaling in Melanoma

    PubMed Central

    Bernardo-Faura, Marti; Massen, Stefan; Falk, Christine S.; Brady, Nathan R.; Eils, Roland

    2014-01-01

    The majority of melanomas have been shown to harbor somatic mutations in the RAS-RAF-MEK-MAPK and PI3K-AKT pathways, which play a major role in regulation of proliferation and survival. The prevalence of these mutations makes these kinase signal transduction pathways an attractive target for cancer therapy. However, tumors have generally shown adaptive resistance to treatment. This adaptation is achieved in melanoma through its ability to undergo neovascularization, migration and rearrangement of signaling pathways. To understand the dynamic, nonlinear behavior of signaling pathways in cancer, several computational modeling approaches have been suggested. Most of those models require that the pathway topology remains constant over the entire observation period. However, changes in topology might underlie adaptive behavior to drug treatment. To study signaling rearrangements, here we present a new approach based on Fuzzy Logic (FL) that predicts changes in network architecture over time. This adaptive modeling approach was used to investigate pathway dynamics in a newly acquired experimental dataset describing total and phosphorylated protein signaling over four days in A375 melanoma cell line exposed to different kinase inhibitors. First, a generalized strategy was established to implement a parameter-reduced FL model encoding non-linear activity of a signaling network in response to perturbation. Next, a literature-based topology was generated and parameters of the FL model were derived from the full experimental dataset. Subsequently, the temporal evolution of model performance was evaluated by leaving time-defined data points out of training. Emerging discrepancies between model predictions and experimental data at specific time points allowed the characterization of potential network rearrangement. We demonstrate that this adaptive FL modeling approach helps to enhance our mechanistic understanding of the molecular plasticity of melanoma. PMID:25188314

  17. Modeling, Control and Simulation of Three-Dimensional Robotic Systems with Applications to Biped Locomotion.

    NASA Astrophysics Data System (ADS)

    Zheng, Yuan-Fang

    A three-dimensional, five link biped system is established. Newton-Euler state space formulation is employed to derive the equations of the system. The constraint forces involved in the equations can be eliminated by projection onto a smaller state space system for deriving advanced control laws. A model-referenced adaptive control scheme is developed to control the system. Digital computer simulations of point to point movement are carried out to show that the model-referenced adaptive control increases the dynamic range and speeds up the response of the system in comparison with linear and nonlinear feedback control. Further, the implementation of the controller is simpler. Impact effects of biped contact with the environment are modeled and studied. The instant velocity change at the moment of impact is derived as a function of the biped state and contact speed. The effects of impact on the state, as well as constraints are studied in biped landing on heels and toes simultaneously or on toes first. Rate and nonlinear position feedback are employed for stability of the biped after the impact. The complex structure of the foot is properly modeled. A spring and dashpot pair is suggested to represent the action of plantar fascia during the impact. This action prevents the arch of the foot from collapsing. A mathematical model of the skeletal muscle is discussed. A direct relationship between the stimulus rate and the active state is established. A piecewise linear relation between the length of the contractile element and the isometric force is considered. Hill's characteristic equation is maintained for determining the actual output force during different shortening velocities. A physical threshold model is proposed for recruitment which encompasses the size principle, its manifestations and exceptions to the size principle. Finally the role of spindle feedback in stability of the model is demonstrated by study of a pair of muscles.

  18. Thickness noise of a propeller and its relation to blade sweep

    NASA Astrophysics Data System (ADS)

    Amiet, R. K.

    1988-07-01

    Linear acoustic theory is used to determine the thickness noise produced by a supersonic propeller with sharp leading and trailing edges. The method reveals details of the calculated waveform. Abrupt changes of slope in the pressure-time waveform, produced by singular points entering or leaving the blade tip, are pointed out. It is found that the behavior of the pressure-time waveform is closely related to changes in the retarded rotor shape. The results indicate that logarithmic singularities in the waveform are produced by regions on the blade edges that move towards the observer at sonic speed, with the edge normal to the line joining the source point and the observer.

  19. Descriptive Linear modeling of steady-state visual evoked response

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.; Kenner, K.

    1986-01-01

    A study is being conducted to explore use of the steady-state visual-evoked electrocortical response as an indicator of cognitive task loading. Application of linear descriptive modeling to steady-state Visual Evoked Response (VER) data is summarized. Two aspects of linear modeling are reviewed: (1) unwrapping the phase-shift portion of the frequency response, and (2) parsimonious characterization of task-loading effects in terms of changes in model parameters. Model-based phase unwrapping appears to be most reliable in applications, such as manual control, where theoretical models are available. Linear descriptive modeling of the VER has not yet been shown to provide consistent and readily interpretable results.
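
    A small Python sketch of phase unwrapping on a synthetic frequency response; this uses NumPy's generic np.unwrap rather than the model-based unwrapping discussed above, and the delay value is hypothetical:

        import numpy as np

        freqs = np.linspace(1, 30, 60)                    # Hz, hypothetical
        delay = 0.12                                      # s, hypothetical
        phase = np.angle(np.exp(-2j * np.pi * freqs * delay))  # wrapped phase
        unwrapped = np.unwrap(phase)                      # remove 2*pi jumps
        est_delay = -np.polyfit(2 * np.pi * freqs, unwrapped, 1)[0]
        print(f"recovered delay ~ {est_delay:.3f} s")     # ~0.120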

  20. Changes in erectile dysfunction over time in relation to Framingham cardiovascular risk in the Boston Area Community Health (BACH) Survey.

    PubMed

    Fang, Shona C; Rosen, Raymond C; Vita, Joseph A; Ganz, Peter; Kupelian, Varant

    2015-01-01

    Erectile dysfunction (ED) is associated with cardiovascular disease (CVD); however, the association between change in ED status over time and future underlying CVD risk is unclear. The aim of this study was to investigate the association between change in ED status and Framingham CVD risk, as well as change in Framingham risk. We studied 965 men free of CVD in the Boston Area Community Health (BACH) Survey, a longitudinal cohort study with three assessments. ED was assessed with the five-item International Index of Erectile Function at BACH I (2002-2005) and BACH II (2007-2010) and classified as no ED/transient ED/persistent ED. CVD risk was assessed with the 10-year Framingham CVD risk algorithm at BACH I and BACH III (2010-2012). Linear regression models controlled for baseline age, socio-demographic and lifestyle factors, as well as baseline Framingham risk. Models were also stratified by age (≥/< 50 years). Framingham CVD risk and change in Framingham CVD risk were the main outcome measures. Transient and persistent ED was significantly associated with increased Framingham risk and change in risk over time in univariate and age-adjusted models. In younger men, persistent ED was associated with a Framingham risk that was 1.58 percentage points higher (95% confidence interval [CI]: 0.11, 3.06) and in older men, a Framingham risk that was 2.54 percentage points higher (95% CI: -1.5, 6.59), compared with those without ED. Change in Framingham risk over time was also associated with transient and persistent ED in men <50 years, but not in older men. Data suggest that even after taking into account other CVD risk factors, transient and persistent ED is associated with Framingham CVD risk and a greater increase in Framingham risk over time, particularly in younger men. Findings further support clinical assessment of CVD risk in men presenting with ED, especially those under 50 years. © 2014 International Society for Sexual Medicine.

  1. Coherent Change Detection: Theoretical Description and Experimental Results

    DTIC Science & Technology

    2006-08-01

  2. Stefan problem for a finite liquid phase and its application to laser or electron beam welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasuya, T.; Shimoda, N.

    1997-10-01

    An exact solution of a heat conduction problem with the effect of latent heat of solidification (Stefan problem) is derived. The solution of the one-dimensional Stefan problem for a finite liquid phase initially existing in a semi-infinite body is applied to evaluate temperature fields produced by laser or electron beam welding. The solution of the model has not been available before, as Carslaw and Jaeger [Conduction of Heat in Solids, 2nd ed. (Oxford University Press, New York, 1959)] pointed out. The heat conduction calculations are performed using thermal properties of carbon steel, and the comparison of the Stefan problem with a simplified linear heat conduction model reveals that the solidification rate and cooling curve over 1273 K significantly depend on which model (Stefan or linear heat conduction problem) is applied, and that the type of thermal model applied has little meaning for the cooling curve below 1273 K. Since heat conduction problems with a phase change arise in many important industrial fields, the solution derived in this study is ready to be used not only for welding but also for other industrial applications. © 1997 American Institute of Physics.

  3. Two-point method uncertainty during control and measurement of cylindrical element diameters

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The topic of the article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of control of the linear sizes of parts by the two-point measurement method. The article's task is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty is formed when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types. Methodical uncertainty is formed by measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and medium linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
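
    A short Python sketch of the methodical uncertainty described: for a hypothetical three-lobed (odd-lobed) profile, the two-point diameter is constant and misses the form deviation that the maximum and minimum functional diameters reveal:

        import numpy as np

        theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
        r = 10.0 + 0.05 * np.cos(3 * theta)     # 3-lobed profile, radii in mm
        # Two-point diameter: sum of radii at diametrically opposed angles
        two_point = r + np.roll(r, 1800)        # 1800 samples = half a turn
        print("two-point readings:", two_point.min(), "to", two_point.max())
        print("max/min functional diameters:", 2 * r.max(), 2 * r.min())
        # The two-point method reads a constant 20.00 mm everywhere and so
        # misses the 0.2 mm spread between the functional diameters.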

  4. Local linear regression for function learning: an analysis based on sample discrepancy.

    PubMed

    Cervellera, Cristiano; Macciò, Danilo

    2014-11-01

    Local linear regression models, a kind of nonparametric structure that locally performs a linear estimation of the target function, are analyzed in the context of empirical risk minimization (ERM) for function learning. The analysis is carried out with emphasis on the geometric properties of the available data. In particular, the discrepancy of the observation points used both to build the local regression models and to compute the empirical risk is considered. This makes it possible to treat indifferently the case in which the samples come from a random external source and the one in which the input space can be freely explored. Both consistency of the ERM procedure and approximating capabilities of the estimator are analyzed, proving conditions to ensure convergence. Since the theoretical analysis shows that the estimation improves as the discrepancy of the observation points becomes smaller, low-discrepancy sequences, a family of sampling methods commonly employed for efficient numerical integration, are also analyzed. Simulation results involving two different examples of function learning are provided.
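
    A minimal Python sketch of a local linear estimator with Gaussian kernel weights, evaluated on design points drawn from a low-discrepancy (Halton) sequence; the bandwidth, test function and sample sizes are hypothetical:

        import numpy as np
        from scipy.stats import qmc

        def local_linear_predict(Xq, X, y, bandwidth=0.2):
            """Local linear estimate at each query point: weighted least
            squares on a local affine basis with Gaussian kernel weights."""
            preds = []
            for xq in np.atleast_2d(Xq):
                w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2 * bandwidth ** 2))
                A = np.hstack([np.ones((len(X), 1)), X - xq])
                beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
                preds.append(beta[0])           # intercept = estimate at xq
            return np.array(preds)

        X = qmc.Halton(d=1, seed=0).random(128)   # low-discrepancy design
        y = np.sin(6 * X[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=128)
        print(local_linear_predict([[0.5]], X, y))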

  5. A multiphase non-linear mixed effects model: An application to spirometry after lung transplantation.

    PubMed

    Rajeswaran, Jeevanantham; Blackstone, Eugene H

    2017-02-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients.

  6. Towards a threshold climate for emergency lower respiratory hospital admissions.

    PubMed

    Islam, Muhammad Saiful; Chaussalet, Thierry J; Koizumi, Naoru

    2017-02-01

    Identification of 'cut-points' or thresholds of climate factors would play a crucial role in alerting to risks of climate change and providing guidance to policymakers. This study investigated a 'climate threshold' for emergency hospital admissions of chronic lower respiratory diseases by using a distributed lag non-linear model (DLNM). We analysed a unique longitudinal dataset (10 years, 2000-2009) on emergency hospital admissions, climate, and pollution factors for Greater London. Our study extends existing work on this topic by considering non-linearity and lag effects between climate factors and disease exposure within the DLNM model, with B-splines as the smoothing technique. The final model also considered natural cubic splines of time since exposure and 'day of the week' as confounding factors. The results of the DLNM indicated a significant improvement in model fit compared to a typical GLM model. The final model identified thresholds for several climate factors, including high temperature (≥27°C), low relative humidity (≤40%), high PM10 level (≥70 µg/m^3), low wind speed (≤2 knots) and high rainfall (≥30 mm). Beyond the threshold values, a significantly higher number of emergency admissions due to lower respiratory problems would be expected within the following 2-3 days after the climate shift in Greater London. The approach will be useful to initiate region- and disease-specific climate mitigation plans. It will help identify spatial hot spots and the most sensitive areas and populations due to climate change, and will eventually lead towards a diversified health warning system tailored to specific climate zones and populations. Copyright © 2016 Elsevier Inc. All rights reserved.
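
    A crude Python stand-in for the approach: an unconstrained distributed-lag spline Poisson regression rather than a full DLNM cross-basis, on synthetic data with a hypothetical 27 °C threshold; statsmodels and patsy's bs() are assumed available:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        temp = 15 + 10 * np.sin(np.arange(n) / 58) + rng.normal(0, 2, n)
        lam = np.exp(0.5 + 0.04 * np.clip(temp - 27, 0, None))  # risk above 27 C
        data = pd.DataFrame({"adm": rng.poisson(lam), "t0": temp})
        for lag in (1, 2, 3):
            data[f"t{lag}"] = data["t0"].shift(lag)   # temperature at lag days
        data = data.dropna()

        fml = ("adm ~ bs(t0, df=4) + bs(t1, df=4) "
               "+ bs(t2, df=4) + bs(t3, df=4)")
        fit = smf.glm(fml, data=data, family=sm.families.Poisson()).fit()
        print(fit.summary())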

  7. Structural vascular disease in Africans: Performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: The SABPA study.

    PubMed

    Botha, J; de Ridder, J H; Potgieter, J C; Steyn, H S; Malan, L

    2013-10-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fasting bloods (glucose, high density lipoprotein (HDL) and triglycerides) were obtained in a well-controlled setting. The RPWC male model (LR ROC AUC: 0.71, NN ROC AUC: 0.71) was practically equal to the JSC model (LR ROC AUC: 0.71, NN ROC AUC: 0.69) to predict structural vascular disease. Similarly, the female RPWC model (LR ROC AUC: 0.84, NN ROC AUC: 0.82) and JSC model (LR ROC AUC: 0.82, NN ROC AUC: 0.81) equally predicted CIMT as surrogate marker for structural vascular disease. Odds ratios supported validity, where prediction of CIMT revealed clinical significance, well over 1, for both the JSC and RPWC models in African males and females (OR 3.75-13.98). In conclusion, the proposed RPWC model was substantially validated utilizing linear and non-linear analyses. We therefore propose ethnic-specific WC cut points (African males, ≥90 cm; females, ≥98 cm) to predict a surrogate marker for structural vascular disease. © J. A. Barth Verlag in Georg Thieme Verlag KG Stuttgart · New York.

  8. Racial-ethnic identity in mid-adolescence: content and change as predictors of academic achievement.

    PubMed

    Altschul, Inna; Oyserman, Daphna; Bybee, Deborah

    2006-01-01

    Three aspects of racial-ethnic identity (REI)-feeling connected to one's racial-ethnic group (Connectedness), being aware that others may not value the in-group (Awareness of Racism), and feeling that one's in-group is characterized by academic attainment (Embedded Achievement)-were hypothesized to promote academic achievement. Youth randomly selected from 3 low-income, urban schools (n=98 African American, n=41 Latino) reported on their REI 4 times over 2 school years. Hierarchical linear modeling shows a small increase in REI and the predicted REI-grades relationship. Youth high in both REI Connectedness and Embedded Achievement attained better grade point average (GPA) at each point in time; youth high in REI Connectedness and Awareness of Racism at the beginning of 8th grade attained better GPA through 9th grade. Effects are not moderated by race-ethnicity.

  9. Do Nondomestic Undergraduates Choose a Major Field in Order to Maximize Grade Point Averages?

    ERIC Educational Resources Information Center

    Bergman, Matthew E.; Fass-Holmes, Barry

    2016-01-01

    The authors investigated whether undergraduates attending an American West Coast public university who were not U.S. citizens (nondomestic) maximized their grade point averages (GPA) through their choice of major field. Multiple regression hierarchical linear modeling analyses showed that major field's effect size was small for these…

  10. A numerical study on the limitations of modal Iwan models for impulsive excitations

    NASA Astrophysics Data System (ADS)

    Lacayo, Robert M.; Deaner, Brandon J.; Allen, Matthew S.

    2017-03-01

    Structures with mechanical joints are difficult to model accurately. Even if the natural frequencies of the system remain essentially constant, the damping introduced by the joints is often observed to change dramatically with amplitude. Although models for individual joints have been employed with some success, accurately modeling a structure with many joints remains a significant obstacle. To this end, Segalman proposed a modal Iwan model, which simplifies the analysis by modeling a system with a linear superposition of weakly-nonlinear, uncoupled single degree-of-freedom systems or modes. Given a simulation model with discrete joints, one can identify the model for each mode by selectively exciting each mode one at a time and observing how the transient response decays. However, in the environment of interest several modes may be excited simultaneously, such as in an experiment when an impulse is applied at a discrete point. In this work, the modal Iwan model framework is assessed numerically to understand how well it captures the dynamic response of typical structures with joints when they are excited with impulsive forces applied at point locations. This is done by comparing the effective natural frequency and modal damping of the uncoupled modal models with those of truth models that include nonlinear modal coupling. These concepts are explored for two structures, a simple spring-mass system and a finite element model of a beam, both of which contain physical Iwan elements to model joint nonlinearity. The results show that modal Iwan models can effectively capture the variations in frequency and damping with amplitude, which, for damping, can increase by as much as two orders of magnitude in the microslip regime. However, even in the microslip regime the accuracy of a modal Iwan model is found to depend on whether the mode in question is dominant in the response; in some cases the effective damping that the uncoupled model predicts is found to be in error by tens of percent. Nonetheless, the modal model captures the response qualitatively and is still far superior to a linear model.

  11. Artificial equilibrium points in binary asteroid systems with continuous low-thrust

    NASA Astrophysics Data System (ADS)

    Bu, Shichao; Li, Shuang; Yang, Hongwei

    2017-08-01

    The positions and dynamical characteristics of artificial equilibrium points (AEPs) in the vicinity of a binary asteroid with continuous low-thrust are studied. The restricted ellipsoid-ellipsoid model is employed for the binary asteroid system. The positions of the AEPs are obtained from this model. It is found that the set of the point L1 or L2 forms the shape of an ellipsoid, while the set of the point L3 forms a shape like a "banana". The effect of the continuous low-thrust on the feasible region of motion is analyzed by means of zero-velocity curves. With the low-thrust, regions that would otherwise be unreachable can become reachable. The linearized equations of motion are derived for stability analysis. Based on the characteristic equation of the linearized equations, the stability conditions are derived. The stable regions of the AEPs are investigated by a parametric analysis. The effect of the mass ratio and ellipsoid parameters on the stable region is also discussed. The results show that the influence of the mass ratio on the stable regions is more significant than that of the ellipsoid parameters.

  12. The Ponzano-Regge Model and Parametric Representation

    NASA Astrophysics Data System (ADS)

    Li, Dan

    2014-04-01

    We give a parametric representation of the effective noncommutative field theory derived from a κ-deformation of the Ponzano-Regge model and define a generalized Kirchhoff polynomial with κ-correction terms, obtained in a κ-linear approximation. We then consider the corresponding graph hypersurfaces and the question of how the presence of the correction term affects their motivic nature. We look in particular at the tetrahedron graph, which is the basic case of relevance to quantum gravity. With the help of computer calculations, we verify that the number of points over finite fields of the corresponding hypersurface does not fit polynomials with integer coefficients, hence the hypersurface of the tetrahedron is not polynomially countable. This shows that the correction term can change significantly the motivic properties of the hypersurfaces, with respect to the classical case.

  13. Impurity doping effects on the orbital thermodynamic properties of hydrogenated graphene, graphane, in Harrison model

    NASA Astrophysics Data System (ADS)

    Yarmohammadi, Mohsen

    2016-12-01

    Using the Harrison model and Green's function technique, impurity doping effects on the orbital density of states (DOS), electronic heat capacity (EHC) and magnetic susceptibility (MS) of a monolayer hydrogenated graphene, chair-like graphane, are investigated. The effect of scattering between electrons and dilute charged impurities is discussed in terms of the self-consistent Born approximation. Our results show that graphane is a semiconductor and that its band gap decreases with impurity doping. Remarkably, the EHC rises almost linearly to the Schottky anomaly and does not change at low temperatures in the presence of impurities. Generally, EHC and MS increase with impurity doping. Surprisingly, impurity doping only affects the salient behavior of the py orbital contribution of the carbon atoms, due to symmetry breaking.

  14. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    PubMed

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  15. Can fractal methods applied to video tracking detect the effects of deltamethrin pesticide or mercury on the locomotion behavior of shrimps?

    PubMed

    Tenorio, Bruno Mendes; da Silva Filho, Eurípedes Alves; Neiva, Gentileza Santos Martins; da Silva, Valdemiro Amaro; Tenorio, Fernanda das Chagas Angelo Mendes; da Silva, Themis de Jesus; Silva, Emerson Carlos Soares E; Nogueira, Romildo de Albuquerque

    2017-08-01

    Shrimps can accumulate environmental toxicants and suffer behavioral changes. However, methods to quantitatively detect changes in the behavior of these shrimps are still needed. The present study aims to verify whether mathematical and fractal methods applied to video tracking can adequately describe changes in the locomotion behavior of shrimps exposed to low concentrations of toxic chemicals, such as 0.15 µg L^-1 deltamethrin pesticide or 10 µg L^-1 mercuric chloride. Results showed no change after 1 min, 4, 24, and 48 h of treatment. However, after 72 and 96 h of treatment, both the linear methods describing the track length, mean speed, and mean distance from the current to the previous track point, as well as the non-linear methods of fractal dimension (box counting or information entropy) and multifractal analysis, were able to detect changes in the locomotion behavior of shrimps exposed to deltamethrin. Analysis of angular parameters of the track point vectors and lacunarity were not sensitive to those changes. None of the methods showed adverse effects of mercury exposure. These mathematical and fractal methods, applicable in software, represent low-cost, useful tools in the toxicological analysis of shrimps for food and water quality and the biomonitoring of ecosystems. Copyright © 2017 Elsevier Inc. All rights reserved.
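
    A compact Python sketch of one of the cited non-linear measures, a box-counting estimate of the fractal dimension of a 2-D track; the grid sizes and the demo random-walk track are arbitrary choices:

        import numpy as np

        def box_counting_dimension(points, sizes=(0.5, 0.25, 0.125, 0.0625)):
            """Slope of log N(s) vs log(1/s), with the track scaled into
            the unit square and covered by boxes of side s."""
            pts = np.asarray(points, dtype=float)
            pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)
            counts = [len(set(map(tuple, np.floor(pts / s).astype(int))))
                      for s in sizes]
            slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(0)
        track = np.cumsum(rng.normal(size=(5000, 2)), axis=0)  # demo track
        print(box_counting_dimension(track))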

  16. To Tip or Not to Tip: The Case of the Congo Basin Rainforest Realm

    NASA Astrophysics Data System (ADS)

    Pietsch, S.; Bednar, J. E.; Fath, B. D.; Winter, P. A.

    2017-12-01

    The future response of the Congo basin rainforest, the second largest tropical carbon reservoir, to climate change is still under debate. Different climate projections exist, stating increases and decreases in rainfall and different changes in rainfall patterns. Within this study we assess the full range of climate change possibilities to define the climatic thresholds of Congo basin rainforest stability and the limiting conditions for rainforest persistence. We use field data from 199 research plots in the Western Congo basin to calibrate and validate a complex BioGeoChemistry model (BGC-MAN) and assess model performance against an array of possible future climates. Next, we analyze the reasons for the occurrence of tipping points and their spatial and temporal probability of occurrence, present effects of hysteresis, and derive probabilistic spatial-temporal resilience landscapes for the region. Additionally, we analyze attractors of forest growth dynamics, assess common linear measures for early warning signals of sudden shifts in system dynamics for their robustness in the context of the Congo basin case, and introduce the correlation integral as a nonlinear measure of risk assessment.

  17. Healthy Aging Delays Scalp EEG Sensitivity to Noise in a Face Discrimination Task

    PubMed Central

    Rousselet, Guillaume A.; Gaspar, Carl M.; Pernet, Cyril R.; Husk, Jesse S.; Bennett, Patrick J.; Sekuler, Allison B.

    2010-01-01

    We used a single-trial ERP approach to quantify age-related changes in the time-course of noise sensitivity. A total of 62 healthy adults, aged between 19 and 98, performed a non-speeded discrimination task between two faces. Stimulus information was controlled by parametrically manipulating the phase spectrum of these faces. Behavioral 75% correct thresholds increased with age. This result may be explained by lower signal-to-noise ratios in older brains. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly delayed noise sensitivity in older observers. This age effect is reliable, as demonstrated by test-retest in 24 subjects, and started about 120 ms after stimulus onset. Our analyses also suggest a qualitative change from a young to an older pattern of brain activity at around 47 ± 4 years old. PMID:21833194
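
    A minimal Python sketch of the single-trial approach: regress amplitude on stimulus information at every time point and track the R2 time-course; the data, effect window and effect sizes are synthetic and hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_times = 200, 300
        coherence = rng.uniform(0, 1, n_trials)      # stimulus information
        eeg = rng.normal(0, 1, (n_trials, n_times))
        eeg[:, 120:180] += 3 * np.outer(coherence, np.hanning(60))

        X = np.column_stack([np.ones(n_trials), coherence])
        beta, res, *_ = np.linalg.lstsq(X, eeg, rcond=None)
        r2 = 1 - res / ((eeg - eeg.mean(axis=0)) ** 2).sum(axis=0)
        print("peak sensitivity at sample", int(np.argmax(r2)))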

  18. Determining vehicle operating speed and lateral position along horizontal curves using linear mixed-effects models.

    PubMed

    Fitzsimmons, Eric J; Kvam, Vanessa; Souleyrette, Reginald R; Nambisan, Shashi S; Bonett, Douglas G

    2013-01-01

    Despite recent improvements in highway safety in the United States, serious crashes on curves remain a significant problem. To assist in better understanding causal factors leading to this problem, this article presents and demonstrates a methodology for collection and analysis of vehicle trajectory and speed data for rural and urban curves using Z-configured road tubes. For a large number of vehicle observations at 2 horizontal curves located in Dexter and Ames, Iowa, the article develops vehicle speed and lateral position prediction models for multiple points along these curves. Linear mixed-effects models were used to predict vehicle lateral position and speed along the curves as explained by operational, vehicle, and environmental variables. Behavior was visually represented for an identified subset of "risky" drivers. Linear mixed-effect regression models provided the means to predict vehicle speed and lateral position while taking into account repeated observations of the same vehicle along horizontal curves. Speed and lateral position at point of entry were observed to influence trajectory and speed profiles. Rural horizontal curve site models are presented that indicate that the following variables were significant and influenced both vehicle speed and lateral position: time of day, direction of travel (inside or outside lane), and type of vehicle.

  19. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
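
    A minimal Python sketch of the baseline being critiqued: a straight-line fit near short circuit whose covariance yields a fit-only uncertainty for Isc; the I-V points are hypothetical, and as the abstract notes this quantifies fit uncertainty only, missing model discrepancy:

        import numpy as np

        V = np.array([-0.02, 0.00, 0.02, 0.04, 0.06])       # volts
        I = np.array([8.012, 8.004, 7.998, 7.989, 7.981])   # amps

        coef, cov = np.polyfit(V, I, deg=1, cov=True)
        isc = np.polyval(coef, 0.0)           # intercept at V = 0 is Isc
        isc_sigma = np.sqrt(cov[1, 1])        # 1-sigma intercept uncertainty
        print(f"Isc = {isc:.4f} A +/- {isc_sigma:.4f} A (fit uncertainty only)")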

  20. Dynamics of f(R) gravity models and asymmetry of time

    NASA Astrophysics Data System (ADS)

    Verma, Murli Manohar; Yadav, Bal Krishna

    We solve the field equations of modified gravity for an f(R) model in the metric formalism. Further, we obtain the fixed points of the dynamical system in a phase-space analysis of f(R) models, both with and without the effects of radiation. The stability of these points is studied against perturbations in a smooth spatial background by applying conditions on the eigenvalues of the matrix obtained in the linearized first-order differential equations. Following this, these fixed points are used for analyzing the dynamics of the system during the radiation-, matter- and acceleration-dominated phases of the universe. Certain linear and quadratic forms of f(R) are determined from geometrical and physical considerations, and the behavior of the scale factor is found for those forms. Further, we also determine the Hubble parameter H(t), the Ricci scalar R and the scale factor a(t) for these cosmic phases. We show the emergence of an asymmetry of time from the dynamics of the scalar field exclusively owing to the f(R) gravity in the Einstein frame, which may lead to an arrow of time at a classical level.
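
    For context, the metric-formalism field equations referred to above are the standard f(R) equations (a textbook result, not quoted from this record):

        f'(R)\,R_{\mu\nu} - \tfrac{1}{2}\,f(R)\,g_{\mu\nu} + \left(g_{\mu\nu}\,\Box - \nabla_\mu \nabla_\nu\right) f'(R) = \kappa^2\,T_{\mu\nu},

    where f'(R) = df/dR; setting f(R) = R recovers general relativity.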

  1. Changes in Self-reported Insurance Coverage, Access to Care, and Health Under the Affordable Care Act.

    PubMed

    Sommers, Benjamin D; Gunja, Munira Z; Finegold, Kenneth; Musco, Thomas

    2015-07-28

    The Affordable Care Act (ACA) completed its second open enrollment period in February 2015. Assessing the law's effects has major policy implications. To estimate national changes in self-reported coverage, access to care, and health during the ACA's first 2 open enrollment periods and to assess differences between low-income adults in states that expanded Medicaid and in states that did not expand Medicaid. Analysis of the 2012-2015 Gallup-Healthways Well-Being Index, a daily national telephone survey. Using multivariable regression to adjust for pre-ACA trends and sociodemographics, we examined changes in outcomes for the nonelderly US adult population aged 18 through 64 years (n = 507,055) since the first open enrollment period began in October 2013. Linear regressions were used to model each outcome as a function of a linear monthly time trend and quarterly indicators. Then, pre-ACA (January 2012-September 2013) and post-ACA (January 2014-March 2015) changes for adults with incomes below 138% of the poverty level in Medicaid expansion states (n = 48,905 among 28 states and Washington, DC) vs nonexpansion states (n = 37,283 among 22 states) were compared using a differences-in-differences approach. Beginning of the ACA's first open enrollment period (October 2013). Self-reported rates of being uninsured, lacking a personal physician, lacking easy access to medicine, inability to afford needed care, overall health status, and health-related activity limitations. Among the 507,055 adults in this survey, pre-ACA trends were significantly worsening for all outcomes. Compared with the pre-ACA trends, by the first quarter of 2015, the adjusted proportions who were uninsured decreased by 7.9 percentage points (95% CI, -9.1 to -6.7); who lacked a personal physician, -3.5 percentage points (95% CI, -4.8 to -2.2); who lacked easy access to medicine, -2.4 percentage points (95% CI, -3.3 to -1.5); who were unable to afford care, -5.5 percentage points (95% CI, -6.7 to -4.2); who reported fair/poor health, -3.4 percentage points (95% CI, -4.6 to -2.2); and the percentage of days with activities limited by health, -1.7 percentage points (95% CI, -2.4 to -0.9). Coverage changes were largest among minorities; for example, the decrease in the uninsured rate was larger among Latino adults (-11.9 percentage points [95% CI, -15.3 to -8.5]) than white adults (-6.1 percentage points [95% CI, -7.3 to -4.8]). Medicaid expansion was associated with significant reductions among low-income adults in the uninsured rate (differences-in-differences estimate, -5.2 percentage points [95% CI, -7.9 to -2.6]), lacking a personal physician (-1.8 percentage points [95% CI, -3.4 to -0.3]), and difficulty accessing medicine (-2.2 percentage points [95% CI, -3.8 to -0.7]). The ACA's first 2 open enrollment periods were associated with significantly improved trends in self-reported coverage, access to primary care and medications, affordability, and health. Low-income adults in states that expanded Medicaid reported significant gains in insurance coverage and access compared with adults in states that did not expand Medicaid.
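
    A toy Python sketch of the differences-in-differences logic with a linear probability model; the data and effect sizes are synthetic and hypothetical (this is not the BACH/Gallup analysis), and statsmodels is assumed available:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 4000
        data = pd.DataFrame({"expansion": rng.integers(0, 2, n),
                             "post": rng.integers(0, 2, n)})
        p = (0.30 - 0.03 * data.post - 0.02 * data.expansion
             - 0.05 * data.post * data.expansion)
        data["uninsured"] = rng.binomial(1, p)

        # The post:expansion interaction is the differences-in-differences term
        did = smf.ols("uninsured ~ post * expansion", data).fit()
        print(did.params["post:expansion"])   # recovers roughly -0.05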

  2. A frost formation model and its validation under various experimental conditions

    NASA Technical Reports Server (NTRS)

    Dietenberger, M. A.

    1982-01-01

    A numerical model that was used to calculate the frost properties for all regimes of frost growth is described. In the first regime of frost growth, the initial frost density and thickness were modeled from the theories of crystal growth. The 'frost point' temperature was modeled as a linear interpolation between the dew point temperature and the fog point temperature, based upon the nucleating capability of the particular condensing surface. For the second regime of frost growth, the diffusion model was adopted with the following enhancements: the generalized correlation of the water frost thermal conductivity was applied to practically all water frost layers, taking care to ensure that the calculated heat and mass transfer coefficients agreed with experimental measurements of the same coefficients.
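
    A one-function Python sketch of the interpolation described, with a hypothetical weight w parameterizing the surface's nucleating capability (the original model's exact parameterization is not given in this record):

        def frost_point(t_dew, t_fog, w):
            """Interpolated 'frost point' temperature; w in [0, 1] is a
            hypothetical nucleating-capability weight (w = 0 gives the
            dew point, w = 1 the fog point)."""
            return (1.0 - w) * t_dew + w * t_fog

        print(frost_point(t_dew=-5.0, t_fog=-9.0, w=0.3))   # -6.2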

  3. Performance indicators related to points scoring and winning in international rugby sevens.

    PubMed

    Higham, Dean G; Hopkins, Will G; Pyne, David B; Anson, Judith M

    2014-05-01

    Identification of performance indicators related to scoring points and winning is needed to inform tactical approaches to international rugby sevens competition. The aim of this study was to characterize team performance indicators in international rugby sevens and quantify their relationship with a team's points scored and probability of winning. Performance indicators of each team during 196 matches of the 2011/2012 International Rugby Board Sevens World Series were modeled for their linear relationships with points scored and likelihood of winning within (changes in team values from match to match) and between (differences between team values averaged over all matches) teams. Relationships were evaluated as the change and difference in points and probability of winning associated with a two within- and between-team standard deviations increase in performance indicator values. Inferences about relationships were assessed using a smallest meaningful difference of one point and a 10% probability of a team changing the outcome of a close match. All indicators exhibited high within-team match-to-match variability (intraclass correlation coefficients ranged from 0.00 to 0.23). Excluding indicators representing points-scoring actions or events occurring on average less than once per match, 13 of 17 indicators had substantial clear within-team relationships with points scored and/or likelihood of victory. Relationships between teams were generally similar in magnitude but unclear. Tactics that increase points scoring and likelihood of winning should be based on greater ball possession, fewer rucks, mauls, turnovers, penalties and free kicks, and limited passing. Key points: Successful international rugby sevens teams tend to maintain ball possession; more frequently avoid taking the ball into contact; concede fewer turnovers, penalties and free kicks; retain possession in scrums, rucks and mauls; and limit passing the ball. Selected performance indicators may be used to evaluate team performances and plan more effective tactical approaches to competition. There is greater match-to-match variability in performance indicator values within than between international rugby sevens teams. The priorities for a rugby sevens team's technical and tactical preparation should reflect the magnitudes of the relationships between performance indicators, points scoring and the likelihood of winning.

  5. Performance Indicators Related to Points Scoring and Winning in International Rugby Sevens

    PubMed Central

    Higham, Dean G.; Hopkins, Will G.; Pyne, David B.; Anson, Judith M.

    2014-01-01

    Identification of performance indicators related to scoring points and winning is needed to inform tactical approaches to international rugby sevens competition. The aim of this study was to characterize team performance indicators in international rugby sevens and quantify their relationship with a team’s points scored and probability of winning. Performance indicators of each team during 196 matches of the 2011/2012 International Rugby Board Sevens World Series were modeled for their linear relationships with points scored and likelihood of winning within (changes in team values from match to match) and between (differences between team values averaged over all matches) teams. Relationships were evaluated as the change and difference in points and probability of winning associated with a two within- and between-team standard deviations increase in performance indicator values. Inferences about relationships were assessed using a smallest meaningful difference of one point and a 10% probability of a team changing the outcome of a close match. All indicators exhibited high within-team match-to-match variability (intraclass correlation coefficients ranged from 0.00 to 0.23). Excluding indicators representing points-scoring actions or events occurring on average less than once per match, 13 of 17 indicators had substantial clear within-team relationships with points scored and/or likelihood of victory. Relationships between teams were generally similar in magnitude but unclear. Tactics that increase points scoring and likelihood of winning should be based on greater ball possession, fewer rucks, mauls, turnovers, penalties and free kicks, and limited passing. Key points: Successful international rugby sevens teams tend to maintain ball possession; more frequently avoid taking the ball into contact; concede fewer turnovers, penalties and free kicks; retain possession in scrums, rucks and mauls; and limit passing the ball. Selected performance indicators may be used to evaluate team performances and plan more effective tactical approaches to competition. There is greater match-to-match variability in performance indicator values within than between international rugby sevens teams. The priorities for a rugby sevens team’s technical and tactical preparation should reflect the magnitudes of the relationships between performance indicators, points scoring and the likelihood of winning. PMID:24790490

  6. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is the best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky; it is usually settled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The computational problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach; in particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix, so Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method, and showed that if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. The matrix A in the ordinary kriging system, however, while symmetric, is not positive definite, so their approach is not applicable to ordinary kriging. We therefore use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems; this approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
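
    A compact sketch of the tapered ordinary kriging system follows (Python/SciPy). SciPy ships no SYMMLQ, so MINRES, another Krylov method for symmetric indefinite systems, stands in for it here; the covariance range, taper radius, and data are invented for illustration.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import minres

        rng = np.random.default_rng(1)
        pts = rng.random((500, 2))                     # scattered sample locations
        z = np.sin(4 * pts[:, 0]) + rng.normal(0, 0.05, 500)

        def cov(h, range_=0.3):                        # exponential covariance
            return np.exp(-h / range_)

        def taper(h, theta=0.15):                      # Wendland-type compact taper
            r = np.clip(h / theta, 0, 1)
            return (1 - r) ** 4 * (4 * r + 1)

        h = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        C = cov(h) * taper(h)                          # tapered covariance (mostly zeros)
        n = len(pts)
        # Ordinary kriging system [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]: symmetric
        # but indefinite, hence the symmetric indefinite iterative solver.
        A = sp.bmat([[sp.csr_matrix(C), np.ones((n, 1))],
                     [np.ones((1, n)), None]], format="csr")
        q = np.array([0.5, 0.5])                       # query point
        h0 = np.linalg.norm(pts - q, axis=1)
        b = np.append(cov(h0) * taper(h0), 1.0)
        sol, info = minres(A, b)
        print("kriging estimate:", sol[:n] @ z, "| solver flag:", info)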

  7. Frame Shift/warp Compensation for the ARID Robot System

    NASA Technical Reports Server (NTRS)

    Latino, Carl D.

    1991-01-01

    The Automatic Radiator Inspection Device (ARID) is a system aimed at automating the tedious task of inspecting orbiter radiator panels. The ARID must be able to aim a camera accurately at the desired inspection points, of which there are on the order of 13,000. The ideal inspection points are known; however, the panel may be relocated due to inaccurate parking and warpage. A method of determining the mathematical description of a translated as well as a warped surface by accurate measurement of only a few points on this surface is developed here. The method uses a linear warp model whose effect is superimposed on the rigid body translation. Due to the angles involved, small angle approximations are possible, which greatly reduces the computational complexity. Given an accurate linear warp model, all the desired translation and warp parameters can be obtained from the ideal locations of four fiducial points and the corresponding measurements of these points on the actual radiator surface. The method uses three of the fiducials to define a plane and the fourth to define the warp. Given this information, it is possible to determine a transformation that will enable the ARID system to translate any desired inspection point on the ideal surface to its corresponding value on the actual surface.
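
    Since four ideal/measured correspondences exactly determine a translation plus a 3x3 linear map, the transformation can be recovered with a single small least-squares solve. The sketch below (Python/NumPy, invented coordinates) illustrates that algebra only; it is not the ARID implementation, which additionally exploits small-angle approximations.

        import numpy as np

        # Hypothetical fiducial locations: ideal vs measured (translated + warped)
        ideal = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.0]])
        measured = np.array([[0.01, 0.0, 0.002], [1.01, 0.0, 0.004],
                             [0.01, 1.0, 0.003], [1.01, 1.0, 0.015]])

        # Exactly determined fit: measured = ideal @ M + t (12 equations, 12 unknowns)
        X = np.hstack([ideal, np.ones((4, 1))])
        params, *_ = np.linalg.lstsq(X, measured, rcond=None)
        M, t = params[:3], params[3]

        def map_point(p):
            """Translate an ideal inspection point onto the warped surface."""
            return p @ M + t

        print(map_point(np.array([0.5, 0.5, 0.0])))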

  8. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638
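
    The core idea, separate change points for the mean and the variance of cycle length, can be illustrated without the hierarchical or Bayesian machinery. The sketch below (Python) fits a single simulated series by scanning candidate change point pairs with a Gaussian log-likelihood; all values are invented and no missing-data handling is attempted.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(120)                                 # cycle index
        y = 28 + 0.02 * t + np.where(t > 80, 0.3 * (t - 80), 0)   # mean shift at 80
        y = y + rng.normal(0, np.where(t > 90, 3.0, 0.8))         # variance shift at 90

        def loglik(cp_mean, cp_var):
            # broken-stick mean with a knot at cp_mean
            X = np.column_stack([np.ones_like(t), t, np.clip(t - cp_mean, 0, None)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            s1 = resid[t <= cp_var].std() + 1e-9           # early-segment sd
            s2 = resid[t > cp_var].std() + 1e-9            # late-segment sd
            sd = np.where(t <= cp_var, s1, s2)
            return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (resid / sd) ** 2)

        grid = [(cm, cv) for cm in range(20, 110, 5) for cv in range(20, 110, 5)]
        print(max(grid, key=lambda g: loglik(*g)))         # expected near (80, 90)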

  9. Lung function in type 2 diabetes: the Normative Aging Study.

    PubMed

    Litonjua, Augusto A; Lazarus, Ross; Sparrow, David; Demolles, Debbie; Weiss, Scott T

    2005-12-01

    Cross-sectional studies have noted that subjects with diabetes have lower lung function than non-diabetic subjects. We conducted this analysis to determine whether diabetic subjects have different rates of lung function change compared with non-diabetic subjects. We conducted a nested case-control analysis in 352 men who developed diabetes and 352 non-diabetic subjects in a longitudinal observational study of aging in men. We assessed lung function among cases and controls at three time points: Time0, prior to meeting the definition of diabetes; Time1, the point when the definition of diabetes was met; and Time2, the most recent follow-up exam. Cases had lower forced expiratory volume in 1s (FEV1) and forced vital capacity (FVC) at all time points, even with adjustment for age, height, weight, and smoking. In multiple linear regression models adjusting for relevant covariates, there were no differences in rates of FEV1 or FVC change over time between cases and controls. Men who are predisposed to develop diabetes have decreased lung function many years prior to the diagnosis, compared with men who do not develop diabetes. This decrement in lung function remains after the development of diabetes. We postulate that mechanisms involved in the insulin resistant state contribute to the diminished lung function observed in our subjects.

  10. Nonlinear Impedance Analysis of La0.6Sr0.4Co0.2Fe0.8O3-δ Thin Film Oxygen Electrodes

    DOE PAGES

    Geary, Tim C.; Lee, Dongkyu; Shao-Horn, Yang; ...

    2016-07-23

    Here, linear and nonlinear electrochemical impedance spectroscopy (EIS, NLEIS) were used to study 20 nm thin film La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF-6428) electrodes at 600°C in oxygen environments. LSCF films were epitaxially deposited on single crystal yttria-stabilized zirconia (YSZ) with a 5 nm gadolinium-doped ceria (GDC) protective interlayer. Impedance measurements reveal an oxygen storage capacity similar to independent thermogravimetry measurements on semi-porous pellets. However, the impedance data fail to obey a homogeneous semiconductor point-defect model. Two consistent scenarios were considered: a homogeneous film with non-ideal thermodynamics (constrained by thermogravimetry measurements), or an inhomogeneous film (constrained by a semiconductor point-defect model with a Sr maldistribution). The latter interpretation suggests that gradients in Sr composition would have to extend beyond the space-charge region of the gas-electrode interface. While there is growing evidence supporting an equilibrium Sr segregation at the LSCF surface monolayer, a long-range, non-equilibrium Sr stratification caused by electrode processing conditions offers a possible explanation for the large volume of highly reducible LSCF. Additionally, all thin films exhibited fluctuations in both linear and nonlinear impedance over the hundred-hour measurement period. This behavior is inconsistent with changes solely in the surface rate coefficient and is possibly caused by variations in the surface thermodynamics over exposure time.

  11. The Use of Crow-AMSAA Plots to Assess Mishap Trends

    NASA Technical Reports Server (NTRS)

    Dawson, Jeffrey W.

    2011-01-01

    Crow-AMSAA (CA) plots are used to model reliability growth. Use of CA plots has expanded into other areas, such as tracking events of interest to management, maintenance problems, and safety mishaps. Safety mishaps can often be successfully modeled using a Poisson probability distribution. CA plots show a Poisson process in log-log space. If the safety mishaps follow a stable homogeneous Poisson process, a linear fit to the points in a CA plot will have a slope of one. Slopes greater than one indicate a nonhomogeneous Poisson process with increasing occurrence; slopes less than one indicate a nonhomogeneous Poisson process with decreasing occurrence. Changes in slope, known as "cusps," indicate a change in process, which could be an improvement or a degradation. After presenting the CA conceptual framework, examples are given of trending slips, trips and falls, and ergonomic incidents at NASA (from Agency-level data). Crow-AMSAA plotting is a robust tool for trending safety mishaps that can provide insight into safety performance over time.
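
    The slope check at the heart of a CA plot is a one-line regression in log-log space, as the sketch below (Python, made-up mishap times) shows.

        import numpy as np

        # Cumulative mishap count N(t) at each event time t (days); invented data
        event_times = np.array([12, 30, 41, 55, 61, 70, 74, 78, 81, 83.0])
        n_cum = np.arange(1, len(event_times) + 1)

        # Crow-AMSAA: log N(t) vs log t; slope 1 = homogeneous Poisson process
        slope, intercept = np.polyfit(np.log(event_times), np.log(n_cum), 1)
        print(f"slope = {slope:.2f} ->",
              "increasing occurrence" if slope > 1 else "stable or decreasing")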

  12. Loading Deformation Characteristic Simulation Study of Engineering Vehicle Refurbished Tire

    NASA Astrophysics Data System (ADS)

    Qiang, Wang; Xiaojie, Qi; Zhao, Yang; Yunlong, Wang; Guotian, Wang; Degang, Lv

    2018-05-01

    The paper constructs computer geometry, mechanics, contact, and finite element analysis models of an engineering vehicle refurbished tire, and presents a simulation study of its load-deformation behavior, compared with a new tire of the same type, under static and ground-contact working conditions. The analysis shows that the radial and lateral deformation of the refurbished tire follows trends close to those of the new tire, with deformation values slightly smaller than those of the new tire. At a given inflation pressure, radial deformation increases linearly with load; lateral deformation changes linearly with load when the inflation pressure is low, but non-linearly when the inflation pressure is very high.

  13. Toward efficient biomechanical-based deformable image registration of lungs for image-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy

    2011-08-01

    Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential for improving the efficiency of deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models have been developed using image data of 14 lung cancer patients. Each model consists of two lungs, the tumor and the external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effect of element type, finite deformation and elasticity on the accuracy and computing time is investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between linear and nonlinear analysis methods were calculated for all lung nodes, and a maximum value of 3.6 mm was found in a node near the entrance of the bronchial tree into the lungs. The 95th percentile of displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min for the quadratic element, nonlinear geometry model to 3.4 min for the linear element, linear geometry model. Therefore, using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of the lungs for image-guided radiotherapy applications.

  14. Composite solvers for linear saddle point problems arising from the incompressible Stokes equations with highly heterogeneous viscosity structure

    NASA Astrophysics Data System (ADS)

    Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.

    2014-12-01

    Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem, which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator; these involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs of preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations, which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with 'heavy' smoothers, which require solutions of large per-node sub-problems well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
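
    One of the three strategies above, preconditioning the full operator via an approximate Schur complement block factorization, can be demonstrated at toy scale. The sketch below (Python/SciPy) applies a block-diagonal preconditioner inside GMRES; the random sparse matrices are stand-ins for an assembled Stokes operator, not a PETSc workflow.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        rng = np.random.default_rng(3)
        nu, npr = 60, 20                                  # velocity / pressure dofs
        A = sp.random(nu, nu, 0.1, random_state=3)
        A = A @ A.T + 10 * sp.eye(nu)                     # SPD "viscous" block
        B = sp.random(npr, nu, 0.2, random_state=4) + 0.1 * sp.eye(npr, nu)
        K = sp.bmat([[A, B.T], [B, None]], format="csc")  # saddle point operator
        b = rng.normal(size=nu + npr)

        Ainv = spla.factorized(A.tocsc())                 # direct sub-solve for A
        AinvBT = spla.spsolve(A.tocsc(), B.T.tocsc())     # sparse multi-rhs solve
        S = -(B @ AinvBT).toarray()                       # Schur complement -B A^-1 B^T
        Sinv = np.linalg.inv(S)

        def apply_prec(r):
            # block-diagonal preconditioner diag(A, S)^-1
            return np.concatenate([Ainv(r[:nu]), Sinv @ r[nu:]])

        M = spla.LinearOperator(K.shape, matvec=apply_prec)
        x, info = spla.gmres(K, b, M=M)
        print("converged" if info == 0 else f"gmres flag {info}")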

  15. Parameterized Linear Longitudinal Airship Model

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics.

  16. Fiber-optic epoxy composite cure sensor. I. Dependence of refractive index of an autocatalytic reaction epoxy system at 850 nm on temperature and extent of cure

    NASA Astrophysics Data System (ADS)

    Lam, Kai-Yuen; Afromowitz, Martin A.

    1995-09-01

    We discuss the behavior of the refractive index of a typical epoxy-aromatic diamine system. Near 850 nm the index of refraction is found to be largely controlled by the density of the epoxy. Models are derived to describe its dependence on temperature and extent of cure. Within the range of temperatures studied, the refractive index decreases linearly with increasing temperature. In addition, as the epoxy is cured, the refractive index increases linearly with conversion up to the gel point. From then on, shrinkage in the volume of the epoxy is restricted by local viscosity. Therefore the linear relationship between the refractive index and the extent of cure does not hold beyond the gel point.
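
    The two linear dependencies, and their breakdown past gelation, suggest a simple piecewise model. The sketch below (Python) freezes the conversion term at the gel point; the coefficients and the frozen-past-gel behavior are illustrative assumptions, not the paper's fitted values.

        def refractive_index(temp_c, conversion, gel_point=0.6,
                             n0=1.57, dn_dt=-3.0e-4, dn_dalpha=0.03):
            """n(T, alpha): linear in temperature; linear in cure up to gelation.

            All coefficient values are invented for illustration.
            """
            alpha_eff = min(conversion, gel_point)  # shrinkage restricted after gel
            return n0 + dn_dt * temp_c + dn_dalpha * alpha_eff

        # Same temperature: index rises with cure before the gel point, then flattens
        print(refractive_index(80.0, 0.5), refractive_index(80.0, 0.9))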

  17. Analysis of b quark pair production signal from neutral 2HDM Higgs bosons at future linear colliders

    NASA Astrophysics Data System (ADS)

    Hashemi, Majid; MahdaviKhorrami, Mostafa

    2018-06-01

    In this paper, b quark pair production events are analyzed as a source of neutral Higgs bosons of the two-Higgs-doublet model type I at linear colliders. The production mechanism is e+e- → Z(*) → HA → bb̄bb̄, assuming a fully hadronic final state. The aim of the analysis is to identify both the CP-even and CP-odd Higgs bosons in different benchmark points accommodating moderate boson masses. Due to the pair production of Higgs bosons, the analysis is most suitable for a linear collider operating at √s = 1 TeV. Results show that in the selected benchmark points, signal peaks are observable in the b-jet pair invariant mass distributions at an integrated luminosity of 500 fb⁻¹.

  18. Escaping the snare of chronological growth and launching a free curve alternative: general deviance as latent growth model.

    PubMed

    Wood, Phillip Karl; Jackson, Kristina M

    2013-08-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating "protective" or "launch" factors or as "developmental snares." These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of "general deviance" over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the "general deviance" model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of "general deviance" can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the "snares" alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control.

  19. Escaping the snare of chronological growth and launching a free curve alternative: General deviance as latent growth model

    PubMed Central

    WOOD, PHILLIP KARL; JACKSON, KRISTINA M.

    2014-01-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating “protective” or “launch” factors or as “developmental snares.” These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of “general deviance” over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the “general deviance” model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of “general deviance” can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the “snares” alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control. PMID:23880389

  20. Changes in Body Weight and Health-Related Quality of Life: 2 Cohorts of US Women

    PubMed Central

    Pan, An; Kawachi, Ichiro; Luo, Nan; Manson, JoAnn E.; Willett, Walter C.; Hu, Frank B.; Okereke, Olivia I.

    2014-01-01

    Studies have shown that body weight is a determinant of health-related quality of life (HRQoL). However, few studies have examined long-term weight change with changes in HRQoL. We followed 52,682 women aged 46–71 years in the Nurses' Health Study (in 1992–2000) and 52,587 women aged 29–46 years in the Nurses’ Health Study II (in 1993–2001). Body weight was self-reported, HRQoL was measured by the Medical Outcomes Study's 36-Item Short Form Health Survey, and both were updated every 4 years. The relationship between changes in weight and HRQoL scores was evaluated at 4-year intervals by using a generalized linear regression model with multivariate adjustment for baseline age, ethnicity, menopausal status, and changes in comorbidities and lifestyle factors. Weight gain of 15 lbs (1 lb = 0.45 kg) or more over a 4-year period was associated with 2.05-point lower (95% confidence interval: 2.14, 1.95) physical component scores, whereas weight loss of 15 lbs or more was associated with 0.89-point higher (95% confidence interval: 0.75, 1.03) physical component scores. Inverse associations were also found between weight change and physical function, role limitations due to physical problems, bodily pain, general health, and vitality. However, the relations of weight change with mental component scores, social functioning, mental health, and role limitations due to emotional problems were small. PMID:24966215

  1. A framework for emissions source apportionment in industrial areas: MM5/CALPUFF in a near-field application.

    PubMed

    Ghannam, K; El-Fadel, M

    2013-02-01

    This paper examines the relative source contribution to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 µm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NO(x)), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff Dispersion Modeling system to simulate individual source contributions under several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated by simulating that source's emissions alone or, alternatively, total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., highway and quarries) extended over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NO(x) are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially as the incremental improvement in air quality associated with this common abatement strategy may not accomplish the desired benefit in terms of lower exposure, despite costly emissions capping. The application of atmospheric dispersion models for source apportionment helps in identifying major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometry contribute to emissions, ground-level releases extended over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may result in minimal return on investment in terms of improvement in air quality at sensitive receptors.

  2. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-15

    The non-linear Schrödinger equation and its higher order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as an extreme wave forms, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum, which the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
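
    Readers wanting to experiment with the simplest member of this model family can integrate the one-dimensional focusing non-linear Schrödinger equation i u_t + 0.5 u_xx + |u|^2 u = 0 with a split-step Fourier scheme, as sketched below (Python); the grid, step size, and sech initial packet are illustrative choices, not the paper's setup.

        import numpy as np

        N, L, dt, steps = 512, 40.0, 0.01, 2000
        x = np.linspace(-L / 2, L / 2, N, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # spectral wavenumbers
        u = (1 / np.cosh(x)).astype(complex)              # sech soliton wave packet

        for _ in range(steps):
            u *= np.exp(1j * dt * np.abs(u) ** 2)                        # nonlinear step
            u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))   # linear step

        mass = np.sum(np.abs(u) ** 2) * (L / N)           # conserved L2 norm (~2)
        print("mass conserved:", np.isclose(mass, 2.0, atol=1e-2))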

  3. Analysis of a Spatial Point Pattern: Examining the Damage to Pavement and Pipes in Santa Clara Valley Resulting from the Loma Prieta Earthquake

    USGS Publications Warehouse

    Phelps, G.A.

    2008-01-01

    This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
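
    The pair-orientation test described above can be mimicked in a few lines: compare the azimuth histogram of all point pairs in the data against pairs of points placed at random in the same irregular area. In the sketch below (Python, using Matplotlib's point-in-polygon test), the L-shaped polygon and all coordinates are invented stand-ins for the study area and damage locations.

        import numpy as np
        from matplotlib.path import Path

        poly = Path([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)], closed=True)
        rng = np.random.default_rng(4)

        def random_points(m):
            pts = []
            while len(pts) < m:                    # rejection sampling in the polygon
                cand = rng.random((4 * m, 2)) * 2  # bounding box [0, 2] x [0, 2]
                pts.extend(cand[poly.contains_points(cand)])
            return np.array(pts[:m])

        def pair_orientations(pts):
            d = pts[:, None] - pts[None, :]
            ang = np.degrees(np.arctan2(d[..., 1], d[..., 0])) % 180
            return ang[np.triu_indices(len(pts), 1)]

        data = random_points(60)                   # stand-in for observed damage
        obs = np.histogram(pair_orientations(data), bins=18, range=(0, 180))[0]
        null = np.mean([np.histogram(pair_orientations(random_points(60)),
                                     bins=18, range=(0, 180))[0]
                        for _ in range(50)], axis=0)
        print("largest excess of pairs over the random null:", (obs - null).max())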

  4. Knee medial and lateral contact forces in a musculoskeletal model with subject-specific contact point trajectories.

    PubMed

    Zeighami, A; Aissaoui, R; Dumas, R

    2018-03-01

    Contact point (CP) trajectory is a crucial parameter in estimating medial/lateral tibio-femoral contact forces from the musculoskeletal (MSK) models. The objective of the present study was to develop a method to incorporate the subject-specific CP trajectories into the MSK model. Ten healthy subjects performed 45 s treadmill gait trials. The subject-specific CP trajectories were constructed on the tibia and femur as a function of extension-flexion using low-dose bi-plane X-ray images during a quasi-static squat. At each extension-flexion position, the tibia and femur CPs were superimposed in the three directions on the medial side, and in the anterior-posterior and proximal-distal directions on the lateral side to form the five kinematic constraints of the knee joint. The Lagrange multipliers associated to these constraints directly yielded the medial/lateral contact forces. The results from the personalized CP trajectory model were compared against the linear CP trajectory and sphere-on-plane CP trajectory models which were adapted from the commonly used MSK models. Changing the CP trajectory had a remarkable impact on the knee kinematics and changed the medial and lateral contact forces by 1.03 BW and 0.65 BW respectively, in certain subjects. The direction and magnitude of the medial/lateral contact force were highly variable among the subjects and the medial-lateral shift of the CPs alone could not determine the increase/decrease pattern of the contact forces. The suggested kinematic constraints are adaptable to the CP trajectories derived from a variety of joint models and those experimentally measured from the 3D imaging techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Robust hopping based on virtual pendulum posture control.

    PubMed

    Sharbafi, Maziar A; Maufroy, Christophe; Ahmadabadi, Majid Nili; Yazdanpanah, Mohammad J; Seyfarth, Andre

    2013-09-01

    A new control approach to achieve robust hopping against perturbations in the sagittal plane is presented in this paper. In perturbed hopping, vertical body alignment plays a significant role in stability. Our approach builds on the recently proposed virtual pendulum concept, which is based on experimental findings in human and animal locomotion. In this concept, the ground reaction forces are pointed towards a virtual support point, named the virtual pivot point (VPP), during motion. This concept is employed in designing the controller to balance the trunk during the stance phase. New strategies for leg angle and length adjustment, combined with the virtual pendulum posture control, are proposed as a unified controller. This method is investigated by applying it to an extension of the spring loaded inverted pendulum (SLIP) model: trunk, leg mass and damping are added to the SLIP model in order to make it more realistic. The stability is analyzed by Poincaré map analysis. With a fixed VPP position, stability, disturbance rejection and moderate robustness are achieved, but with a low convergence speed. To improve the performance and attain higher robustness, an event-based control of the VPP position is introduced, using feedback of the system states at apexes. A discrete linear quadratic regulator is used to design the feedback controller. Considerable enhancements with respect to stability, convergence speed and robustness against perturbations and parameter changes are achieved.
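
    The event-based feedback described last can be prototyped with a textbook discrete LQR. In the sketch below (Python/SciPy) a 2-state linear apex-return map stands in for the linearized SLIP-with-trunk dynamics, which in practice must be identified numerically; every matrix entry is invented.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        A = np.array([[1.02, 0.30],       # apex-to-apex state transition (illustrative)
                      [0.05, 0.88]])
        B = np.array([[0.10], [0.40]])    # effect of shifting the VPP position
        Q, R = np.eye(2), np.array([[1.0]])

        P = solve_discrete_are(A, B, Q, R)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain

        x = np.array([0.2, -0.1])         # initial apex-state deviation
        for _ in range(20):
            x = (A - B @ K) @ x           # closed-loop apex return map
        print("residual deviation after 20 hops:", np.linalg.norm(x))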

  6. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
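
    As a concrete instance of the advocated mixed-effects approach, the sketch below (Python/statsmodels) fits a random-intercept, random-slope model to simulated clinical scores; the column names and effect sizes are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        subjects, visits = 40, 6
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(subjects), visits),
            "time": np.tile(np.arange(visits), subjects),
        })
        slopes = rng.normal(-1.0, 0.3, subjects)          # subject-specific decline
        df["score"] = (50 + rng.normal(0, 2, subjects)[df.subject]
                       + slopes[df.subject] * df.time + rng.normal(0, 1, len(df)))

        # Fixed time effect plus per-subject random intercept and slope
        res = smf.mixedlm("score ~ time", df, groups=df["subject"],
                          re_formula="~time").fit()
        print(res.params["time"])                         # population mean slope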

  7. Effect of non-linear fluid pressure diffusion on modeling induced seismicity during reservoir stimulation

    NASA Astrophysics Data System (ADS)

    Gischig, V.; Goertz-Allmann, B. P.; Bachmann, C. E.; Wiemer, S.

    2012-04-01

    Success of future enhanced geothermal systems relies on an appropriate pre-estimate of the seismic risk associated with fluid injection at high pressure. A forward model based on a semi-stochastic approach was created, which is able to compute synthetic earthquake catalogues. It proved able to reproduce characteristics of the seismic cloud detected during the geothermal project in Basel (Switzerland), such as the radial dependence of stress drop and b-values as well as the higher probability of large magnitude earthquakes (M>3) after shut-in. The modeling strategy relies on a simplistic fluid pressure model used to trigger failure points (so-called seeds) that are randomly distributed around an injection well. The seed points are assigned principal stress magnitudes drawn from Gaussian distributions representative of the ambient stress field. Once the effective stress state at a seed point meets a pre-defined Mohr-Coulomb failure criterion due to a fluid pressure increase, a seismic event is induced. We assume a negative linear relationship between b-values and differential stress. Thus, for each event a magnitude can be drawn from a Gutenberg-Richter distribution with a b-value corresponding to the differential stress at failure. The result is a seismic cloud evolving in time and space. Triggering of seismic events depends on appropriately calculating the transient fluid pressure field; hence an effective continuum reservoir model able to reasonably reproduce the hydraulic behavior of the reservoir during stimulation is required. While analytical solutions for pressure diffusion are computationally efficient, they rely on linear pressure diffusion with constant hydraulic parameters, and only consider well head pressure while neglecting fluid injection rate. They cannot be considered appropriate in a stimulation experiment where permeability irreversibly increases by orders of magnitude during injection. We here suggest a numerical continuum model of non-linear pressure diffusion. Permeability increases both reversibly and, if a certain pressure threshold is reached, irreversibly in the form of a smoothed step-function. The models are able to reproduce realistic well head pressure magnitudes for injection rates common during reservoir stimulation. We connect this numerical model with the semi-stochastic seismicity model, and demonstrate the role of non-linear pressure diffusion in earthquake probability estimates. We further use the model to explore various injection histories to assess the dependence of seismicity on injection strategy. This allows qualitative exploration of the probability of larger magnitude earthquakes (M>3) for different injection volumes, injection times, and injection build-up and shut-in strategies.
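
    The seed-triggering logic described above reduces to a few vectorized steps, sketched below (Python). The pressure front, stress distributions, Coulomb parameters, and b-value relation are all invented placeholders, not the calibrated Basel model.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 2000
        r = np.abs(rng.normal(0, 300, n))          # seed distance from well [m]
        s1 = rng.normal(100, 10, n)                # max principal stress [MPa]
        s3 = rng.normal(60, 8, n)                  # min principal stress [MPa]
        shear = (s1 - s3) / 2                      # driving shear stress
        mean_stress = (s1 + s3) / 2

        D = 3.0e3                                  # diffusivity-like scale [m^2/day]
        triggered = np.zeros(n, dtype=bool)
        for day in range(1, 11):
            p = 10 * np.exp(-r**2 / (4 * D * day))         # toy pressure front [MPa]
            # effective-stress Mohr-Coulomb: pressure lowers the normal stress
            new = (shear > 0.3 * (mean_stress - p) + 2) & ~triggered
            triggered |= new
            if new.any():
                b = np.clip(1.6 - 0.015 * (s1 - s3)[new], 0.6, None)  # b falls with stress
                mags = -1 + rng.exponential(1 / (b * np.log(10)))     # Gutenberg-Richter
                print(day, int(new.sum()), round(float(mags.max()), 1))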

  8. Registration of terrestrial mobile laser data on 2D or 3D geographic database by use of a non-rigid ICP approach.

    NASA Astrophysics Data System (ADS)

    Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.

    2013-10-01

    This article presents a generic and efficient method to register terrestrial mobile data having imperfect localization onto a geographic database that has better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point") algorithm. The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and markings, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause significant drift. As this drift varies non-linearly, but slowly, in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). At each iteration of the ICP, the drift is estimated in order to minimise the distance between the laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).

  9. Analytical Incorporation of Velocity Parameters into Ice Sheet Elevation Change Rate Computations

    NASA Astrophysics Data System (ADS)

    Nagarajan, S.; Ahn, Y.; Teegavarapu, R. S. V.

    2014-12-01

    NASA, ESA and various other agencies have been collecting laser, optical and RADAR altimetry data through various missions to study the elevation changes of the cryosphere. The laser altimetry collected by various airborne and spaceborne missions provides multi-temporal coverage of Greenland and Antarctica from 1993 to the present. Though these missions have increased data coverage, considering the dynamic nature of the ice surface, it is still sparse both spatially and temporally for accurate elevation change detection studies. The temporal and spatial gaps are usually filled by interpolation techniques. This presentation will demonstrate a method to improve the temporal interpolation. Considering the accuracy, repeat coverage and spatial distribution, laser scanning data has been widely used to compute elevation change rates of the Greenland and Antarctica ice sheets. A major problem with these approaches is the non-consideration of ice sheet velocity dynamics in change rate computations. Though the correlation between velocity and elevation change rate was noted by Hurkmans et al., 2012, the corrections for velocity changes were applied after computing elevation change rates by assuming a linear or higher-order polynomial relationship. This research will discuss the possibilities of parameterizing ice sheet dynamics as unknowns (dX and dY) in the adjustment mathematical model that computes elevation change (dZ) rates; this is a simultaneous computation of changes in all three directions of the ice surface. Also, the laser points between two time epochs in a crossover area have different distributions and counts. Therefore, a registration method that does not require point-to-point correspondence is needed to recover the unknown elevation and velocity parameters. This research will explore the possibility of registering multi-temporal datasets using a volume minimization algorithm, which determines the unknown dX, dY and dZ that minimize the volume between two or more time-epoch point clouds. In order to make use of other existing data as well as to constrain the adjustment, InSAR velocity will be used as initial values for the parameters dX and dY. The presentation will discuss the results of the analytical incorporation of parameters and the volume-based registration method for a test site in Greenland.

  10. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    PubMed

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S

    2017-06-01

    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency on the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate whether biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum-based dose model. The RBE-LET relationship was investigated by fitting polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data than lower degrees. The newly developed models were compared to three published LETd-based models for a simulated spread-out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared with the three established models (1.13-1.17); the unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum-based and linear LETd-based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
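
    The model-comparison step described here, weighted polynomial fits ranked by nested F-tests, is easy to reproduce in outline. The sketch below (Python/SciPy) uses a dozen synthetic (LET, RBE) points in place of the paper's 85-experiment database.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        let = np.linspace(1, 15, 12)               # LETd values [keV/um], synthetic
        rbe = 1 + 0.04 * let + 0.002 * let**2 + rng.normal(0, 0.03, 12)
        sigma = np.full(12, 0.05)                  # assumed experimental sd

        def wrss(deg):
            # np.polyfit takes weights of 1/sigma (they are squared internally)
            coef = np.polyfit(let, rbe, deg, w=1 / sigma)
            return np.sum(((rbe - np.polyval(coef, let)) / sigma) ** 2)

        for deg in (1, 2, 3):                      # does degree deg+1 beat degree deg?
            rss1, rss2 = wrss(deg), wrss(deg + 1)
            dof2 = len(let) - (deg + 2)            # residual dof of the larger model
            F = (rss1 - rss2) / (rss2 / dof2)
            print(f"{deg} -> {deg + 1}: p = {1 - stats.f.cdf(F, 1, dof2):.3f}")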

  11. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary of the design simplex: its vertices, edges, or faces. A new strategy for adding multiple interior points for symmetric mixture models is therefore proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.

  12. Evidence of anthropogenic tipping points in fluvial dynamics in Europe

    NASA Astrophysics Data System (ADS)

    Notebaert, Bastiaan; Broothaerts, Nils; Verstraeten, Gert

    2018-05-01

    In this study the occurrence of thresholds in fluvial style changes during the Holocene is discussed for three different catchments: the Dijle and Amblève catchments (Belgium) and the Valdaine Region (France). We consider tipping points to be a specific type of threshold, defined as relatively rapid and irreversible changes in the system. Field data demonstrate that fluvial style has varied in all three catchments over time, and that different tipping points can be identified. An increase in sediment load as a result of human-induced soil erosion led to a permanent change in the Dijle floodplains from a forested peaty marsh towards an open landscape with clastic deposition and a well-defined river channel. In the Valdaine catchment, an increase in coarse sediment load, caused by increased erosion in the mountainous upper catchment, altered the floodplains from a meandering pattern to a braided pattern. Other changes in fluvial style appear to have been reversible. Rivers in the Valdaine were prone to different aggradation and incision phases due to changes in peak water discharge and sediment delivery, but the impact was too low for these changes to be irreversible. Likewise, the Dijle River has recently been prone to an incision phase due to a clear water effect, and this change, too, is expected to be reversible. Finally, the Amblève River did not undergo major changes in style during the last 2000 to 5000 years, even though floodplain sedimentation rates increased tenfold during the last 600 years. Overall, these examples demonstrate how changes in fluvial style depend on the crossing of thresholds in sediment supply and water discharge. Although changes in these controlling parameters are caused by anthropogenic land use changes, the link between those land use changes and changes in fluvial style is not linear. This is due to the temporal variability in landscape connectivity and sediment transport and the non-linear relationship between land use intensity and soil erosion.

  13. Power Laws, Scale Invariance and the Generalized Frobenius Series:

    NASA Astrophysics Data System (ADS)

    Visser, Matt; Yunes, Nicolas

    We present a self-contained formalism for calculating the background solution, the linearized solutions and a class of generalized Frobenius-like solutions to a system of scale-invariant differential equations. We first cast the scale-invariant model into its equidimensional and autonomous forms, find its fixed points, and then obtain power-law background solutions. After linearizing about these fixed points, we find a second linearized solution, which provides a distinct collection of power laws characterizing the deviations from the fixed point. We prove that generically there will be a region surrounding the fixed point in which the complete general solution can be represented as a generalized Frobenius-like power series with exponents that are integer multiples of the exponents arising in the linearized problem. While discussions of the linearized system are common, and one can often find a discussion of power series with integer exponents, power series with irrational (indeed complex) exponents are much rarer in the extant literature. The Frobenius-like series we encounter can be viewed as a variant of the rarely discussed Liapunov expansion theorem (not to be confused with the more commonly encountered Liapunov functions and Liapunov exponents). As specific examples, we apply these ideas to Newtonian and relativistic isothermal stars and construct two separate power series with overlapping radii of convergence. The second of these power series solutions represents an expansion around "spatial infinity," and in realistic models it is this second power series that gives information about the stellar core, and the damped oscillations in core mass and core radius as the central pressure goes to infinity. The power-series solutions we obtain extend classical results, as exemplified by the work of Lane, Emden, and Chandrasekhar in the Newtonian case, and that of Harrison, Thorne, Wakano, and Wheeler in the relativistic case. We also indicate how to extend these ideas to situations where fixed points may not exist, either due to "monotone" flow or due to the presence of limit cycles. Monotone flow generically leads to logarithmic deviations from scaling, while limit cycles generally lead to discrete self-similar solutions.
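
    To make the structure of such a series concrete, a schematic Frobenius-like ansatz with two linearized exponents can be written as below. The notation is ours, purely for illustration, and is not taken from the paper.

```latex
% Schematic generalized Frobenius-like ansatz (illustrative notation only):
% the background is a pure power law x^{p}; corrections carry exponents that
% are non-negative integer combinations of the linearized exponents
% \lambda_1, \lambda_2, which may be irrational or complex.
\begin{equation}
  y(x) \;=\; y_0\, x^{p}
  \Bigl[\, 1 \;+\; \sum_{\substack{m,n \,\ge\, 0 \\ m+n \,\ge\, 1}}
  a_{mn}\, x^{\, m \lambda_1 + n \lambda_2} \Bigr].
\end{equation}
```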

  14. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    PubMed Central

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and the risk factors within each phase. Application of this model is illustrated with spirometry data after lung transplantation, using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830

  15. The end of the decline in cervical cancer mortality in Spain: trends across the period 1981-2012.

    PubMed

    Cervantes-Amat, Marta; López-Abente, Gonzalo; Aragonés, Nuria; Pollán, Marina; Pastor-Barriuso, Roberto; Pérez-Gómez, Beatriz

    2015-04-15

    In Spain, cervical cancer prevention is based on opportunistic screening, due to the disease's traditionally low incidence and mortality rates. Changes in sexual behaviour, tourism and migration have, however, modified the probability of exposure to human papilloma virus among Spaniards. This study thus sought to evaluate recent cervical cancer mortality trends in Spain. We used annual female population figures and individual records of deaths certified as cancer of cervix, reclassifying deaths recorded as unspecified uterine cancer to correct coding quality problems. Joinpoint models were fitted to estimate change points in trends, as well as the annual (APC) and average annual percentage change. Log-linear Poisson models were also used to study age-period-cohort effects on mortality trends and their change points. 1981 marked the beginning of a decline in cervical cancer mortality (APC(1981-2003): -3.2; 95% CI: -3.4;-3.0) that ended in 2003, with rates reaching a plateau in the last decade (APC(2003-2012): 0.1; 95% CI: -0.9;1.2). This trend, which was observable among women aged 45-46 years (APC(2003-2012): 1.4; 95% CI: -0.1;2.9) and over 65 years (APC(2003-2012): -0.1; 95% CI: -1.9;1.7), was clearest in Spain's Mediterranean and Southern regions. The positive influence of opportunistic screening is not strong enough to further reduce cervical cancer mortality rates in the country. Our results suggest that the Spanish Health Authorities should reform current prevention programmes and surveillance strategies in order to confront the challenges posed by cervical cancer.
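
    To illustrate the quantity being reported, the sketch below computes segment-wise APC estimates from a log-linear fit of rate on calendar year, assuming the 2003 joinpoint is known. The rates are simulated stand-ins, not the Spanish mortality data, and a full joinpoint analysis would also estimate the change point itself.

```python
import numpy as np

# Annual percentage change (APC) on each side of a known joinpoint,
# from log-linear fits of rate on calendar year (simulated data).
rng = np.random.default_rng(1)
years = np.arange(1981, 2013)
rates = np.where(years <= 2003,
                 8.0 * np.exp(-0.032 * (years - 1981)),  # ~ -3.2% per year
                 8.0 * np.exp(-0.032 * 22))               # plateau after 2003
rates = rates * np.exp(rng.normal(0, 0.01, years.size))   # small noise

def apc(yrs, rts):
    """APC = 100*(exp(slope) - 1) from a log-linear fit of rate on year."""
    slope = np.polyfit(yrs, np.log(rts), 1)[0]
    return 100.0 * (np.exp(slope) - 1.0)

seg1, seg2 = years <= 2003, years >= 2003
print(f"APC(1981-2003): {apc(years[seg1], rates[seg1]):+.1f}")
print(f"APC(2003-2012): {apc(years[seg2], rates[seg2]):+.1f}")
```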

  16. The WZNW model on PSU(1,1|2)

    NASA Astrophysics Data System (ADS)

    Götz, Gerhard; Quella, Thomas; Schomerus, Volker

    2007-03-01

    According to the work of Berkovits, Vafa and Witten, the non-linear sigma model on the supergroup PSU(1,1|2) is the essential building block for string theory on AdS3 × S3 × T4. Models associated with a non-vanishing value of the RR flux can be obtained through a psu(1,1|2) invariant marginal deformation of the WZNW model on PSU(1,1|2). We take this as a motivation to present a manifestly psu(1,1|2) covariant construction of the model at the Wess-Zumino point, corresponding to a purely NSNS background 3-form flux. At this point the model possesses an enhanced affine psu(1,1|2) current algebra symmetry whose representation theory, including explicit character formulas, is developed systematically in the first part of the paper. The space of vertex operators and a free fermion representation for their correlation functions is our main subject in the second part. Contrary to a widespread claim, bosonic and fermionic fields are necessarily coupled to each other. The interaction changes the supersymmetry transformations, with drastic consequences for the multiplets of localized normalizable states in the model. It is only this fact which allows us to decompose the full state space into multiplets of the global supersymmetry. We analyze these decompositions systematically as a preparation for a forthcoming study of the RR deformation.

  17. Born-Infeld Gravity Revisited

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Sahraee, M.

    2013-12-01

    In this paper, we investigate the behavior of linearized gravitational excitations in Born-Infeld gravity in AdS3 space. We obtain the linearized equation of motion and show that this higher-order gravity propagates two gravitons, one massless and one massive, on the AdS3 background. In contrast to R2 models such as TMG or NMG, Born-Infeld gravity does not have a critical point for any regular choice of parameters. Hence the logarithmic solution is not a solution of this model, and consequently one cannot find a logarithmic conformal field theory as a dual model for Born-Infeld gravity.

  18. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    DTIC Science & Technology

    2016-04-01

    ...incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model. Extrapolation methods within the model stitching architecture... Keywords: Simulation Model, Quasi-Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability.

  19. A Simulation of Alternatives for Wholesale Inventory Replenishment

    DTIC Science & Technology

    2016-03-01

    ...algorithmic details. The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find fill rates achieved for each National Item... Keywords: simulation; event graphs; reorder point; fill-rate; backorder; discrete event simulation; wholesale inventory optimization model.

  20. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
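
    A minimal sketch of the two-point (Rosenblueth-type) estimate contrasted with Monte Carlo, assuming a toy one-parameter response function h(K) standing in for the groundwater model; all names and numbers are illustrative.

```python
import numpy as np

# Two-point estimate vs Monte Carlo for a toy model response.
# h(K): stand-in response (e.g. a water table elevation as a function of
# hydraulic conductivity); NOT the authors' groundwater model.
def h(K):
    return 100.0 / np.sqrt(K)

mean_K, cv_K = 10.0, 0.1            # mean and coefficient of variation
std_K = cv_K * mean_K

# Two-point estimate: evaluate the model at mean +/- one standard deviation.
h_plus, h_minus = h(mean_K + std_K), h(mean_K - std_K)
tpe_mean = 0.5 * (h_plus + h_minus)
tpe_std = 0.5 * abs(h_plus - h_minus)

# Monte Carlo reference (many model evaluations instead of just two).
rng = np.random.default_rng(42)
samples = h(rng.normal(mean_K, std_K, 100_000))
print(f"two-point:   mean={tpe_mean:.3f}, std={tpe_std:.3f}")
print(f"Monte Carlo: mean={samples.mean():.3f}, std={samples.std():.3f}")
```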

  2. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models of chemical stability (related substance growth), namely a point-by-point estimation method, a linear fit method, and an isoconversion method, were employed in a detailed comparison of high-temperature data used to predict room-temperature shelf-life. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
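
    A minimal sketch of the Arrhenius extrapolation underlying such comparisons, assuming isoconversion times (time to reach the specification limit) are available at three accelerated temperatures; all numbers are invented for illustration.

```python
import numpy as np

# Arrhenius extrapolation from isoconversion times: t_iso is the time to
# reach the specification limit at each accelerated temperature, and
# ln(t_iso) is assumed linear in 1/T. Numbers are illustrative only.
R = 8.314                                            # J/(mol*K)
T = np.array([50.0, 60.0, 70.0]) + 273.15            # accelerated conditions (K)
t_iso = np.array([120.0, 40.0, 14.0])                # days to spec limit

slope, intercept = np.polyfit(1.0 / T, np.log(t_iso), 1)
Ea = slope * R                                       # apparent activation energy
t_25 = np.exp(intercept + slope / 298.15)            # extrapolated shelf-life
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol, "
      f"predicted shelf-life at 25 C ~ {t_25:.0f} days")
```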

  3. Linear modal stability analysis of bowed-strings.

    PubMed

    Debut, V; Antunes, J; Inácio, O

    2017-03-01

    Linearised models are often invoked as a starting point to study complex dynamical systems. Besides their attractive mathematical simplicity, they have a central role in determining the stability properties of static or dynamical states, and can often shed light on the influence of the control parameters on the system's dynamical behaviour. While bowed string dynamics has been thoroughly studied from a number of points of view, mainly by time-domain computer simulations, this paper proposes to explore its dynamical behaviour within a linear framework, linearising the friction force near an equilibrium state in steady sliding conditions and using a modal representation of the string dynamics. Starting from the simplest idealisation of the friction force, given by Coulomb's law with a velocity-dependent friction coefficient, the linearised modal equations of the bowed string are presented, and the dynamical changes of the system as a function of the bowing parameters are studied using linear stability analysis. From the computed complex eigenvalues and eigenvectors, several plots of the evolution of the modal frequencies, damping values, and modeshapes with the bowing parameters are produced, as well as stability charts for each system mode. By systematically exploring the influence of the parameters, this approach serves as a preliminary numerical characterisation of the bifurcations of the bowed string dynamics, with the advantage of being very simple compared to sophisticated numerical approaches, which demand the regularisation of the nonlinear interaction force. To illustrate the potential of the proposed approach, the classic one-degree-of-freedom friction-excited oscillator is considered first, followed by the case of the bowed string. Even if the actual stick-slip behaviour is rather far from the linear description adopted here, the results show that essential musical features of bowed string vibrations can be interpreted from this simple approach, at least qualitatively. Notably, the technique provides an instructive and original picture of bowed motions, in terms of groups of well-defined unstable modes, which gives a physically intuitive way to discuss tonal changes observed in real bowed strings.
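
    For the one-degree-of-freedom friction-excited oscillator mentioned above, the linear stability argument can be sketched in a few lines: linearising the friction force about steady sliding adds the friction-velocity slope to the structural damping, and the sign of the eigenvalues' real parts decides stability. The parameters below are illustrative, not taken from the paper.

```python
import numpy as np

# Linear stability of the classic 1-DOF friction-excited oscillator:
# m*x'' + c*x' + k*x = F(v0 - x'), linearised about steady sliding.
# The friction-velocity slope F'(v0) adds to the structural damping;
# a sufficiently negative slope destabilises the mode.
m, c, k = 1.0, 0.02, 1.0                      # illustrative parameters

def eigenvalues(dF_dv):
    # State-space matrix of the linearised system; the effective damping
    # is c + dF_dv (a negative friction slope reduces damping).
    A = np.array([[0.0, 1.0],
                  [-k / m, -(c + dF_dv) / m]])
    return np.linalg.eigvals(A)

for slope in (0.05, 0.0, -0.05):
    lam = eigenvalues(slope)
    stable = np.all(lam.real < 0)
    print(f"F'(v0) = {slope:+.2f}: eigenvalues {lam}, stable = {stable}")
```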

  4. Trajectory tracking in quadrotor platform by using PD controller and LQR control approach

    NASA Astrophysics Data System (ADS)

    Islam, Maidul; Okasha, Mohamed; Idres, Moumen Mohammad

    2017-11-01

    The purpose of this paper is to present a comparative evaluation of the performance of two different controllers, a Proportional-Derivative (PD) controller and a Linear Quadratic Regulator (LQR), for the quadrotor dynamic system, which is under-actuated and highly nonlinear. As only four states can be controlled at the same time in the quadrotor, the trajectories are designed on the basis of four states: the three-dimensional position and the rotation about the vertical axis, known as yaw. In this work, both the PD controller and the LQR approach are applied to the nonlinear quadrotor model to track the trajectories. The LQR controller is designed on the basis of a linear model of the quadrotor, because the linear and nonlinear models behave almost identically around a nominal operating point. Simulink and MATLAB are used to design the controllers and to evaluate the performance of both.
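
    A minimal sketch of the LQR design step, assuming a toy double-integrator linearisation of one translational channel about hover (not the paper's full quadrotor model); the weights Q and R are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR gain for a toy double-integrator altitude channel of a quadrotor.
# State x = [z, z_dot], input u = net vertical thrust deviation.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])      # state weighting
R = np.array([[0.1]])         # control effort weighting

# Solve the continuous-time algebraic Riccati equation, then K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)

# Closed-loop poles of A - B K should lie in the open left half-plane.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```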

  5. Quantum description of light propagation in generalized media

    NASA Astrophysics Data System (ADS)

    Häyrynen, Teppo; Oksanen, Jani

    2016-02-01

    Models based on linear quantum input-output relations are widely applied to describe light propagation in a lossy medium. The details of the interaction and the associated added noise depend on whether the device is configured to operate as an amplifier or an attenuator. Using the traveling wave (TW) approach, we generalize the linear material model to simultaneously account for both the emission and absorption processes and to have point-wise defined noise field statistics and intensity-dependent interaction strengths. Thus, our approach describes the quantum input-output relations of linear media with net attenuation, amplification or transparency without pre-selection of the operation point. The TW approach is then applied to investigate materials at thermal equilibrium, inverted materials, the transparency limit where losses are compensated, and saturating amplifiers. We also apply the approach to investigate media in nonuniform states, which can result, for example, from a temperature gradient over the medium or a position-dependent inversion of the amplifier. Furthermore, by using the generalized model we investigate devices with intensity-dependent interactions and show how an initial thermal field transforms into a field having coherent statistics due to gain saturation.

  6. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  7. Incomplete Recovery of CD4 count, CD4 Percentage, and CD4/CD8 ratio in HIV-Infected Patients on Long-Term Antiretroviral Therapy with Suppressed Viremia.

    PubMed

    Mutoh, Yoshikazu; Nishijima, Takeshi; Inaba, Yosuke; Tanaka, Noriko; Kikuchi, Yoshimi; Gatanaga, Hiroyuki; Oka, Shinichi

    2018-03-02

    The extent and duration of long-term recovery of CD4 count, CD4%, and CD4/CD8 ratio after initiation of combination antiretroviral therapy (cART) in patients with suppressed viral load are largely unknown. HIV-1 infected patients who started cART between January 2004 and January 2012 and showed persistent viral suppression (<200 copies/mL) for at least 4 years were followed up at the AIDS Clinical Center, Tokyo. Change point analysis was used to determine the time point where CD4 count recovery reaches a plateau, and a linear mixed model was applied to estimate CD4 count at the change point. Data of 752 patients were analyzed [93% males, median age 38, median baseline CD4 count 172/µL (IQR, 62-253), CD4% 13.8% (IQR, 7.7-18.5), and CD4/CD8 ratio 0.23 (IQR, 0.12-0.35)]. The median follow-up period was 81.2 months, and 91 (12.1%) patients were followed for >10 years. Change point analysis showed that CD4 count, CD4%, and CD4/CD8 ratio continued to increase until 78.6, 62.2, and 64.3 months, respectively, with adjusted means of 590/µL (95%CI 572-608), 29.5% (29-30.1), and 0.89 (0.86-0.93), respectively, at the change point. Although 73.8% of the study patients achieved CD4 count ≥500/μL, 48.2% of the patients with baseline CD4 count <100/μL did not achieve CD4 count ≥500/μL. Neither CD4% nor CD4/CD8 ratio normalized in the majority of patients. The results showed a lack of normalization of CD4 count, CD4%, and CD4/CD8 ratio to the levels seen in healthy individuals, even after long-term successful cART in patients with suppressed viral load.

  8. Disease Spread and Its Effect on Population Dynamics in Heterogeneous Environment

    NASA Astrophysics Data System (ADS)

    Upadhyay, Ranjit Kumar; Roy, Parimita

    In this paper, an eco-epidemiological model in which both species diffuse along a spatial gradient has been shown to exhibit temporal chaos at a fixed point in space. The proposed model is a modification of the model recently presented by Upadhyay and Roy [2014]. The spatial interactions among the species have been represented in the form of reaction-diffusion equations. The model incorporates the intrinsic growth rate of the fish population, which varies linearly with the depth of water. Numerical results show that diffusion can drive an otherwise stable system into aperiodic behavior with sensitivity to initial conditions. We show that spatially induced chaos plays an important role in spatial pattern formation in a heterogeneous environment. Spatiotemporal distributions of species have been simulated using diffusivity assumptions realistic for natural eco-epidemic systems. We found that in a heterogeneous environment, the temporal dynamics of the two species are drastically different and show chaotic behavior. The instability observed in the model is diffusion-driven and arises from spatial heterogeneity. The cumulative death rate of the predator has an appreciable effect on the model dynamics: the spatial distributions of all constituent populations change significantly when this parameter is varied, and it acts as a regularizing factor.

  9. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for pushbroom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for pushbroom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data already provided at the GCPs, we divide the modeling of space resection into two phases. First, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize unreliable altitude data and increase the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps, not only for simulated data but also for real data from Chang'E-1, compared to the existing space resection model.

  10. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE.
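
    A minimal sketch of the ridge-regularised core of such an estimator, predicting a pixel from a local linear combination of its neighbours; this omits the MLS weighting and manifold regularisation of the full RLLR algorithm, and all data are synthetic.

```python
import numpy as np

# Ridge-regularised local linear regression (illustrative core only).
def rllr_predict(X, y, lam=0.1):
    """X: rows are local feature vectors (known-pixel neighbourhoods),
    y: corresponding pixel values. Returns ridge coefficients via the
    closed-form solution (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))                    # 4 neighbouring pixels per sample
true_w = np.array([0.25, 0.25, 0.25, 0.25])     # e.g. a smooth local structure
y = X @ true_w + rng.normal(0, 0.01, 50)
w = rllr_predict(X, y)
print("estimated interpolation weights:", np.round(w, 3))
```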

  11. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    PubMed

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. The Cmax versus AUCinf relationship exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with an RMSE < 10.3%. The external data evaluation showed that the models predicted AUCinf with an RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy of using Cmax (i.e. end of a 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.

  12. Effects of cold and hot temperature on dehydration: a mechanism of cardiovascular burden.

    PubMed

    Lim, Youn-Hee; Park, Min-Seon; Kim, Yoonhee; Kim, Ho; Hong, Yun-Chul

    2015-08-01

    The association between temperature (cold or heat) and cardiovascular mortality has been well documented. However, few studies have investigated the underlying mechanism of the cold or heat effect. The main goal of this study was to examine the effect of temperature on dehydration markers and to explain the pathophysiological disturbances caused by changes of temperature. We investigated the relationship between outdoor temperature and dehydration markers (blood urea nitrogen (BUN)/creatinine ratio, urine specific gravity, plasma tonicity and haematocrit) in 43,549 adults from Seoul, South Korea, during 1995-2008. We used piece-wise linear regression to find the flexion point of apparent temperature and estimate the effects below or above the apparent temperature. Levels of dehydration markers decreased linearly with an increase in the apparent temperature until a point between 22 and 27 °C, which was regarded as the flexion point of apparent temperature, and then increased with apparent temperature. Because the associations between temperature and cardiovascular mortality are known to be U-shaped, our findings suggest that temperature-related changes in hydration status underlie the increased cardiovascular mortality and morbidity during high- or low-temperature conditions.
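
    A minimal sketch of piecewise linear regression with an unknown flexion point, estimated here by a grid search over candidate break temperatures on simulated data (the study's actual fitting procedure may differ).

```python
import numpy as np

# Broken-stick fit with an unknown flexion point, via grid search.
rng = np.random.default_rng(7)
temp = rng.uniform(-5, 35, 400)                       # apparent temperature (C)
marker = np.where(temp < 24, 1.0 - 0.01 * temp,       # falls until ~24 C
                  0.76 + 0.02 * (temp - 24))          # rises above it
marker += rng.normal(0, 0.02, temp.size)              # measurement noise

def rss_at_break(bp):
    # Basis for a continuous broken stick: intercept, temp, hinge term.
    X = np.column_stack([np.ones_like(temp), temp, np.maximum(temp - bp, 0.0)])
    beta, *_ = np.linalg.lstsq(X, marker, rcond=None)
    return np.sum((marker - X @ beta) ** 2)

grid = np.linspace(10, 30, 201)
best = grid[np.argmin([rss_at_break(b) for b in grid])]
print(f"estimated flexion point: {best:.1f} C")
```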

  13. Simulating the dynamics of linear forests in great plains agroecosystems under changing climates

    Treesearch

    Qinfeng Guo; J. Brandle; Michele Schoeneberger; D. Buettner

    2004-01-01

    Most forest growth models are not suitable for the highly fragmented, linear (or linearly shaped) forests in the Great Plains agroecosystems (e.g., windbreaks, riparian forest buffers), where such forests are a minor but ecologically important component of the land mosaics. This study used SEEDSCAPE, a recently modified gap model designed for cultivated land mosaics...

  14. Stratification for the propensity score compared with linear regression techniques to assess the effect of treatment or exposure.

    PubMed

    Senn, Stephen; Graf, Erika; Caputo, Angelika

    2007-12-30

    Stratifying and matching by the propensity score are increasingly popular approaches to deal with confounding in medical studies investigating effects of a treatment or exposure. A more traditional alternative technique is direct adjustment for confounding in regression models. This paper discusses fundamental differences between the two approaches, with a focus on linear regression and propensity score stratification, and identifies points to be considered for an adequate comparison. The treatment estimators are examined for unbiasedness and efficiency. This is illustrated in an application to real data and supplemented by an investigation of the properties of the estimators for a range of underlying linear models. We demonstrate that in specific circumstances the propensity score estimator is identical to the effect estimated from a full linear model, even if it is built on coarser covariate strata than the linear model. As a consequence, the coarsening property of the propensity score (adjustment for a one-dimensional confounder instead of a high-dimensional covariate) may be viewed as a way to implement a pre-specified, richly parametrized linear model. We conclude that the propensity score estimator inherits the potential for overfitting and that care should be taken to restrict covariates to those relevant for outcome. Copyright (c) 2007 John Wiley & Sons, Ltd.

  15. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  16. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variable specifications produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For the raw seismic data, the ACD model does not show improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
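
    A minimal sketch of the rolling one-step-ahead out-of-sample evaluation for a least-squares AR(p) benchmark, on a simulated series; the nonlinear competitors (SETAR, LSTAR, ANN, etc.) are omitted.

```python
import numpy as np

# Out-of-sample forecast evaluation for an AR(p) model fitted by least squares.
def fit_ar(y, p):
    # Design matrix: intercept plus lags 1..p for targets y[p:].
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta

def forecast_ar(y, beta, p):
    # One-step-ahead forecast from the last p observations.
    return beta[0] + beta[1:] @ y[-1:-p - 1:-1]

rng = np.random.default_rng(11)
y = np.zeros(300)
for t in range(2, 300):                       # simulated AR(2) series
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()

train, test = y[:250], y[250:]
errs, history = [], train.copy()
for obs in test:                              # rolling one-step-ahead forecasts
    beta = fit_ar(history, 2)
    errs.append(obs - forecast_ar(history, beta, 2))
    history = np.append(history, obs)
print(f"out-of-sample RMSE: {np.sqrt(np.mean(np.square(errs))):.3f}")
```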

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion-chamber-based devices over three years for fifteen linear accelerators of varying make and model, were evaluated. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2-3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out-of-control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements. The results also pointed to an optimal setpoint for accelerators such that machine output is maintained within set tolerances and interventions are required less frequently.

  18. Behavioral Effects of a Locomotor-Based Physical Activity Intervention in Preschoolers.

    PubMed

    Burkart, Sarah; Roberts, Jasmin; Davidson, Matthew C; Alhassan, Sofiya

    2018-01-01

    Poor adaptive learning behaviors (i.e., distractibility, inattention, and disruption) are associated with behavior problems and underachievement in school, as well as indicating potential attention-deficit hyperactivity disorder. Strategies are needed to limit these behaviors. Physical activity (PA) has been suggested to improve behavior in school-aged children, but little is known about this relationship in preschoolers. This study examined the effects of a PA intervention on classroom behaviors in preschool-aged children. Eight preschool classrooms (n = 71 children; age = 3.8 ± 0.7 y) with children from low socioeconomic environments were randomized to a locomotor-based PA (LB-PA) or unstructured free playtime (UF-PA) group. Both interventions were implemented by classroom teachers and delivered for 30 minutes per day, 5 days per week for 6 months. Classroom behavior was measured in both groups at 3 time points, whereas PA was assessed at 2 time points over the 6-month period and analyzed with hierarchical linear modeling. Linear growth models showed significant changes in hyperactivity (LB-PA: -2.58 points, P = .001; UF-PA: 2.33 points, P = .03), aggression (LB-PA: -2.87 points, P = .01; UF-PA: 0.97 points, P = .38), and inattention (LB-PA: 1.59 points, P < .001; UF-PA: 3.91 points, P < .001). This research provides promising evidence for the efficacy of LB-PA as a strategy to improve classroom behavior in preschoolers.

  19. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired; consider both distributed and point sources/sinks that are spatially and temporally dependent; accept prescribed initial conditions or obtain them by simulating a steady-state version of the system under consideration; deal with a transient head distributed over the Dirichlet boundary; handle time-dependent fluxes due to a pressure gradient varying along the Neumann boundary; treat time-dependent total fluxes distributed over the Cauchy boundary; automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface; include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor; give three options for estimating the nonlinear matrix; include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations; automatically reset the time step size when boundary conditions or sources/sinks change abruptly; and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.

  20. Bilinear effect in complex systems

    NASA Astrophysics Data System (ADS)

    Lam, Lui; Bellavia, David C.; Han, Xiao-Pu; Alston Liu, Chih-Hui; Shu, Chang-Qing; Wei, Zhengjin; Zhou, Tao; Zhu, Jichen

    2010-09-01

    The distribution of the lifetime of Chinese dynasties (as well as that of the British Isles and Japan) in a linear Zipf plot is found to consist of two straight lines intersecting at a transition point. This two-section piecewise-linear distribution is different from the power law or the stretched exponent distribution, and is called the Bilinear Effect for short. With assumptions mimicking the organization of ancient Chinese regimes, a 3-layer network model is constructed. Numerical results of this model show the bilinear effect, providing a plausible explanation of the historical data. The bilinear effect in two other social systems is presented, indicating that such a piecewise-linear effect is widespread in social systems.

  1. Precision Pointing in Space Using Arrays of Shape Memory Based Linear Actuators

    NASA Astrophysics Data System (ADS)

    Sonawane, Nikhil

    Space systems such as communication satellites, earth observation satellites and telescopes require accurate pointing to observe fixed targets over prolonged times. These systems typically use reaction wheels to slew the spacecraft and gimballing systems containing motors to achieve precise pointing. Motor-based actuators have limited life as they contain moving parts that require lubrication in space. Alternate methods have utilized piezoelectric actuators. This paper presents shape memory alloy (SMA) actuators for control of a deployable antenna placed on a satellite. The SMAs are operated as a series of distributed linear actuators. These distributed linear actuators are not prone to single-point failures, and although each individual actuator is imprecise due to hysteresis and temperature variation, the system as a whole achieves reliable results. The SMAs can be programmed to perform a series of periodic motions and operate as a mechanical guidance system that is not prone to damage from radiation or space weather. Efforts are focused on developing a system that can achieve 1-degree pointing accuracy at first, with an ultimate goal of achieving accuracy of a few arc seconds. A benchtop model of the actuator system has been developed, and work is under way to test the system under vacuum. A demonstration flight of the technology is planned aboard a CubeSat.

  2. Covariances and spectra of the kinematics and dynamics of nonlinear waves

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.

    1985-01-01

    Using the Stokes waves as a model of nonlinear waves and considering the linear component as a narrow-band Gaussian process, the covariances and spectra of velocity and acceleration components and pressure for points in the vicinity of still water level were derived taking into consideration the effects of free surface fluctuations. The results are compared with those obtained earlier using linear Gaussian waves.

  3. Nonlinear changes in brain activity during continuous word repetition: an event-related multiparametric functional MR imaging study.

    PubMed

    Hagenbeek, R E; Rombouts, S A R B; Veltman, D J; Van Strien, J W; Witter, M P; Scheltens, P; Barkhof, F

    2007-10-01

    Changes in brain activation as a function of continuous multiparametric word recognition have not, to our knowledge, been studied before using functional MR imaging (fMRI). Our aim was to identify linear changes in brain activation and, more interestingly, nonlinear changes in brain activation as a function of extended word repetition. Fifteen healthy young right-handed individuals participated in this study. An event-related extended continuous word-recognition task with 30 target words was used to study the parametric effect of word recognition on brain activation. Word-recognition-related brain activation was studied as a function of 9 word repetitions. fMRI data were analyzed with a general linear model with regressors for linearly changing signal intensity and nonlinearly changing signal intensity, according to the group average reaction time (RT) and individual RTs. A network generally associated with episodic memory recognition showed either constant or linearly decreasing brain activation as a function of word repetition. Furthermore, both the anterior and posterior cingulate cortices and the left middle frontal gyrus followed the nonlinear curve of the group RT, whereas the anterior cingulate cortex was also associated with individual RT. Linear alteration in brain activation as a function of word repetition explained most changes in blood oxygen level-dependent signal intensity. Using a hierarchically orthogonalized model, we found evidence for nonlinear activation associated with both group and individual RTs.

  4. Boiling points of halogenated ethanes: an explanatory model implicating weak intermolecular hydrogen-halogen bonding.

    PubMed

    Beauchamp, Guy

    2008-10-23

    This study explores via structural clues the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus the molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed based on molar refraction and subgroup structure as independent variables (R(2) = 0.995, standard error of boiling point 4.2 degrees C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X...H-C intermolecular interactions. The difference in the observed boiling point of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 degrees C.

  5. The longitudinal, bidirectional relationships between parent reports of child secondhand smoke exposure and child smoking trajectories.

    PubMed

    Clawson, Ashley H; McQuaid, Elizabeth L; Dunsiger, Shira; Bartlett, Kiera; Borrelli, Belinda

    2018-04-01

    This study examines the longitudinal relationships between child smoking and secondhand smoke exposure (SHSe). Participants were 222 parent-child dyads. The parents smoked, had a child with (48%) or without asthma, and were enrolled in a smoking/health intervention. Parent-reported child SHSe was measured at baseline and at 4, 6, and 12-month follow-ups; self-reported child smoking was assessed at these time points and at 2 months. A parallel process growth model was used. Baseline child SHSe and smoking were correlated (r = 0.30). Changes in child SHSe and child smoking moved in tandem, as evidenced by a correlation between the linear slopes of child smoking and SHSe (r = 0.32), and a correlation between the linear slope of child smoking and the quadratic slope of child SHSe (r = -0.44). Results may inform interventions with the potential to reduce child SHSe and smoking among children at increased risk due to their exposure to parental smoking.

  6. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportions of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.

  7. The ins and outs of modelling vertical displacement events

    NASA Astrophysics Data System (ADS)

    Pfefferle, David

    2017-10-01

    Of the many reasons a plasma discharge disrupts, Vertical Displacement Events (VDEs) lead to the most severe forces and stresses on the vacuum vessel and Plasma Facing Components (PFCs). After loss of positional control, the plasma column drifts across the vacuum vessel and comes in contact with the first wall, at which point the stored magnetic and thermal energy is abruptly released. The vessel forces have been extensively modelled in 2D but, with the constraint of axisymmetry, the fundamental 3D effects that lead to toroidal peaking, sideways forces, field-line stochastisation and halo current rotation have been vastly overlooked. In this work, we present the main results of an intense VDE modelling activity using the implicit 3D extended MHD code M3D-C1 and share our experience with the multi-domain and highly non-linear physics encountered. At the culmination of code development by the M3D-C1 group over the last decade, highlighted by the inclusion of a finite-thickness resistive vacuum vessel within the computational domain, a series of fully 3D non-linear simulations are performed using realistic transport coefficients based on the reconstruction of so-called NSTX frozen VDEs, where the feedback control was purposely switched off to trigger a vertical instability. The vertical drift phase, the evolution of the current quench and the onset of 3D halo/eddy currents are diagnosed and investigated in detail. The sensitivity of the current quench to parameter changes is assessed via 2D non-linear runs. The growth of individual toroidal modes is monitored via linear-complex runs. The intricate evolution of the plasma, which is decaying to large extent in force-balance with induced halo/wall currents, is carefully resolved via 3D non-linear runs. The location, amplitude and rotation of normal currents and wall forces are analysed and compared with experimental traces.

  8. Bayesian hierarchical piecewise regression models: a tool to detect trajectory divergence between groups in long-term observational studies.

    PubMed

    Buscot, Marie-Jeanne; Wotherspoon, Simon S; Magnussen, Costan G; Juonala, Markus; Sabin, Matthew A; Burgner, David P; Lehtimäki, Terho; Viikari, Jorma S A; Hutri-Kähönen, Nina; Raitakari, Olli T; Thomson, Russell J

    2017-06-06

    Bayesian hierarchical piecewise regression (BHPR) modeling has not previously been formulated to detect and characterise the mechanism of trajectory divergence between groups of participants that have longitudinal responses with distinct developmental phases. These models are useful when participants in a prospective cohort study are grouped according to a distal dichotomous health outcome. Indeed, a refined understanding of how deleterious risk factor profiles develop across the life-course may help inform early-life interventions. Previous techniques to determine between-group differences in risk factors at each age may result in biased estimates of the age at divergence. We demonstrate the use of BHPR to generate a point estimate and credible interval for the age at which trajectories diverge between groups for continuous outcome measures that exhibit non-linear within-person response profiles over time. We illustrate our approach by modeling the divergence in childhood-to-adulthood body mass index (BMI) trajectories between two groups of adults with/without type 2 diabetes mellitus (T2DM) in the Cardiovascular Risk in Young Finns Study (YFS). Using the proposed BHPR approach, we estimated that the BMI profiles of participants with T2DM diverged from healthy participants at age 16 years for males (95% credible interval (CI): 13.5-18 years) and 21 years for females (95% CI: 19.5-23 years). These data suggest that a critical window for weight management intervention in preventing T2DM might exist before the age when BMI growth rate is naturally expected to decrease. Simulation showed that when using pairwise comparison of least-squares means from categorical mixed models, smaller sample sizes tended to conclude a later age of divergence. In contrast, the point estimate of the divergence time is not biased by sample size when using the proposed BHPR method. BHPR is a powerful analytic tool for modeling long-term non-linear longitudinal outcomes, enabling the identification of the age at which risk factor trajectories diverge between groups of participants. The method is suitable for the analysis of unbalanced longitudinal data, with only a limited number of repeated measures per participant and where the time-related outcome is typically marked by transitional changes or by distinct phases of change over time.

  9. Longitudinal Monitoring of Patients With Chronic Low Back Pain During Physical Therapy Treatment Using the STarT Back Screening Tool.

    PubMed

    Medeiros, Flávia Cordeiro; Costa, Leonardo Oliveira Pena; Added, Marco Aurélio Nemitalla; Salomão, Evelyn Cassia; Costa, Lucíola da Cunha Menezes

    2017-05-01

    Study Design Preplanned secondary analysis of a randomized clinical trial. Background The STarT Back Screening Tool (SBST) was developed to screen and to classify patients with low back pain into subgroups for the risk of having a poor prognosis. However, this classification at baseline does not take into account variables that can influence the prognosis during treatment or over time. Objectives (1) To investigate the changes in risk subgroup measured by the SBST over a period of 6 months, and (2) to assess the long-term predictive ability of the SBST when administered at different time points. Methods Patients with chronic nonspecific low back pain (n = 148) receiving physical therapy care as part of a randomized trial were analyzed. Pain intensity, disability, global perceived effect, and the SBST were collected at baseline, 5 weeks, 3 months, and 6 months. Changes in SBST risk classification were calculated. Hierarchical linear regression models adjusted for potential confounders were built to analyze the predictive capabilities of the SBST when administered at different time points. Results A large proportion of patients (60.8%) changed their risk subgroup after receiving physical therapy care. The SBST improved the prediction for all 6-month outcomes when using the 5-week risk subgroup and the difference between baseline and 5-week subgroup, after controlling for potential confounders. The SBST at baseline did not improve the predictive ability of the models after adjusting for confounders. Conclusion This study shows that many patients change SBST risk subgroup after receiving physical therapy care, and that the predictive ability of the SBST in patients with chronic low back pain increases when administered at different time points. Level of Evidence Prognosis, 2b. J Orthop Sports Phys Ther 2017;47(5):314-323. Epub 29 Mar 2017. doi:10.2519/jospt.2017.7199.

  10. Alcohol consumption and all-cause mortality.

    PubMed

    Duffy, J C

    1995-02-01

    Prospective studies of alcohol and mortality in middle-aged men almost universally find a U-shaped relationship between alcohol consumption and risk of mortality. This review demonstrates the extent to which different studies lead to different risk estimates, analyses the putative influence of abstention as a risk factor and uses available data to produce point and interval estimates of the consumption level apparently associated with minimum risk from two studies in the UK. Data from a number of studies are analysed by means of logistic-linear modelling, taking account of the possible influence of abstention as a special risk factor. Separate analysis of British data is performed. Logistic-linear modelling demonstrates large and highly significant differences between the studies considered in the relationship between alcohol consumption and all-cause mortality. The results support the identification of abstention as a special risk factor for mortality, but do not indicate that this alone explains the apparent U-shaped relationship. Separate analysis of two British studies indicates minimum risk of mortality in this population at a consumption level of about 26 (8.5 g) units of alcohol per week. The analysis supports the view that abstention may be a specific risk factor for all-cause mortality, but is not an adequate explanation of the apparent protective effect of alcohol consumption against all-cause mortality. Future analyses might better be performed on a case-by-case basis, using a change-point model to estimate the parameters of the relationship. The current misinterpretation of the sensible drinking level of 21 units per week for men in the UK as a limit is not justified, and the data suggest that alcohol consumption is a net preventive factor against premature death in this population.
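
    To make the "minimum-risk consumption level" concrete, the sketch below evaluates a logistic-linear model with linear and quadratic consumption terms and locates the vertex of the quadratic. The coefficients are invented so that the minimum falls near the reported 26 units/week; they are not estimates from the cited studies.

```python
import numpy as np

# U-shaped risk from a logistic model with linear and quadratic consumption
# terms; the minimum-risk dose is the vertex -b1/(2*b2). Invented coefficients.
b0, b1, b2 = -4.0, -0.020, 0.00038     # log-odds: b0 + b1*units + b2*units**2

units = np.linspace(0, 80, 161)        # weekly alcohol consumption (units)
log_odds = b0 + b1 * units + b2 * units**2
risk = 1.0 / (1.0 + np.exp(-log_odds))

u_min = -b1 / (2 * b2)                 # vertex of the quadratic in log-odds
print(f"minimum-risk consumption: {u_min:.1f} units/week")
print(f"risk at minimum {risk.min():.4f} vs abstainers {risk[0]:.4f}")
```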

  11. Statistical inference for template aging

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.

    2006-04-01

    A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first is the use of a generalized linear model to determine if these error rates change linearly over time; this approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimating the change in error rates over time, not on its underlying cause. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1, and the results of these applications are discussed.
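
    Both proposed methods reduce to standard tools. A minimal sketch, assuming simulated (errors, trials) counts per time window: fit a binomial GLM with a linear time term, then compare it against a constant-rate model with a likelihood ratio test.

        # Test whether a biometric error rate drifts linearly over time:
        # binomial GLM with a time trend, plus a likelihood-ratio test.
        # The (errors, trials) counts below are simulated stand-ins.
        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import chi2

        months = np.arange(12)
        trials = np.full(12, 5000)
        rng = np.random.default_rng(1)
        errors = rng.binomial(trials, 0.01 + 0.0008 * months)  # assumed mild drift

        endog = np.column_stack([errors, trials - errors])     # successes/failures
        full = sm.GLM(endog, sm.add_constant(months),
                      family=sm.families.Binomial()).fit()     # rate ~ time
        null = sm.GLM(endog, np.ones((12, 1)),
                      family=sm.families.Binomial()).fit()     # constant rate

        lr = 2 * (full.llf - null.llf)                         # LR statistic
        print("slope on logit scale:", full.params[1])
        print("LRT p-value:", chi2.sf(lr, df=1))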

  12. Experimental Validation of a Theory for a Variable Resonant Frequency Wave Energy Converter (VRFWEC)

    NASA Astrophysics Data System (ADS)

    Park, Minok; Virey, Louis; Chen, Zhongfei; Mäkiharju, Simo

    2016-11-01

    A point absorber wave energy converter, designed to adapt to changes in wave frequency and to be highly resilient to harsh conditions, was tested in a wave tank for wave periods from 0.8 s to 2.5 s. The VRFWEC consists of a closed cylindrical floater containing an internal mass that moves vertically and is connected to the floater through a spring system. The internal mass and equivalent spring constant are adjustable, making it possible to match the resonance frequency of the device to the exciting wave frequency and hence optimize performance. In a full-scale device, a Permanent Magnet Linear Generator (PMLG) will convert the relative motion between the internal mass and the floater into electricity. For a PMLG as described in Yeung et al. (OMAE2012), the electromagnetic force was shown to produce predominantly linear damping; thus, for the present preliminary study it was possible to replace the generator with a linear damper. While the full-scale device with a 2.2 m diameter is expected to generate O(50 kW), the prototype could generate O(1 W). For the initial experiments the prototype was restricted to heave motion, and the data were compared to predictions from a newly developed theoretical model (Chen, 2016).

  13. Elevated nonlinearity as an indicator of shifts in the dynamics of populations under stress.

    PubMed

    Dakos, Vasilis; Glaser, Sarah M; Hsieh, Chih-Hao; Sugihara, George

    2017-03-01

    Populations occasionally experience abrupt changes, such as local extinctions, strong declines in abundance or transitions from stable dynamics to strongly irregular fluctuations. Although most of these changes have important ecological and at times economic implications, they remain notoriously difficult to detect in advance. Here, we study changes in the stability of populations under stress across a variety of transitions. Using a Ricker-type model, we simulate shifts from stable point equilibrium dynamics to cyclic and irregular boom-bust oscillations as well as abrupt shifts between alternative attractors. Our aim is to infer the loss of population stability before such shifts based on changes in nonlinearity of population dynamics. We measure nonlinearity by comparing forecast performance between linear and nonlinear models fitted on reconstructed attractors directly from observed time series. We compare nonlinearity to other suggested leading indicators of instability (variance and autocorrelation). We find that nonlinearity and variance increase in a similar way prior to the shifts. By contrast, autocorrelation is strongly affected by oscillations. Finally, we test these theoretical patterns in datasets of fisheries populations. Our results suggest that elevated nonlinearity could be used as an additional indicator to infer changes in the dynamics of populations under stress. © 2017 The Author(s).
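
    The nonlinearity indicator can be approximated with a much simpler stand-in: compare out-of-sample forecast skill of a global linear model against a local nearest-neighbour predictor as a simulated Ricker population approaches instability. All parameter values below are illustrative, and the k-NN predictor is a simplification of the simplex/S-map machinery used in the paper.

        # Measure "nonlinearity" of a population series by comparing forecast
        # skill of a global linear (AR) model vs. a nearest-neighbour predictor.
        import numpy as np

        def ricker(r, n=500, sigma=0.05, seed=2):
            rng = np.random.default_rng(seed)
            x = np.empty(n); x[0] = 0.5
            for t in range(n - 1):
                x[t + 1] = x[t] * np.exp(r * (1 - x[t]) + sigma * rng.normal())
            return x

        def forecast_skill(x, k=5):
            """One-step forecast correlation with truth: linear vs. local model."""
            train, test = x[:-100], x[-100:]
            a, b = np.polyfit(train[:-1], train[1:], 1)     # linear AR(1) fit
            lin = a * test[:-1] + b
            nn = []                                         # k-NN from the library
            for v in test[:-1]:
                idx = np.argsort(np.abs(train[:-1] - v))[:k]
                nn.append(train[1:][idx].mean())
            truth = test[1:]
            return np.corrcoef(lin, truth)[0, 1], np.corrcoef(nn, truth)[0, 1]

        for r in (1.5, 2.6):          # stable equilibrium vs. boom-bust regime
            rho_lin, rho_nl = forecast_skill(ricker(r))
            print(f"r={r}: linear skill {rho_lin:.2f}, nonlinear skill {rho_nl:.2f}")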

  14. Dynamics of attitudes and genetic processes.

    PubMed

    Guastello, Stephen J; Guastello, Denise D

    2008-01-01

    Relatively new discoveries of a genetic component to attitudes have challenged the traditional viewpoint that attitudes are primarily learned ideas and behaviors. Attitudes that are regarded by respondents as "more important" tend to have greater genetic components to them, and tend to be more closely associated with authoritarianism. Nonlinear theories, nonetheless, have also been introduced to study attitude change. The objective of this study was to determine whether change in authoritarian attitudes across two generations would be more aptly described by a linear or a nonlinear model. Participants were 372 college students, their mothers, and their fathers who completed an attitude questionnaire. Results indicated that the nonlinear model (R2 = .09) was slightly better than the linear model (R2 = .08), but the two models offered very different forecasts for future generations of US society. The linear model projected a gradual and continuing bifurcation between authoritarians and non-authoritarians. The nonlinear model projected a stabilization of authoritarian attitudes.

  15. SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, J

    2015-06-15

    Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, a gamma criterion of 2% dose difference and 2 mm DTA is sufficient to detect curves not corrected for the effective point of measurement. Also, data imported into the database are analyzed against an aggregate of similar linear accelerators to flag outlying data points. The resulting curves in the database exhibit a very small standard deviation, implying that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
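
    The gamma-index check described above is straightforward to prototype. A minimal 1D sketch at the 2% dose / 2 mm DTA criterion follows; the depth-dose curves are toy exponentials, not AQUIRE's data or implementation.

        # 1D gamma index (2% dose difference, 2 mm distance-to-agreement)
        # comparing a "measured" depth-dose curve against a "calculated" one.
        import numpy as np

        def gamma_1d(pos, measured, calculated, dose_tol=0.02, dta_mm=2.0):
            """Per-point gamma; values <= 1 pass the criterion."""
            ref_max = measured.max()
            gam = np.empty_like(measured)
            for i, (p, d) in enumerate(zip(pos, measured)):
                dose_term = (calculated - d) / (dose_tol * ref_max)
                dist_term = (pos - p) / dta_mm
                gam[i] = np.sqrt(dose_term**2 + dist_term**2).min()
            return gam

        depth = np.linspace(0, 300, 601)                    # mm
        measured = np.exp(-depth / 150.0)                   # toy depth-dose curve
        calculated = np.exp(-(depth - 0.4) / 150.0)         # slightly shifted model
        g = gamma_1d(depth, measured, calculated)
        print(f"gamma pass rate: {100 * (g <= 1).mean():.1f}%")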

  16. Anomaly General Circulation Models.

    NASA Astrophysics Data System (ADS)

    Navarra, Antonio

    The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear case is considered in the baroclinic case. Results from the barotropic model indicate that a relation between the stationary solution and the time-averaged non-linear solution exists. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding solutions of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (nonzonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetrical linear systems of order 10000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case. However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to north Japan, the Pole and Greenland regions. A limited set of higher resolution (R15) experiments indicates that this situation is still present and enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.).
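
    The projection-onto-Krylov-subspace idea survives today as restarted GMRES. A minimal sketch follows, with a random damped sparse operator of order 10000 standing in for the linearized anomaly model; the density and damping values are illustrative assumptions.

        # Solve a large nonsymmetric linear system by projection onto a
        # Krylov subspace (restarted GMRES). The operator is a random sparse
        # stand-in for a linearized anomaly model.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres

        n = 10000
        rng = np.random.default_rng(3)
        A = sp.random(n, n, density=1e-4, random_state=3, format='csr')
        A = A + sp.eye(n) * 2.0           # damping keeps the system well conditioned
        b = rng.normal(size=n)            # forcing (e.g., prescribed heating)

        # restart=50 caps the Krylov subspace at a small fraction of n
        x, info = gmres(A, b, restart=50, maxiter=200, atol=1e-8)
        print("converged" if info == 0 else f"gmres flag {info}",
              "| residual:", np.linalg.norm(A @ x - b))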

  17. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  18. The Relationship between OCT-measured Central Retinal Thickness and Visual Acuity in Diabetic Macular Edema

    PubMed Central

    2008-01-01

    Objective To compare optical coherence tomography (OCT)-measured retinal thickness and visual acuity in eyes with diabetic macular edema (DME) both before and after macular laser photocoagulation. Design Cross-sectional and longitudinal study. Participants 210 subjects (251 eyes) with DME enrolled in a randomized clinical trial of laser techniques. Methods Retinal thickness was measured with OCT and visual acuity was measured with the electronic-ETDRS procedure. Main Outcome Measures OCT-measured center point thickness and visual acuity. Results The correlation coefficients for visual acuity versus OCT center point thickness were 0.52 at baseline and 0.49, 0.36, and 0.38 at 3.5, 8, and 12 months post-laser photocoagulation. The slope of the best fit line to the baseline data was approximately 4.4 letters (95% C.I.: 3.5, 5.3) better visual acuity for every 100 microns decrease in center point thickness at baseline, with no important difference at follow-up visits. Approximately one-third of the variation in visual acuity could be predicted by a linear regression model that incorporated OCT center point thickness, age, hemoglobin A1C, and severity of fluorescein leakage in the center and inner subfields. The correlation between change in visual acuity and change in OCT center point thickening 3.5 months after laser treatment was 0.44, with no important difference at the other follow-up times. A subset of eyes showed paradoxical improvements in visual acuity with increased center point thickening (7–17% at the three time points) or paradoxical worsening of visual acuity with a decrease in center point thickening (18–26% at the three time points). Conclusions There is modest correlation between OCT-measured center point thickness and visual acuity, and modest correlation of changes in retinal thickening and visual acuity following focal laser treatment for DME. However, a wide range of visual acuity may be observed for a given degree of retinal edema, and paradoxical increases in center point thickening with increases in visual acuity, as well as paradoxical decreases in center point thickening with decreases in visual acuity, were not uncommon. Thus, although OCT measurements of retinal thickness represent an important tool in clinical evaluation, they cannot reliably substitute as a surrogate for visual acuity at a given point in time. This study does not address whether short-term changes on OCT are predictive of long-term effects on visual acuity. PMID:17123615

  19. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow principles similar to the diagnostic measures used in linear regression, adapted to the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
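
    The case-deletion idea is easy to prototype: refit the discriminant model with each observation omitted and track how the posterior probabilities of the remaining points move. The sketch below uses toy blob data and a simple max-shift summary, not the authors' exact measure or graphical display.

        # Leave-one-out influence on posterior probabilities in LDA.
        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        X, y = make_blobs(n_samples=60, centers=2, cluster_std=2.0, random_state=4)
        base = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)[:, 1]

        influence = np.zeros(len(X))
        for i in range(len(X)):
            keep = np.arange(len(X)) != i
            fit = LinearDiscriminantAnalysis().fit(X[keep], y[keep])
            # summarize movement of everyone else's posterior when i is omitted
            influence[i] = np.abs(fit.predict_proba(X[keep])[:, 1] - base[keep]).max()

        print("most influential points:", np.argsort(influence)[-5:])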

  20. Lattice Boltzmann methods for global linear instability analysis

    NASA Astrophysics Data System (ADS)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of a circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of numerical instabilities appearing when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement that would make the proposed methodology competitive with established approaches for global instability analysis are discussed.
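
    Stripped of the LBM specifics, modal global instability analysis is a large sparse eigenvalue problem for the linearized evolution operator. The sketch below assembles a 1D advection-diffusion operator as a stand-in (not a linearized lattice Boltzmann operator) and asks ARPACK for the least-stable modes; grid size and flow parameters are illustrative.

        # Modal linear instability analysis as a sparse eigenvalue problem,
        # dq/dt = L q: assemble L, then extract the largest-real-part modes.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigs

        n, dx, U, nu = 400, 1.0 / 400, 1.0, 1e-3
        main = -2 * nu / dx**2 * np.ones(n)
        off = (nu / dx**2) * np.ones(n - 1)
        L = sp.diags([main, off - U / (2 * dx), off + U / (2 * dx)],
                     [0, 1, -1], format='csc')   # central-difference operator

        # leading (largest real part) eigenvalues decide global stability;
        # they are negative here because this toy flow is stable
        vals, modes = eigs(L, k=6, which='LR')
        print("least-stable growth rates:", np.sort(vals.real)[::-1][:3])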

  1. Mooring Design Selection of Aquaculture Cage for Indonesian Ocean

    NASA Astrophysics Data System (ADS)

    Mulyadi, Y.; Syahroni, N.; Sambodho, K.; Zikra, M.; Wahyudi; Adia, H. B. P.

    2018-03-01

    Fish production is important for the economy of fishing communities and for ensuring food security. Climate change poses a threat to fish productivity. Therefore, one solution offered is to cultivate certain fish, especially those with high economic value, using offshore aquaculture technology. The Sea Station cage is one offshore aquaculture cage model that has been used in several locations. As a floating structure, the Sea Station cage needs a mooring system to maintain its position. This paper presents a selection analysis of mooring system designs for the Sea Station cage model suited to Indonesian ocean conditions. Three mooring configurations are considered: a linear array, a rectangular array, and a 4-point mooring. A nylon mooring rope was selected for all three configurations, with a diameter of 104 mm and a breaking force of 2.3 MN. Based on the comparison of the three configurations, the best mooring configuration is the linear array, with a rope tension of 217 kN and a safety factor of 0.2 according to DNVGL OS-E301.

  2. A Planar Quasi-Static Constraint Mode Tire Model

    DTIC Science & Technology

    2015-07-10

    Ma, Rui; Ferris, John B.

    The proposed model strikes a balance between heuristic tire models (such as a linear point-follower) that lack the fidelity to make accurate chassis load predictions and computationally intensive models.

  3. Mean-force-field and mean-spherical approximations for the electric microfield distribution at a charged point in the charged-hard-particles fluid

    NASA Astrophysics Data System (ADS)

    Rosenfeld, Yaakov

    1989-01-01

    The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas follows immediately from this general observation.

  4. Generalized linear models and point count data: statistical considerations for the design and analysis of monitoring studies

    Treesearch

    Nathaniel E. Seavy; Suhel Quader; John D. Alexander; C. John Ralph

    2005-01-01

    The success of avian monitoring programs to effectively guide management decisions requires that studies be efficiently designed and data be properly analyzed. A complicating factor is that point count surveys often generate data with non-normal distributional properties. In this paper we review methods of dealing with deviations from normal assumptions, and we focus...

  5. Healthcare waste management during disasters and its effects on climate change: Lessons from 2010 earthquake and cholera tragedies in Haiti.

    PubMed

    Raila, Emilia M; Anderson, David O

    2017-03-01

    Despite the growing effects of human activities on climate change throughout the world, and the global South in particular, scientists have yet to understand how poor healthcare waste management practices in an emergency influence climate change. This article presents new findings on the climate change risks of healthcare waste disposal during and after the 2010 earthquake and cholera disasters in Haiti. The researchers analysed quantities of healthcare waste incinerated by the United Nations Mission in Haiti over 60 months (2009 to 2013). The aim was to determine the relationship between incinerated healthcare waste weights and the time of occurrence of the two disasters, and any associated climate change effects. The Pearson product-moment correlation coefficient indicated a weak correlation between the quantities of healthcare waste disposed of and the time of occurrence of the actual emergencies (r(58) = 0.406, p = 0.001). Correspondingly, linear regression analysis indicated a relatively linear data trend (R2 = 0.16, F(1, 58) = 11.42, P = 0.001) with fluctuating scenarios that depicted a sharp rise in 2012, and a time series model showed monthly and yearly variations within the 60 months. The fact that peak healthcare waste incineration occurred 2 years after the 2010 disasters points to the need to minimise wastage of pharmaceuticals by improving logistics management. The Government of Haiti had no data on healthcare waste disposal and practised smoky open burning, indicating a need for capacity building in green healthcare waste management technologies for effective climate change mitigation.

  6. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

    FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii near the magnetic field line passing through the test point. The localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling rf waves using the FullWave code, including calculation of the nonlocal conductivity kernel in 2D tokamak geometry; the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and the results of self-consistent simulations of 2D rf fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  7. BOOK REVIEW: Statistical Mechanics of Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Cambon, C.

    2004-10-01

    This is a handbook for a computational approach to reacting flows, including background material on statistical mechanics. In this sense, the title is somewhat misleading with respect to other books dedicated to the statistical theory of turbulence (e.g. Monin and Yaglom). In the present book, emphasis is placed on modelling (engineering closures) for computational fluid dynamics. The probabilistic (pdf) approach is applied to the local scalar field, motivated first by the nonlinearity of chemical source terms which appear in the transport equations of reacting species. The probabilistic and stochastic approaches are also used for the velocity field and particle position; nevertheless they are essentially limited to Lagrangian models for a local vector, with only single-point statistics, as for the scalar. Accordingly, conventional techniques, such as single-point closures for RANS (Reynolds-averaged Navier-Stokes) and subgrid-scale models for LES (large-eddy simulations), are described and in some cases reformulated using underlying Langevin models and filtered pdfs. Even if the theoretical approach to turbulence is not discussed in general, the essentials of probabilistic and stochastic-processes methods are described, with a useful reminder concerning statistics at the molecular level. The book comprises 7 chapters. Chapter 1 briefly states the goals and contents, with a very clear synoptic scheme on page 2. Chapter 2 presents definitions and examples of pdfs and related statistical moments. Chapter 3 deals with stochastic processes, pdf transport equations, from Kramers-Moyal to Fokker-Planck (for Markov processes), and moment equations. Stochastic differential equations are introduced and their relationship to pdfs described. This chapter ends with a discussion of stochastic modelling. The equations of fluid mechanics and thermodynamics are addressed in chapter 4. Classical conservation equations (mass, velocity, internal energy) are derived from their counterparts at the molecular level. In addition, equations are given for multicomponent reacting systems. The chapter ends with miscellaneous topics, including DNS, (the idea of) the energy cascade, and RANS. Chapter 5 is devoted to stochastic models for the large scales of turbulence. Langevin-type models for velocity (and particle position) are presented, and their various consequences for second-order single-point correlations (Reynolds stress components, Kolmogorov constant) are discussed. These models are then presented for the scalar. The chapter ends with compressible high-speed flows and various models, ranging from k-epsilon to hybrid RANS-pdf. Stochastic models for small-scale turbulence are addressed in chapter 6. These models are based on the concept of a filter density function (FDF) for the scalar, and a more conventional SGS (sub-grid-scale model) for the velocity in LES. The final chapter, chapter 7, is entitled `The unification of turbulence models' and aims at reconciling large-scale and small-scale modelling. This book offers a timely survey of techniques in modern computational fluid mechanics for turbulent flows with reacting scalars. It should be of interest to engineers, while the discussion of the underlying tools, namely pdfs, stochastic and statistical equations should also be attractive to applied mathematicians and physicists.
The book's emphasis on local pdfs and stochastic Langevin models gives a consistent structure to the book and allows the author to cover almost the whole spectrum of practical modelling in turbulent CFD. On the other hand, one might regret that non-local issues are not mentioned explicitly, or even briefly. These problems range from the presence of pressure-strain correlations in the Reynolds stress transport equations to the presence of two-point pdfs in the single-point pdf equation derived from the Navier-Stokes equations. (One may recall that, even without scalar transport, a general closure problem for turbulence statistics results from both non-linearity and non-locality of Navier-Stokes equations, the latter coming from, e.g., the nonlocal relationship of velocity and pressure in the quasi-incompressible case. These two aspects are often intricately linked. It is well known that non-linearity alone is not responsible for the `problem', as evidenced by 1D turbulence without pressure (`Burgulence' from the Burgers equation) and probably 3D (cosmological gas). A local description in terms of pdf for the velocity can resolve the `non-linear' problem, which instead yields an infinite hierarchy of equations in terms of moments. On the other hand, non-locality yields a hierarchy of unclosed equations, with the single-point pdf equation for velocity derived from NS incompressible equations involving a two-point pdf, and so on. The general relationship was given by Lundgren (1967, Phys. Fluids 10 (5), 969-975), with the equation for pdf at n points involving the pdf at n+1 points. The nonlocal problem appears in various statistical models which are not discussed in the book. The simplest example is full RST or ASM models, in which the closure of pressure-strain correlations is pivotal (their counterpart ought to be identified and discussed in equations (5-21) and the following ones). The book does not address more sophisticated non-local approaches, such as two-point (or spectral) non-linear closure theories and models, `rapid distortion theory' for linear regimes, not to mention scaling and intermittency based on two-point structure functions, etc. The book sometimes mixes theoretical modelling and pure empirical relationships, the empirical character coming from the lack of a nonlocal (two-point) approach.) In short, the book is orientated more towards applications than towards turbulence theory; it is written clearly and concisely and should be useful to a large community, interested either in the underlying stochastic formalism or in CFD applications.

  8. Stock volatility and stroke mortality in a Chinese population.

    PubMed

    Zhang, Yuhao; Wang, Xin; Xu, Xiaohui; Chen, Renjie; Kan, Haidong

    2013-09-01

    This work was done to study the relationship between stock volatility and stroke mortality in Shanghai, China. Daily stroke death numbers and stock performance data from 1 January 2006 to 31 December 2008 in Shanghai were collected from the Shanghai Center for Disease Control and Prevention and Shanghai Stock Exchange (SSE), respectively. Data were analysed with overdispersed generalized linear Poisson models, controlling for long-term and seasonal trends of stroke mortality and weather conditions with natural smooth functions, as well as Index closing value, air pollution levels and day of the week. We observed a U-shaped relationship between the Index change and stroke deaths: both rising and falling of the Index were associated with more deaths, and the fewest deaths coincided with little or no change of the Index. We also examined the absolute daily change of the Index in relation to stroke deaths: each 100-point Index change corresponded to 3.22% [95% confidence interval (CI) 0.45-5.49] increase of stroke deaths. We found that stroke deaths fluctuated with daily stock changes in Shanghai, suggesting that stock volatility may adversely affect cerebrovascular health.
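
    The model structure, though not the data, can be sketched with a quasi-Poisson regression in which harmonic terms stand in for the paper's smooth seasonal functions; all coefficients and magnitudes below are simulated assumptions.

        # Overdispersed (quasi-)Poisson regression of daily stroke deaths on
        # the absolute daily index change, with harmonic seasonal terms.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        days = np.arange(1096)
        index_change = np.abs(rng.normal(0, 80, len(days)))   # |daily point change|
        season = np.column_stack([np.sin(2 * np.pi * days / 365.25),
                                  np.cos(2 * np.pi * days / 365.25)])
        mu = np.exp(2.0 + 0.0003 * index_change + 0.05 * season[:, 0])
        deaths = rng.poisson(mu * rng.gamma(10, 0.1, len(days)))  # extra-Poisson noise

        X = sm.add_constant(np.column_stack([index_change, season]))
        res = sm.GLM(deaths, X,
                     family=sm.families.Poisson()).fit(scale='X2')  # quasi-Poisson
        rr = 100 * (np.exp(100 * res.params[1]) - 1)
        print(f"deaths increase per 100-point index change: {rr:.2f}%")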

  9. Could gradual changes in Holocene Saharan landscape have caused the observed abrupt shift in North Atlantic dust deposition?

    NASA Astrophysics Data System (ADS)

    Egerer, Sabine; Claussen, Martin; Reick, Christian; Stanelle, Tanja

    2017-09-01

    The abrupt change in North Atlantic dust deposition found in sediment records has been associated with a rapid large scale transition of Holocene Saharan landscape. We hypothesize that gradual changes in the landscape may have caused this abrupt shift in dust deposition either because of the non-linearity in dust activation or because of the heterogeneous distribution of major dust sources. To test this hypothesis, we investigate the response of North Atlantic dust deposition to a prescribed 1) gradual and spatially homogeneous decrease and 2) gradual southward retreat of North African vegetation and lakes during the Holocene using the aerosol-climate model ECHAM-HAM. In our simulations, we do not find evidence of an abrupt increase in dust deposition as observed in marine sediment records along the Northwest African margin. We conclude that such gradual changes in landscape are not sufficient to explain the observed abrupt changes in dust accumulation in marine sediment records. Instead, our results point to a rapid large-scale retreat of vegetation and lakes in the area of significant dust sources.

  10. Comparison of Modeling Methods to Determine Liver-to-blood Inocula and Parasite Multiplication Rates During Controlled Human Malaria Infection

    PubMed Central

    Douglas, Alexander D.; Edwards, Nick J.; Duncan, Christopher J. A.; Thompson, Fiona M.; Sheehy, Susanne H.; O'Hara, Geraldine A.; Anagnostou, Nicholas; Walther, Michael; Webster, Daniel P.; Dunachie, Susanna J.; Porter, David W.; Andrews, Laura; Gilbert, Sarah C.; Draper, Simon J.; Hill, Adrian V. S.; Bejon, Philip

    2013-01-01

    Controlled human malaria infection is used to measure efficacy of candidate malaria vaccines before field studies are undertaken. Mathematical modeling using data from quantitative polymerase chain reaction (qPCR) parasitemia monitoring can discriminate between vaccine effects on the parasite's liver and blood stages. Uncertainty regarding the most appropriate modeling method hinders interpretation of such trials. We used qPCR data from 267 Plasmodium falciparum infections to compare linear, sine-wave, and normal-cumulative-density-function models. We find that the parameters estimated by these models are closely correlated, and their predictive accuracy for omitted data points was similar. We propose that future studies include the linear model. PMID:23570846

  11. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.

  12. A chain reaction approach to modelling gene pathways.

    PubMed

    Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-08-01

    BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we propose a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, the transformation of genes over time under a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment on the effects of three polyphenols (nutrient treatments), epigallocatechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model. By applying it to microarray data, the chain reaction model computed a set of reaction rates to examine the effects of the three polyphenols (EGCG, genistein, and resveratrol) on gene expression in this pathway during puberty. We first performed statistical analysis to test the time factor on the estrogen synthesis pathway. Global tests were used to evaluate an overall gene expression change during puberty for each experimental group. Then, a chain reaction model was employed to simulate the estrogen synthesis pathway. Specifically, the model computed the reaction rates in a set of ordinary differential equations that describe interactions between genes in the pathway (a reaction rate K from gene A to gene B means that gene A induces gene B at a rate of K per unit; details are given in the "Methods" section). Since disparate changes of gene expression may cause numerical error problems in solving these differential equations, we used an implicit scheme to address this issue. We first applied the chain reaction model to obtain the reaction rates for the control group. A sensitivity study was conducted to evaluate how well the model fits the control group data at Day 50. Results showed a small bias and mean square error. These observations indicated that the model is robust to low random noise and has a good fit for the control group. Then the chain reaction model derived from the control group data was used to predict gene expression at Day 50 for the three polyphenol groups.
If these nutrients affect the estrogen synthesis pathway during puberty, we expect a discrepancy between observed and expected expressions. Results indicated that some genes had large differences in the EGCG (e.g., Hsd3b and Sts) and resveratrol (e.g., Hsd3b and Hrmt12) groups. CONCLUSIONS: In the present study, we have presented (I) experimental studies of the effect of nutrient diets on gene expression changes in a selected estrogen synthesis pathway. This experiment is valuable because it allows us to examine how nutrient-containing diets regulate gene expression in the estrogen synthesis pathway during puberty; (II) global tests to assess an overall association of this particular pathway with the time factor by utilizing generalized linear models to analyze microarray data; and (III) a chain reaction model to simulate the pathway. This is a novel application because we are able to translate the gene pathway into chemical reactions in which each reaction channel describes a gene-gene relationship in the pathway. In the chain reaction model, an implicit scheme is employed to efficiently solve the differential equations. Data analysis results show the proposed model is capable of predicting gene expression changes and demonstrating the effect of nutrient-containing diets on gene expression changes in the pathway. One objective of this study is to explore and develop a numerical approach for simulating gene expression change so that it can be applied and calibrated when data for more time slices are available, and thus can be used to interpolate the expression change at a desired time point without conducting expensive experiments for a large number of time points. Hence, we are not claiming that this is either essential or the most efficient way to simulate this problem, but rather a mathematical/numerical approach that can model the expression change of a large set of genes in a complex pathway. In addition, we understand the limitations of this experiment and realize that it is still far from a complete model for predicting nutrient-gene interactions. The reason is that in the present model the reaction rates were estimated from available data at two time points; hence, the gene expression change depends on the reaction rates as a linear function of the gene expressions. More data sets containing gene expression at various time slices are needed to improve the present model so that non-linear variation of gene expression changes at different times can be predicted.
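
    The numerical core, a linear chain-reaction system advanced with an implicit scheme, can be sketched compactly. Backward Euler is unconditionally stable, which is what makes it tolerant of the disparate rates mentioned above; the 3-gene pathway and all rate constants below are illustrative, not the fitted estrogen-pathway rates.

        # Linear chain-reaction model dx/dt = K x solved with backward
        # (implicit) Euler. Toy pathway A -> B -> C with a stiff rate on B.
        import numpy as np

        K = np.array([[-0.5,  0.0, 0.0],    # A decays while inducing B
                      [ 0.5, -5.0, 0.0],    # B turns over fast (stiff rate)
                      [ 0.0,  5.0, 0.0]])   # C accumulates from B

        x = np.array([1.0, 0.0, 0.0])       # expression at the first time point
        dt, steps = 0.1, 500
        I = np.eye(3)
        M = np.linalg.inv(I - dt * K)       # (I - dt K)^{-1}, factored once

        for _ in range(steps):              # x_{n+1} = (I - dt K)^{-1} x_n
            x = M @ x
        print("expression at final time:", x.round(4))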

  13. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    PubMed

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure-property relationship (QSPR) studies of the melting point (MP) and boiling point (BP) of per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors, and fragment-based descriptors, are presented, as well as a consensus model and its predictions. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, containing 15 MP and 25 BP data points, respectively. This database contains only long-chain perfluoroalkylated chemicals, particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Random regression models for the prediction of days to weight, ultrasound rib eye area, and ultrasound back fat depth in beef cattle.

    PubMed

    Speidel, S E; Peel, R K; Crews, D H; Enns, R M

    2016-02-01

    Genetic evaluation research designed to reduce the required days to a specified end point has received very little attention in the pertinent scientific literature, even though its economic importance was first discussed in 1957. There are many production scenarios in today's beef industry, making a prediction of the required number of days to a single end point a suboptimal option. Random regression is an attractive alternative for calculating days to weight (DTW), days to ultrasound back fat (DTUBF), and days to ultrasound rib eye area (DTUREA) genetic predictions that could overcome the weaknesses of a single end point prediction. The objective of this study was to develop random regression approaches for the prediction of DTW, DTUREA, and DTUBF. Data were obtained from the Agriculture and Agri-Food Canada Research Centre, Lethbridge, AB, Canada. Data consisted of records on 1,324 feedlot cattle spanning 1999 to 2007. Individual animals averaged 5.77 observations, with weights, ultrasound rib eye area (UREA), ultrasound back fat depth (UBF), and ages ranging from 293 to 863 kg, 73.39 to 129.54 cm², 1.53 to 30.47 mm, and 276 to 519 d, respectively. Random regression models using Legendre polynomials were used to regress age of the individual on weight, UREA, and UBF. Fixed effects in the model included an overall fixed regression of age on end point (weight, UREA, and UBF) nested within breed, to account for the mean relationship between age and weight, as well as a contemporary group effect consisting of breed of the animal (Angus, Charolais, and Charolais-sired), feedlot pen, and year of measure. Likelihood ratio tests were used to determine the appropriate random polynomial order. Use of the quadratic polynomial did not account for any additional genetic variation in days for DTW (P > 0.11), DTUREA (P > 0.18), or DTUBF (P > 0.20) when compared with the linear random polynomial. Heritability estimates from the linear random regression for DTW ranged from 0.54 to 0.74, corresponding to end points of 293 and 863 kg, respectively. Heritability for DTUREA ranged from 0.51 to 0.34 and for DTUBF from 0.55 to 0.37. These estimates correspond to UREA end points of 35 and 125 cm² and UBF end points of 1.53 and 30 mm, respectively. This range of heritability shows DTW, DTUREA, and DTUBF to be highly heritable and indicates that selection pressure aimed at reducing the number of days to reach a finish weight end point can result in genetic change, given sufficient data.

  15. Unitarity of spin-2 theories with linearized Weyl symmetry in D=2+1 dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmazi, D.

    2009-10-15

    Here we prove unitarity of the recently found fourth-order (in derivatives) self-dual model of spin-2 by investigating the analytic structure of its propagator. The model describes massive particles of helicity +2 (or -2) in D=2+1 dimensions and corresponds to the quadratic truncation of a higher derivative topologically massive gravity about a flat background. It is an intriguing example of a theory where a term in the propagator of the form 1/[□^2(□ - m^2)] does not lead to ghosts. The crucial role of the linearized Weyl symmetry in getting rid of the ghosts is pointed out. We use a peculiar pair of gauge conditions which fix the linearized reparametrizations and linearized Weyl symmetries separately.

  16. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus  <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
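
    A loose analogue of this adaptive idea is available off the shelf: refit a variational Bayesian Gaussian mixture on each new block of data with warm starting, so the previous solution seeds the next fit and slow drift is tracked without labels. The sketch below applies that analogue to simulated two-class features; it is not the authors' GMMAC.

        # Track a drifting activation pattern with a warm-started variational
        # Bayesian Gaussian mixture (unsupervised; no ground-truth labels).
        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(6)
        gmm = BayesianGaussianMixture(n_components=2, warm_start=True,
                                      max_iter=200, random_state=6)

        center = np.array([2.0, 0.0])                 # class-1 activation center
        for block in range(5):
            rest = rng.normal([0, 0], 0.5, (100, 2))  # class 0: baseline
            task = rng.normal(center, 0.5, (100, 2))  # class 1: activation
            gmm.fit(np.vstack([rest, task]))          # previous fit seeds this one
            print(f"block {block}: component means\n{gmm.means_.round(2)}")
            center += np.array([0.3, 0.3])            # activation pattern drifts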

  17. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts [computerized simulation]

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal controls research program based on linear quadratic regulator (LQR) theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
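
    The core LQR computation at a single operating point fits in a few lines. In the sketch below, the two-state A, B matrices and the Q, R weights are illustrative placeholders, not the F100 engine model from the report.

        # LQR gain for a toy two-state linear plant dx/dt = Ax + Bu.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[-2.0, 1.0],            # linearized plant dynamics
                      [ 0.0, -4.0]])
        B = np.array([[0.0], [1.5]])
        Q = np.diag([10.0, 1.0])              # state penalty
        R = np.array([[1.0]])                 # control penalty

        P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
        K = np.linalg.inv(R) @ B.T @ P        # optimal feedback u = -K x
        print("LQR gain:", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))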

  18. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
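
    The simplest member of this family, a one-way random-effects model for a long-term repeated experiment, can be fitted in a few lines. The sketch below simulates an accelerometer calibrated on several days (all magnitudes are assumed) and separates between-day from within-day variability, the two components that would feed an uncertainty budget.

        # One-way random-effects model: random intercept per calibration day.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        days = np.repeat(np.arange(10), 8)               # 10 days, 8 repeats each
        day_effect = rng.normal(0, 0.02, 10)[days]       # between-day variability
        y = 9.81 + day_effect + rng.normal(0, 0.01, 80)  # within-day repeatability

        df = pd.DataFrame({"y": y, "day": days})
        res = sm.MixedLM.from_formula("y ~ 1", data=df, groups="day").fit()
        print(res.summary())   # between-day and residual variance components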

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, L.F.

    Calculations of the two-point correlation functions in the scaling limit for two statistical models are presented. In Part I, the Ising model with a linear defect is studied for T < T_c and T > T_c. The transfer matrix method of Onsager and Kaufman is used. The energy-density correlation is given by functions related to the modified Bessel functions. The dispersion expansions for the spin-spin correlation functions are derived. The dominant behavior for large separations at T ≠ T_c is extracted. It is shown that these expansions lead to systems of Fredholm integral equations. In Part II, the electric correlation function of the eight-vertex model for T < T_c is studied. The eight-vertex model decouples into two independent Ising models when the four-spin coupling vanishes. To first order in the four-spin coupling, the electric correlation function is related to a three-point function of the Ising model. This relation is systematically investigated and the full dispersion expansion (to first order in the four-spin coupling) is obtained. The result is a new kind of structure which, unlike those of many solvable models, is apparently not expressible in terms of linear integral equations.

  20. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
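
    The linear-versus-kernel contrast at the heart of this comparison can be sketched on simulated genotype-like data; sizes, effects, and hyperparameters below are toys, and this is not the paper's GBLUP pipeline.

        # Linear ridge vs. non-linear RBF kernel ridge on simulated SNP data.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(8)
        X = rng.binomial(2, 0.3, (1000, 500)).astype(float)   # SNP dosages 0/1/2
        beta = rng.normal(0, 0.05, 500)
        y = X @ beta + 0.1 * (X[:, 0] * X[:, 1]) + rng.normal(0, 1.0, 1000)

        tr, va = slice(0, 800), slice(800, None)              # train/validation
        for name, model in [("linear", Ridge(alpha=10.0)),
                            ("RBF kernel", KernelRidge(kernel="rbf", alpha=1.0,
                                                       gamma=1e-3))]:
            model.fit(X[tr], y[tr])
            acc = np.corrcoef(model.predict(X[va]), y[va])[0, 1]
            print(f"{name}: predictive correlation {acc:.2f}")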

  1. Examining a Dual-Process Model of Desensitization and Hypersensitization to Community Violence in African American Male Adolescents.

    PubMed

    Gaylord-Harden, Noni K; Bai, Grace J; Simic, Dusan

    2017-10-01

    The purpose of the current study was to examine a dual-process model of reactivity to community violence exposure in African American male adolescents from urban communities. The model focused on desensitization and hypersensitization effects as well as desensitization and hypersensitization as predictors of aggressive behavior. Participants were 133 African American male high school students, mean age = 15.17 years, SD = 0.96. Participants completed measures of exposure to community violence, depressive symptoms, hyperarousal symptoms, aggressive beliefs, and aggressive behaviors at two time points. Community violence exposure predicted changes in aggression, β = .25, p = .004, and physiological arousal, β = .22, p = .010, over time, but not aggressive beliefs. The curvilinear association between community violence exposure and changes in depression over time was not significant, β = .42, p = .083, but there was a significant linear association between the exposure to community violence (ECV) and changes in levels of depression over time, β = .21, p = .014. Results indicated a significant mediation effect for hyperarousal on the association between community violence exposure and aggressive behavior, B = 0.20, 95% CI = [0.04, 0.54]. Results showed support for physiological hypersensitization, with hypersensitization increasing the risk for aggressive behavior. Copyright © 2017 International Society for Traumatic Stress Studies.

  2. On the Statistical Errors of RADAR Location Sensor Networks with Built-In Wi-Fi Gaussian Linear Fingerprints

    PubMed Central

    Zhou, Mu; Xu, Yu Bin; Ma, Lin; Tian, Shuo

    2012-01-01

    The expected errors of RADAR sensor networks with linear probabilistic location fingerprints inside buildings with varying Wi-Fi Gaussian strength are discussed. As far as we know, the statistical errors of equal and unequal-weighted RADAR networks have been suggested as a better way to evaluate the behavior of different system parameters and the deployment of reference points (RPs). However, up to now, there is still not enough related work on the relations between the statistical errors, system parameters, number and interval of the RPs, let alone calculating the correlated analytical expressions of concern. Therefore, in response to this compelling problem, under a simple linear distribution model, much attention will be paid to the mathematical relations of the linear expected errors, number of neighbors, number and interval of RPs, parameters in logarithmic attenuation model and variations of radio signal strength (RSS) at the test point (TP) with the purpose of constructing more practical and reliable RADAR location sensor networks (RLSNs) and also guaranteeing the accuracy requirements for the location based services in future ubiquitous context-awareness environments. Moreover, the numerical results and some real experimental evaluations of the error theories addressed in this paper will also be presented for our future extended analysis. PMID:22737027

  3. A recurrent neural network for solving bilevel linear programming problem.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the model has the fewest state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, the equilibrium point sequence of the proposed NN is shown to converge approximately to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model show the excellent performance of the proposed recurrent NN.

  4. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proof method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low-rank matrix approximation methods, to string-valued inference problems.
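
    The flavour of the linear algebra view can be conveyed with a toy weighted automaton: the total weight of all strings it generates is a geometric series in the transition matrix, which collapses to a matrix inverse when the spectral radius is below one. A minimal numpy sketch (toy weights, not the TKF91 model):

      import numpy as np

      # Toy 2-state weighted automaton: T[i, j] is the total weight of moving
      # from state i to state j while emitting one symbol; stop[i] is the
      # weight of terminating in state i; start is the initial distribution.
      T = np.array([[0.5, 0.2],
                    [0.1, 0.6]])
      stop = np.array([0.3, 0.3])
      start = np.array([1.0, 0.0])

      assert np.max(np.abs(np.linalg.eigvals(T))) < 1.0  # geometric series converges

      # Normalization = sum over all strings = start' (I + T + T^2 + ...) stop
      #               = start' (I - T)^{-1} stop.
      Z = start @ np.linalg.solve(np.eye(2) - T, stop)
      print("normalization:", Z)   # equals 1 here because each row's weights sum to 1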

  5. An extended macro model accounting for acceleration changes with memory and numerical tests

    NASA Astrophysics Data System (ADS)

    Cheng, Rongjun; Ge, Hongxia; Sun, Fengxin; Wang, Jufeng

    2018-09-01

    Considering the effect of acceleration changes with memory, an improved continuum model of traffic flow is proposed in this paper. By applying linear stability theory, we derive the new model's linear stability condition. Through nonlinear analysis, the KdV-Burgers equation is derived to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulation is carried out to study the extended traffic flow model, exploring how acceleration changes with memory affect each car's velocity, density, fuel consumption and exhaust emissions. Numerical results demonstrate that acceleration changes with memory have a significant negative effect on the dynamic characteristics of traffic flow. Furthermore, the results verify that acceleration changes with memory deteriorate the stability of traffic flow and increase total fuel consumption and emissions during the evolution of a small perturbation.

  6. Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.

    PubMed

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2014-07-17

    Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we ask here whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling model. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in force-field water.
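
    The quadratic-coupling effect is easy to reproduce numerically: map a simulated Ornstein-Uhlenbeck bath coordinate to a transition frequency with linear plus quadratic terms, and the frequency statistics acquire skewness. A sketch with arbitrary parameters:

      import numpy as np

      rng = np.random.default_rng(2)
      tau, sigma, dt, n = 1.0, 1.0, 0.01, 100_000
      q = np.empty(n)
      q[0] = 0.0
      noise = rng.normal(0.0, 1.0, n)
      # Euler-Maruyama for dq = -(q/tau) dt + sigma*sqrt(2/tau) dW (stationary var = sigma^2).
      for i in range(1, n):
          q[i] = q[i-1] - (q[i-1] / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * noise[i]

      c1, c2 = 1.0, 0.3                       # linear and quadratic coupling constants
      omega = c1 * q + c2 * q**2              # transition frequency fluctuation

      skew = np.mean((omega - omega.mean())**3) / np.std(omega)**3
      print("frequency skewness (0 for purely linear coupling):", round(skew, 3))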

  7. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  8. An Extension of the Partial Credit Model with an Application to the Measurement of Change.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.; Ponocny, Ivo

    1994-01-01

    An extension to the partial credit model, the linear partial credit model, is considered under the assumption of a certain linear decomposition of the item x category parameters into basic parameters. A conditional maximum likelihood algorithm for estimating basic parameters is presented and illustrated with simulation and an empirical study. (SLD)

  9. Photon-Z mixing in the Weinberg-Salam model: Effective charges and the a = -3 gauge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baulieu, L.; Coquereaux, R.

    1982-04-15

    We study some properties of the Weinberg-Salam model connected with photon-Z mixing. We solve the linear Dyson-Schwinger equations between full and 1PI boson propagators. The task is made easier by the two-point function Ward identities that we derive to all orders and in any gauge. Some aspects of the renormalization of the model are also discussed. We display the exact mass-dependent one-loop two-point functions involving the photon and Z field in any linear xi-gauge. The special gauge a = xi^(-1) = -3 is shown to play a peculiar role. In this gauge, the Z field is multiplicatively renormalizable (at the one-loop level), and one can construct both electric and weak effective charges of the theory from the photon and Z propagators, with a very simple expression similar to that of the QED Petermann, Stueckelberg, Gell-Mann and Low charge.

  10. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation that uses assumed mode shapes, rather than the usual finite element approach, is also presented.

  11. Linear and non-linear infrared response of one-dimensional vibrational Holstein polarons in the anti-adiabatic limit: Optical and acoustical phonon models

    NASA Astrophysics Data System (ADS)

    Falvo, Cyril

    2018-02-01

    The theory of linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small polaron point of view, which is valid in the anti-adiabatic limit. Two types of phonon baths are considered: optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time-dependence of the two-dimensional infrared spectrum indicates that bath-mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.

  12. Neuronal modelling of baroreflex response to orthostatic stress

    NASA Astrophysics Data System (ADS)

    Samin, Azfar

    The accelerations experienced in aerial combat can cause G-induced loss of consciousness (GLOC) due to a critical reduction in cerebral blood circulation. The development of smart protective equipment requires understanding of how the brain processes blood pressure (BP) information in response to acceleration. We present a biologically plausible model of the Baroreflex to investigate the neural correlates of short-term BP control under acceleration or orthostatic stress. The neuronal network model, which employs an integrate-and-fire representation of a biological neuron, comprises the sensory, motor, and the central neural processing areas that form the Baroreflex. Our modelling strategy is to test hypotheses relating to the encoding mechanisms of multiple sensory inputs to the nucleus tractus solitarius (NTS), the site of central neural processing. The goal is to run simulations and reproduce model responses that are consistent with the variety of available experimental data. Model construction and connectivity are inspired by the available anatomical and neurophysiological evidence that points to a barotopic organization in the NTS, and the presence of frequency-dependent synaptic depression, which provides a mechanism for generating non-linear local responses in NTS neurons that result in quantifiable dynamic global baroreflex responses. The entire physiological range of BP and rate of change of BP variables is encoded in a palisade of NTS neurons such that the spike responses approximate Gaussian 'tuning' curves. An adapting weighted-average decoding scheme computes the motor responses and a compensatory signal regulates the heart rate (HR). Model simulations suggest that: (1) the NTS neurons can encode the hydrostatic pressure difference between two vertically separated sensory receptor regions at +Gz, and use changes in that difference for the regulation of HR; (2) even though NTS neurons do not fire with a cardiac rhythm seen in the afferents, pulse-rhythmic activity is regained downstream provided the input phase information is preserved centrally; (3) frequency-dependent synaptic depression, which causes temporal variations in synaptic strength due to changes in input frequency, is a possible mechanism of non-linear dynamic baroreflex gain control. Synaptic depression enables the NTS neuron to encode dBP/dt but to lose information about the steady state firing of the afferents.
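
    A minimal integrate-and-fire sketch of the Gaussian 'tuning' idea described above: a model NTS neuron receives a drive that peaks at its preferred pressure, so its firing rate traces an approximately Gaussian tuning curve (all constants are illustrative, not from the thesis):

      import numpy as np

      def lif_rate(drive, t_sim=2.0, dt=1e-4, tau=0.02, v_th=1.0):
          """Spike rate of a leaky integrate-and-fire neuron under constant drive."""
          v, spikes = 0.0, 0
          for _ in range(int(t_sim / dt)):
              v += dt * (-v / tau + drive)   # membrane equation dv/dt = -v/tau + I
              if v >= v_th:                  # threshold crossing -> spike and reset
                  v, spikes = 0.0, spikes + 1
          return spikes / t_sim

      preferred_bp, width = 100.0, 15.0      # mmHg; one neuron in the palisade
      for bp in range(60, 150, 10):
          drive = 80.0 * np.exp(-0.5 * ((bp - preferred_bp) / width) ** 2)
          print(bp, "mmHg ->", lif_rate(drive), "Hz")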

  13. Transition probability, dynamic regimes, and the critical point of financial crisis

    NASA Astrophysics Data System (ADS)

    Tang, Yinan; Chen, Ping

    2015-07-01

    An empirical and theoretical analysis of financial crises is conducted based on statistical mechanics in non-equilibrium physics. The transition probability provides a new tool for diagnosing a changing market. Both calm and turbulent markets can be described by the birth-death process for price movements driven by identical agents. The transition probability in a time window can be estimated from stock market indexes. Positive and negative feedback trading behaviors can be revealed by the upper and lower curves in transition probability. Three dynamic regimes are discovered from two time periods including linear, quasi-linear, and nonlinear patterns. There is a clear link between liberalization policy and market nonlinearity. Numerical estimation of a market turning point is close to the historical event of the US 2008 financial crisis.

  14. Bipartite charge fluctuations in one-dimensional Z2 superconductors and insulators

    NASA Astrophysics Data System (ADS)

    Herviou, Loïc; Mora, Christophe; Le Hur, Karyn

    2017-09-01

    Bipartite charge fluctuations (BCFs) have been introduced to provide an experimental indication of many-body entanglement. They have proved themselves to be a very efficient and useful tool to characterize quantum phase transitions in a variety of quantum models conserving the total number of particles (or magnetization for spin systems) and can be measured experimentally. We study the BCFs in generic one-dimensional Z2 (topological) models including the Kitaev superconducting wire model, the Ising chain, or various topological insulators such as the Su-Schrieffer-Heeger model. The considered charge (either the fermionic number or the relative density) is no longer conserved, leading to macroscopic fluctuations of the number of particles. We demonstrate that at phase transitions characterized by a linear dispersion, the BCFs probe the change in a winding number that allows one to pinpoint the transition and corresponds to the topological invariant for standard models. Additionally, we prove that a subdominant logarithmic contribution is still present at the exact critical point. Its quantized coefficient is universal and characterizes the critical model. Results are extended to the Rashba topological nanowires and to the XYZ model.

  15. Can we detect a nonlinear response to temperature in European plant phenology?

    NASA Astrophysics Data System (ADS)

    Jochner, Susanne; Sparks, Tim H.; Laube, Julia; Menzel, Annette

    2016-10-01

    Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C^-1 (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ~14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might still be sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
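
    The linear-versus-sigmoid comparison can be reproduced in outline with scipy: fit both response forms to onset dates versus spring temperature and compare AIC. Synthetic data stand in for the PEP725 records:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      temp = rng.uniform(-2, 14, 200)                 # spring mean temperature, degC
      # Synthetic onset dates with a saturating (sigmoidal) temperature response.
      true = 90 + 60 / (1 + np.exp(0.5 * (temp - 6)))
      onset = true + rng.normal(0, 4, temp.size)

      def linear(t, a, b):
          return a + b * t

      def sigmoid(t, lower, upper, slope, midpoint):
          return lower + (upper - lower) / (1 + np.exp(slope * (t - midpoint)))

      def aic(y, yhat, k):
          rss = np.sum((y - yhat) ** 2)
          return y.size * np.log(rss / y.size) + 2 * k

      p_lin, _ = curve_fit(linear, temp, onset)
      p_sig, _ = curve_fit(sigmoid, temp, onset, p0=[90, 150, 0.5, 6])
      print("AIC linear :", round(aic(onset, linear(temp, *p_lin), 2), 1))
      print("AIC sigmoid:", round(aic(onset, sigmoid(temp, *p_sig), 4), 1))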

  16. Coupling climate and hydrological models to evaluate the impact of climate change on run of the river hydropower schemes from UK study sites

    NASA Astrophysics Data System (ADS)

    Pasten-Zapata, Ernesto; Jones, Julie; Moggridge, Helen

    2015-04-01

    As climate change is expected to generate variations in the Earth's precipitation and temperature, the water cycle will also experience changes. Consequently, water users will have to be prepared for possible changes in future water availability. The main objective of this research is to evaluate the impacts of climate change on river regimes and the implications for the operation and feasibility of run of the river hydropower schemes by analyzing four UK study sites. Run of the river schemes are selected for analysis due to their higher dependence on the available river flow volumes when compared to storage hydropower schemes that can rely on previously accumulated water volumes (linked to poster in session HS5.3). Global Climate Models (GCMs) represent the main tool to assess future climate change. In this research, Regional Climate Models (RCMs), which dynamically downscale GCM outputs providing higher resolutions, are used as the starting point to evaluate climate change within the study catchments. RCM daily temperature and precipitation will be downscaled to an appropriate scale for impact studies and bias corrected using different statistical methods: linear scaling, local intensity scaling, power transformation, variance scaling and delta change correction. The downscaled variables will then be coupled to hydrological models that have been previously calibrated and validated against observed daily river flow data. The coupled hydrological and climate models will then be used to simulate historic river flows that are compared to daily observed values in order to evaluate the model accuracy. As this research will employ several different RCMs (from the EURO-CORDEX simulations), downscaling and bias correction methodologies, greenhouse emission scenarios and hydrological models, the uncertainty of each element will be estimated. According to their uncertainty magnitude, a prediction of the best downscaling approach (or approaches) is expected to be obtained. The current progress of the project will be presented along with the steps to be followed in the future.
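
    Of the bias-correction methods listed, linear scaling is the simplest: a month-wise additive correction for temperature and a multiplicative one for precipitation. A schematic version (hypothetical daily arrays with a month label per day):

      import numpy as np

      def linear_scaling(obs, sim_hist, sim_future, months_hist, months_future, kind):
          """Month-wise linear scaling: additive for temperature, multiplicative for precipitation."""
          corrected = sim_future.astype(float).copy()
          for m in range(1, 13):
              h, f = months_hist == m, months_future == m
              if kind == "temperature":
                  corrected[f] += obs[h].mean() - sim_hist[h].mean()
              else:  # precipitation
                  corrected[f] *= obs[h].mean() / max(sim_hist[h].mean(), 1e-9)
          return corrected

      # Tiny demo with one year of synthetic daily temperatures.
      rng = np.random.default_rng(4)
      months = np.repeat(np.arange(1, 13), 30)
      obs = 10 + rng.normal(0, 2, months.size)
      sim = obs + 1.5                                   # RCM runs 1.5 degC too warm
      fixed = linear_scaling(obs, sim, sim, months, months, "temperature")
      print("bias before:", round((sim - obs).mean(), 2), "after:", round((fixed - obs).mean(), 2))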

  17. Patterns of Recovery from Pain after Cesarean Delivery.

    PubMed

    Booth, Jessica L; Sharpe, Emily E; Houle, Timothy T; Harris, Lynnette; Curry, Regina S; Aschenbrenner, Carol A; Eisenach, James C

    2018-06-13

    We know very little about the change in pain in the first 2 months after surgery. To address this gap, we studied 530 women scheduled for elective cesarean delivery who completed daily pain diaries for two months after surgery via text messaging. Over 82% of subjects missed fewer than 10 diary entries and were included in the analysis. Completers were more likely to be Caucasian, non-smokers, and with fewer previous pregnancies than non-completers. Daily worst pain intensity ratings for the previous 24 hours were fit to a log(time) function and allowed to change to a different function up to 3 times according to a Bayesian criterion. All women had at least one change point, occurring 22 ± 9 days postoperatively, and 81% of women had only one change, most commonly to a linear function at 0 pain. Approximately 9% of women were predicted to have pain 2 months after surgery, similar to previous observations. Cluster analysis revealed 6 trajectories of recovery from pain. Predictors of cluster membership included severity of acute pain, perceived stress, surgical factors, and smoking status. These data demonstrate feasibility but considerable challenges to this approach to data acquisition. The form of the initial process of recovery from pain is common to all women, with divergence of patterns at 2-4 weeks after cesarean delivery. The change point model accurately predicts recovery from pain, its parameters can be used to assess predictors of speed of recovery, and it may be useful for future observational, forecasting, and interventional trials.
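
    The trajectory model described here (a log(time) segment that may switch to another function at an estimated change point) can be sketched as a grid search over one change point scored by BIC; this simplifies the paper's Bayesian criterion, and the diary data below are synthetic:

      import numpy as np

      rng = np.random.default_rng(5)
      days = np.arange(1, 61)
      # Synthetic worst-pain scores: log(time) decline, then flat near zero after day 25.
      pain = np.clip(6 - 2.2 * np.log(days), 0, None) + rng.normal(0, 0.4, days.size)

      def bic(y, yhat, k):
          rss = np.sum((y - yhat) ** 2)
          return y.size * np.log(rss / y.size) + k * np.log(y.size)

      best = None
      for cp in range(10, 51):                       # candidate change point (day)
          early, late = days <= cp, days > cp
          fit1 = np.polyfit(np.log(days[early]), pain[early], 1)  # log(time) phase
          fit2 = np.polyfit(days[late], pain[late], 1)            # linear phase
          yhat = np.where(early,
                          np.polyval(fit1, np.log(days)),
                          np.polyval(fit2, days))
          score = bic(pain, yhat, k=5)               # 4 coefficients + the change point
          if best is None or score < best[0]:
              best = (score, cp)
      print("estimated change point: day", best[1])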

  18. Deformed Palmprint Matching Based on Stable Regions.

    PubMed

    Wu, Xiangqian; Zhao, Qiushi

    2015-12-01

    Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.

  19. Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields.

    PubMed

    Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2016-05-01

    To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM-progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.
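
    The GEM step of this pipeline (clustering fields with a Gaussian mixture fitted by EM) maps directly onto scikit-learn's GaussianMixture; a schematic with random stand-in data in place of the visual field measurements:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)
      # Stand-in for total-deviation values at 52 visual-field test points per eye.
      normal = rng.normal(0.0, 1.0, size=(500, 52))
      abnormal = rng.normal(-4.0, 2.0, size=(300, 52)) * rng.binomial(1, 0.3, (300, 52))
      fields = np.vstack([normal, abnormal])

      # EM fit of a two-component Gaussian mixture, then hard cluster assignment.
      gem = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
      labels = gem.fit_predict(fields)
      print("cluster sizes:", np.bincount(labels))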

  1. Linear Response Path Following: A Molecular Dynamics Method To Simulate Global Conformational Changes of Protein upon Ligand Binding.

    PubMed

    Tamura, Koichi; Hayashi, Shigehiko

    2015-07-14

    Molecular functions of proteins are often fulfilled by global conformational changes that couple with local events such as the binding of ligand molecules. High molecular complexity of proteins has, however, been an obstacle to obtain an atomistic view of the global conformational transitions, imposing a limitation on the mechanistic understanding of the functional processes. In this study, we developed a new method of molecular dynamics (MD) simulation called the linear response path following (LRPF) to simulate a protein's global conformational changes upon ligand binding. The method introduces a biasing force based on a linear response theory, which determines a local reaction coordinate in the configuration space that represents linear coupling between local events of ligand binding and global conformational changes and thus provides one with fully atomistic models undergoing large conformational changes without knowledge of a target structure. The overall transition process involving nonlinear conformational changes is simulated through iterative cycles consisting of a biased MD simulation with an updated linear response force and a following unbiased MD simulation for relaxation. We applied the method to the simulation of global conformational changes of the yeast calmodulin N-terminal domain and successfully searched out the end conformation. The atomistically detailed trajectories revealed a sequence of molecular events that properly lead to the global conformational changes and identified key steps of local-global coupling that induce the conformational transitions. The LRPF method provides one with a powerful means to model conformational changes of proteins such as motors and transporters where local-global coupling plays a pivotal role in their functional processes.

  2. M_W = 7.2-7.4 Estimated for A.D. 900 Seattle Fault Earthquake by Modeling the Uplift of a Lidar-Mapped Marine Terrace

    NASA Technical Reports Server (NTRS)

    Muller, Jordan R.; Harding, David J.

    2006-01-01

    Inverse modeling of slip on the Seattle fault system, constrained by elevations of uplifted marine terraces, provides a well-constrained estimate of the magnitude of the largest known upper-crust earthquake in the Puget Sound region within the past 2500 years. The terrace elevations that constrain the slip inversion are extracted from elevation and slope images generated from LIDAR surveys of the Puget Sound collected in 1996-2002. The images reveal a single uplifted terrace, dated to 1000 cal yr B.P. near Restoration Point, which is morphologically continuous along the southern shoreline of Bainbridge Island and is visible at comparable elevations within a 25 km by 12 km region encompassing coastlines of West Seattle, Bremerton, East Bremerton, Port Orchard, and Waterman Point. Considering sea level changes since A.D. 900, the maximum uplift magnitudes of shoreline inner edges approach 9 m and are located at the southernmost coastline of Bainbridge Island and the northern tip of Waterman Point, while tilt magnitudes are modest - approaching 0.1 degrees. For each of several different Seattle fault geometry interpretations, we use a linear inversion code to solve for distributed slip on the fault surfaces. Moment magnitudes of 7.2 to 7.4 are calculated directly from the different slip solutions. In general, the greatest slip of the A.D. 900 event was confined to the frontal thrust of the Seattle fault system and was centered beneath Puget Sound between Restoration Point and Alki Point.
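
    The 'linear inversion code' step amounts to solving a linear system d = G s between slip on fault patches and surface uplift, typically with a positivity constraint on thrust-sense slip. A toy version with a made-up Green's function matrix:

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(7)
      n_obs, n_patch = 40, 10
      # Made-up elastic Green's functions: uplift at each shoreline site per unit slip.
      G = np.abs(rng.normal(0.05, 0.02, size=(n_obs, n_patch)))
      true_slip = np.concatenate([np.full(5, 15.0), np.zeros(5)])   # metres
      uplift = G @ true_slip + rng.normal(0, 0.1, n_obs)            # terrace elevations

      # Non-negative least squares keeps the recovered slip physically one-signed.
      slip, resid = nnls(G, uplift)
      print("recovered slip (m):", np.round(slip, 1))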

  3. Helping Seniors Plan for Posthospital Discharge Needs Before a Hospitalization Occurs: Results from the Randomized Control Trial of PlanYourLifespan.org.

    PubMed

    Lindquist, Lee A; Ramirez-Zohfeld, Vanessa; Sunkara, Priya D; Forcucci, Chris; Campbell, Dianne S; Mitzen, Phyllis; Ciolino, Jody D; Kricke, Gayle; Seltzer, Anne; Ramirez, Ana V; Cameron, Kenzie A

    2017-11-01

    Investigate the effect of PlanYourLifespan.org (PYL) on knowledge of posthospital discharge options. Multisite randomized controlled trial. Nonhospitalized adults, aged ≥65 years, living in urban, suburban, and rural areas of Texas, Illinois, and Indiana. PYL is a national, publicly available tool that provides education on posthospital therapy choices and local home-based resources. Participants completed an in-person baseline survey, followed by exposure to intervention or attention control (AC) websites, then 1-month and 3-month telephone surveys. The primary knowledge outcome was measured with 6 items (possible 0-6 points) pertaining to hospital discharge needs. Among 385 participants randomized, mean age was 71.9 years (standard deviation 5.6) and 79.5% of participants were female. At 1 month, the intervention group had a 0.6 point change (standard deviation = 1.6) versus the AC group who had a -0.1 point change in knowledge score. Linear mixed modeling results suggest sex, health literacy level, level of education, income, and history of high blood pressure/kidney disease were significant predictors of knowledge over time. Controlling for these variables, treatment effect remained significant (P < 0.0001). Seniors who used PYL demonstrated an increased understanding of posthospitalization and home services compared to the control group. © 2017 Society of Hospital Medicine

  4. Fourier transform infrared reflectance spectra of latent fingerprints: a biometric gauge for the age of an individual.

    PubMed

    Hemmila, April; McGill, Jim; Ritter, David

    2008-03-01

    To determine whether changes that are linear with age can be found in fingerprint infrared spectra, a partial least squares (PLS1) regression of 155 fingerprint infrared spectra against the person's age was constructed. The regression produced a linear model of age as a function of spectrum with a root mean square error of calibration of less than 4 years, showing an inflection at about 25 years of age. The spectral ranges emphasized by the regression do not correspond to the highest-concentration constituents of the fingerprints. Separate linear regression models for old and young people can be constructed with even more statistical rigor. The success of the regression demonstrates that a combination of constituents can be found that changes linearly with age, with a significant shift around puberty.
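
    PLS1 regression of spectra on age is available off the shelf; a schematic with simulated spectra standing in for the 155 fingerprint measurements:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(8)
      n, wavenumbers = 155, 600
      ages = rng.uniform(5, 70, n)
      # Simulated spectra in which a few bands shift linearly with age.
      spectra = rng.normal(0, 0.01, (n, wavenumbers))
      spectra[:, 100:110] += 0.002 * ages[:, None]
      spectra[:, 400:420] -= 0.001 * ages[:, None]

      pls = PLSRegression(n_components=5).fit(spectra, ages)
      pred = pls.predict(spectra).ravel()
      rmse = mean_squared_error(ages, pred) ** 0.5
      print("RMSE of calibration (years):", round(rmse, 2))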

  5. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923
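
    The core of a PRO-style fit is a generalized linear model for binned spike counts with lagged light-flash covariates; a minimal statsmodels version in which the response-function basis is reduced to raw lags (simulated spikes and flashes, not the paper's data):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n_bins, n_lags = 5000, 10
      flashes = rng.binomial(1, 0.02, n_bins)          # input point process (light)
      kernel = np.exp(-np.arange(n_lags) / 3.0)        # true response to a flash
      rate = np.exp(-3.0 + np.convolve(flashes, 2.0 * kernel)[:n_bins])
      spikes = rng.poisson(rate)                       # output point process (neuron)

      # Design matrix: one column per lag of the flash train, plus an intercept.
      X = np.column_stack([np.roll(flashes, lag) for lag in range(n_lags)])
      X[:n_lags] = 0                                   # drop wrap-around rows from roll
      model = sm.GLM(spikes, sm.add_constant(X), family=sm.families.Poisson()).fit()
      print(model.params[1:4], "estimated response at lags 0-2")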

  6. A model for compression-weakening materials and the elastic fields due to contractile cells

    NASA Astrophysics Data System (ADS)

    Rosakis, Phoebus; Notbohm, Jacob; Ravichandran, Guruswami

    2015-12-01

    We construct a homogeneous, nonlinear elastic constitutive law that models aspects of the mechanical behavior of inhomogeneous fibrin networks. Fibers in such networks buckle when in compression. We model this as a loss of stiffness in compression in the stress-strain relations of the homogeneous constitutive model. Problems that model a contracting biological cell in a finite matrix are solved. It is found that matrix displacements and stresses induced by cell contraction decay slower (with distance from the cell) in a compression weakening material than linear elasticity would predict. This points toward a mechanism for long-range cell mechanosensing. In contrast, an expanding cell would induce displacements that decay faster than in a linear elastic matrix.

  7. Climatic effects on mosquito abundance in Mediterranean wetlands

    PubMed Central

    2014-01-01

    Background: The impact of climate change on vector-borne diseases is highly controversial. One of the principal points of debate is whether or not climate influences mosquito abundance, a key factor in disease transmission. Methods: To test this hypothesis, we analysed ten years of data (2003–2012) from biweekly surveys to assess inter-annual and seasonal relationships between the abundance of seven mosquito species known to be pathogen vectors (West Nile virus, Usutu virus, dirofilariasis and Plasmodium sp.) and several climatic variables in two wetlands in SW Spain. Results: Within-season abundance patterns were related to climatic variables (i.e. temperature, rainfall, tide heights, relative humidity and photoperiod) that varied according to the mosquito species in question. Rainfall during winter months was positively related to Culex pipiens and Ochlerotatus detritus annual abundances. Annual maximum temperatures were non-linearly related to annual Cx. pipiens abundance, while annual mean temperatures were positively related to annual Ochlerotatus caspius abundance. Finally, we modelled shifts in mosquito abundances using the A2 and B2 temperature and rainfall climate change scenarios for the period 2011–2100. While Oc. caspius, an important anthropophilic species, may increase in abundance, no changes are expected for Cx. pipiens or the salt-marsh mosquito Oc. detritus. Conclusions: Our results highlight that the effects of climate are species-specific, place-specific and non-linear and that linear approaches will therefore overestimate the effect of climate change on mosquito abundances at high temperatures. Climate warming does not necessarily lead to an increase in mosquito abundance in natural Mediterranean wetlands and will affect, above all, species such as Oc. caspius whose numbers are not closely linked to rainfall and are influenced, rather, by local tidal patterns and temperatures. The final impact of changes in vector abundance on disease frequency will depend on the direct and indirect effects of climate and other parameters related to pathogen amplification and spillover on humans and other vertebrates. PMID:25030527

  8. A linear regression model for predicting PNW estuarine temperatures in a changing climate

    EPA Science Inventory

    Pacific Northwest coastal regions, estuaries, and associated ecosystems are vulnerable to the potential effects of climate change, especially to changes in nearshore water temperature. While predictive climate models simulate future air temperatures, no such projections exist for...

  9. Predicting birth weight with conditionally linear transformation models.

    PubMed

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  10. Identifying the Factors That Influence Change in SEBD Using Logistic Regression Analysis

    ERIC Educational Resources Information Center

    Camilleri, Liberato; Cefai, Carmel

    2013-01-01

    Multiple linear regression and ANOVA models are widely used in applications since they provide effective statistical tools for assessing the relationship between a continuous dependent variable and several predictors. However these models rely heavily on linearity and normality assumptions and they do not accommodate categorical dependent…

  11. A Statistical Approach to Passive Target Tracking.

    DTIC Science & Technology

    1981-04-01

    (Fragmentary OCR of a scanned report; only isolated passages are recoverable.) ... a fixed heading of 90 degrees ... maximum likelihood estimators ... The adjustment for a changing error variance is easy using the linear model approach; i.e., use weighted least squares. Cited: F. A. Graybill, An Introduction to Linear Statistical Models, Vol. 1, New York: John Wiley & Sons, Inc. (1961). [NCSC TM 311-81]
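
    The fragment's point, adjusting for a changing error variance by weighted least squares, looks like this in practice (weights set to the inverse error variance; toy data):

      import numpy as np

      rng = np.random.default_rng(10)
      t = np.linspace(0, 10, 100)
      sigma = 0.5 + 0.3 * t                    # measurement noise grows with range/time
      y = 2.0 + 1.5 * t + rng.normal(0, sigma)

      X = np.column_stack([np.ones_like(t), t])
      W = np.diag(1.0 / sigma**2)              # weights = inverse error variance
      # Solve the WLS normal equations (X' W X) beta = X' W y.
      beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
      print("WLS estimates (intercept, slope):", np.round(beta, 3))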

  12. Geometry-based ensembles: toward a structural characterization of the classification boundary.

    PubMed

    Pujol, Oriol; Masip, David

    2009-06-01

    This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points-points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.

  13. [Trend in mortality from external causes in pregnant and postpartum women and its relationship to socioeconomic factors in Colombia, 1998-2010].

    PubMed

    Salazar, Edwin; Buitrago, Carolina; Molina, Federico; Alzate, Catalina Arango

    2015-05-01

    Determine the trend in mortality from external causes in pregnant and postpartum women and its relationship to socioeconomic factors. Descriptive study, based on the official registries of deaths reported by the National Statistics Agency, 1998-2010. The trend was analyzed using Poisson regressions. Bivariate correlations and multiple linear regression models were constructed to explore the relationship between mortality and socioeconomic factors: human development index, Gini index, gross domestic product, unsatisfied basic needs, unemployment rate, poverty, extreme poverty, quality of life index, illiteracy rate, and percentage of affiliation to the Social Security System. A total of 2 223 female deaths from external causes were recorded, of which 1 429 occurred during pregnancy and 794 in the postpartum period. The gross mortality rate dropped from 30.7 per 100 000 live births plus fetal deaths in 1998 to 16.7 in 2010. The risk of dying from these causes showed a downward trend with no significant inflection points. The multiple linear regression model showed a correlation between mortality and extreme poverty and the illiteracy rate, suggesting that these indicators could explain 89.4% of the yearly change in mortality from external causes among pregnant and postpartum women in Colombia. Mortality from external causes in pregnant and postpartum women showed a significant downward trend that may be explained by important socioeconomic changes in the country, including a decrease in extreme poverty and in the illiteracy rate.

  14. Serum uric acid and progression of diabetic nephropathy in type 1 diabetes.

    PubMed

    Pilemann-Lyberg, S; Lindhardt, M; Persson, Frederik; Andersen, S; Rossing, P

    2018-05-01

    Uric acid (UA) is a risk factor for CKD. We evaluated UA in relation to change in GFR in patients with type 1 diabetes. Post hoc analysis of a trial of losartan in diabetic nephropathy, mean follow-up 3 years (IQR 1.5-3.5). UA was measured at baseline. Primary end-point was change in measured GFR. UA was tested in a linear regression model adjusted for known progression factors (gender, HbA1c, systolic blood pressure, cholesterol, baseline GFR and baseline urinary albumin excretion rate (UAER)). Baseline UA was 0.339 mmol/l (SD ±0.107), GFR 87 ml/min/1.73 m^2 (±23), geometric mean UAER 1023 mg/24 h (IQR 631-1995). Mean rate of decline in GFR was 4.6 (3.7) ml/min/year. In the upper quartile of baseline UA the mean decline in GFR from baseline to the end of the study was 6.2 (4.9) ml/min/1.73 m^2, versus 4.1 (3.1) ml/min/1.73 m^2 in the three lower quartiles of UA (p = 0.088). In a linear model including baseline covariates (UAER, GFR, total cholesterol, HDL cholesterol) UA was associated with decline in GFR (r^2 = 0.45, p < 0.001). Uric acid was weakly associated with decline in GFR in type 1 diabetic patients with overt nephropathy. Copyright © 2018. Published by Elsevier Inc.

  15. Lattice modeling and application of independent component analysis to high power, long bunch beams in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Kolski, Jeffrey

    The linear lattice properties of the Proton Storage Ring (PSR) at the Los Alamos Neutron Science Center (LANSCE) in Los Alamos, NM, were measured and applied to determine a better linear accelerator model. We found that the initial model was deficient in predicting the vertical focusing strength. The additional vertical focusing was located through fundamental understanding of experiment and statistically rigorous analysis. An improved model was constructed and compared against the initial model and measurement at operation set points and set points far away from nominal and was shown to indeed be an enhanced model. Independent component analysis (ICA) is a tool for data mining in many fields of science. Traditionally, ICA is applied to turn-by-turn beam position data as a means to measure the lattice functions of the real machine. Due to the diagnostic setup for the PSR, this method is not applicable. A new application method for ICA is derived, ICA applied along the length of the bunch. The ICA modes represent motions within the beam pulse. Several of the dominant ICA modes are experimentally identified.
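
    Computationally, 'ICA along the bunch' reduces to an ICA decomposition of signals sampled along the bunch at many monitors; scikit-learn's FastICA gives the flavour (synthetic mixtures, not PSR data):

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(11)
      s = np.linspace(0, 1, 400)                         # position along the bunch
      sources = np.vstack([np.sin(40 * s),               # e.g. a fast oscillatory mode
                           np.sign(np.sin(7 * s)),       # a slower structural mode
                           rng.normal(0, 0.3, s.size)])  # a noise mode
      mixing = rng.normal(size=(20, 3))                  # 20 monitors see mixtures
      signals = mixing @ sources

      ica = FastICA(n_components=3, random_state=0)
      modes = ica.fit_transform(signals.T)               # independent modes along bunch
      print("recovered mode array shape:", modes.shape)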

  16. Detection of the change point in oxygen uptake during an incremental exercise test using recursive residuals: relationship to the plasma lactate accumulation and blood acid base balance.

    PubMed

    Zoladz, J A; Szkutnik, Z; Majerczak, J; Duda, K

    1998-09-01

    The purpose of this study was to develop a method to determine the power output at which oxygen uptake (VO2) during an incremental exercise test begins to rise non-linearly. A group of 26 healthy non-smoking men [mean age 22.1 (SD 1.4) years, body mass 73.6 (SD 7.4) kg, height 179.4 (SD 7.5) cm, maximal oxygen uptake (VO2max) 3.726 (SD 0.363) l·min^-1], experienced in laboratory tests, were the subjects in this study. They performed an incremental exercise test on a cycle ergometer at a pedalling rate of 70 rev·min^-1. The test started at a power output of 30 W, followed by increases of 30 W every 3 min. At 5 min prior to the first exercise intensity and at the end of each stage of the exercise protocol, blood samples (1 ml each) were taken from an antecubital vein. The samples were analysed for plasma lactate concentration [La-]pl, partial pressure of O2 and CO2, and hydrogen ion concentration [H+]b. The lactate threshold (LT) in this study was defined as the highest power output above which [La-]pl showed a sustained increase of more than 0.5 mmol·l^-1 per step. VO2 was measured breath-by-breath. In the analysis of the change point (CP) of VO2 during the incremental exercise test, a two-phase model was assumed for the third-minute data of each step of the test: X_i = a·t_i + b + ε_i for i = 1, 2, ..., T, and E(X_i) > a·t_i + b for i = T+1, ..., n, where X_1, ..., X_n are independent and ε_i ~ N(0, σ^2). In the first phase, a linear relationship between VO2 and power output was assumed, whereas in the second phase an additional increase in VO2 above the values expected from the linear model was allowed. The power output at which the first phase ended was called the change point in oxygen uptake (CP-VO2). The identification of the model consisted of two steps: testing for the existence of a CP and estimating its location. Both procedures were based on suitably normalised recursive residuals. We showed that in 25 out of 26 subjects it was possible to determine the CP-VO2 as described in our model. The power output at CP-VO2 amounted to 136.8 (SD 31.3) W, only 11 W (non-significantly) higher than the power output corresponding to the LT. The VO2 at CP-VO2 amounted to 1.828 (SD 0.356) l·min^-1 [48.9 (SD 7.9)% of VO2max]. The [La-]pl at CP-VO2, amounting to 2.57 (SD 0.69) mmol·l^-1, was significantly elevated (P < 0.01) above the resting level [1.85 (SD 0.46) mmol·l^-1]; however, the [H+]b at CP-VO2, amounting to 45.1 (SD 3.0) nmol·l^-1, was not significantly different from the value at rest, which amounted to 44.14 (SD 2.79) nmol·l^-1. An increase of power output of 30 W above CP-VO2 was accompanied by a significant increase in [H+]b above the resting level (P = 0.03).
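
    A simplified version of the recursive-residual idea: fit the linear phase sequentially, track the one-step-ahead prediction residuals with a one-sided drift statistic, and flag the change point where they rise systematically. The paper's exact normalisation and formal test are not reproduced; the data and thresholds below are synthetic:

      import numpy as np

      rng = np.random.default_rng(12)
      power = np.arange(30, 331, 30).astype(float)            # W, one value per 3-min stage
      vo2 = 0.5 + 0.0095 * power + rng.normal(0, 0.03, power.size)
      vo2[power > 180] += 0.002 * (power[power > 180] - 180)   # extra rise above the CP

      cusum, flagged = 0.0, None
      for k in range(4, power.size):
          # Fit the linear phase on stages 1..k, then predict stage k+1.
          a, b = np.polyfit(power[:k], vo2[:k], 1)
          resid = vo2[k] - (a * power[k] + b)
          cusum = max(0.0, cusum + resid - 0.01)              # one-sided drift statistic
          if cusum > 0.05 and flagged is None:
              flagged = power[k]
      print("change point near", flagged, "W")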

  17. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space and thereby enable prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574

  18. Linear and non-linear bias: predictions versus measurements

    NASA Astrophysics Data System (ADS)

    Hoffmann, K.; Bel, J.; Gaztañaga, E.

    2017-02-01

    We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the Marenostrum Institut de Ciències de l'Espai (MICE) Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of haloes and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous papers using the same data. We find an overall variation of the linear bias measurements and predictions of ~5 per cent with respect to results from two-point correlations for different halo samples with masses between ~10^12 and 10^15 h^-1 M⊙ at redshifts z = 0.0 and 0.5. The second- and third-order bias parameters from the different methods show larger variations, but with consistent trends in mass and redshift. The various bias measurements reveal a tight relation between the linear and the quadratic bias parameters, which is consistent with results from the literature based on simulations with different cosmologies. Such a universal relation might improve constraints on cosmological models, derived from second-order clustering statistics at small scales or higher order clustering statistics.

  19. Edge-defined film-fed growth of thin silicon sheets

    NASA Technical Reports Server (NTRS)

    Ettouney, H. M.; Kalejs, J. P.

    1984-01-01

    Finite element analysis was used on two length scales to understand the crystal growth of thin silicon sheets. Thermal-capillary models of entire ribbon growth systems were developed. Microscopic modeling of the morphological structure of melt/solid interfaces beyond the point of linear instability was carried out. The application to the silicon system is discussed.

  20. CRITICAL EVALUATION OF THE DIFFUSION HYPOTHESIS IN THE THEORY OF POROUS MEDIA VOLATILE ORGANIC COMPOUND (VOC) SOURCES AND SINKS

    EPA Science Inventory

    The paper proposes three alternative, diffusion-limited mathematical models to account for volatile organic compound (VOC) interactions with indoor sinks, using the linear isotherm model as a reference point. (NOTE: Recent reports by both the U.S. EPA and a study committee of the...
