Olmez, Hülya Kaptan; Aran, Necla
2005-02-01
Mathematical models describing the growth kinetic parameters (lag phase duration and growth rate) of Bacillus cereus as a function of temperature, pH, sodium lactate and sodium chloride concentrations were obtained in this study. In order to obtain a residual distribution closer to a normal distribution, the natural logarithms of the growth kinetic parameters were used in modeling. For reasons of parsimony, the polynomial models were reduced to contain only the coefficients significant at a level of p
2013-01-01
Background The distribution of anopheline mosquitoes is determined by temporally dynamic environmental and human-associated variables, operating over a range of spatial scales. Macro-spatial short-term trends are driven predominantly by prior (lagged) seasonal changes in climate, which regulate the abundance of suitable aquatic larval habitats. Micro-spatial distribution is determined by the location of these habitats, proximity and abundance of available human bloodmeals and prevailing micro-climatic conditions. The challenge of analysing—in a single coherent statistical framework—the lagged and distributed effect of seasonal climate changes simultaneously with the effects of an underlying hierarchy of spatial factors has hitherto not been addressed. Methods Data on Anopheles gambiae sensu stricto and A. funestus collected from households in Kilifi district, Kenya, were analysed using polynomial distributed lag generalized linear mixed models (PDL GLMMs). Results Anopheline density was positively and significantly associated with amount of rainfall between 4 and 47 days, negatively and significantly associated with maximum daily temperature between 5 and 35 days, and positively and significantly associated with maximum daily temperature between 29 and 48 days in the past (depending on Anopheles species). Multiple-occupancy households harboured greater mosquito numbers than single-occupancy households. A significant degree of mosquito clustering within households was identified. Conclusions The PDL GLMMs developed here represent a generalizable framework for analysing hierarchically-structured data in combination with explanatory variables which elicit lagged effects. The framework is a valuable tool for facilitating detailed understanding of determinants of the spatio-temporal distribution of Anopheles. Such understanding facilitates delivery of targeted, cost-effective and, in certain circumstances, preventative antivectorial interventions against malaria. PMID:24330615
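The core of the PDL construction above can be sketched in a few lines: a matrix of lagged exposures is projected onto a low-order polynomial basis so that only a handful of coefficients need to be estimated. The following Python sketch is illustrative only; the data (rainfall, counts) are synthetic placeholders, and the household-level random effects of the full GLMM are omitted, so the example reduces to a plain Poisson GLM fitted with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

def pdl_basis(x, max_lag, degree):
    """Project a matrix of lagged exposures onto a polynomial lag basis."""
    lags = np.column_stack([np.roll(x, lag) for lag in range(max_lag + 1)])
    lags[:max_lag, :] = np.nan                       # lags undefined at the start
    poly = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True)
    return lags @ poly                               # shape: (n, degree + 1)

rng = np.random.default_rng(0)
rainfall = rng.gamma(2.0, 5.0, size=400)             # hypothetical daily rainfall
counts = rng.poisson(3.0, size=400)                  # hypothetical anopheline counts

Z = pdl_basis(rainfall, max_lag=47, degree=3)
keep = ~np.isnan(Z).any(axis=1)
fit = sm.GLM(counts[keep], sm.add_constant(Z[keep]),
             family=sm.families.Poisson()).fit()

# Recover the implied effect of rainfall at each individual lag
lag_effects = np.vander(np.arange(48), 4, increasing=True) @ fit.params[1:]
print(lag_effects[:5])
```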
Liao, Jiaqiang; Yu, Shicheng; Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity; however, few studies have analyzed cluster effects among geographical units. The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008-2013 using a polynomial distributed lag model. An extra-Poisson multilevel spatial polynomial model was used to describe the relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincially correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. HFMD incidence was spatially heterogeneous across provinces, and the overdispersion scale parameter was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was strongly associated with HFMD incidence: weekly average temperature and weekly temperature difference showed approximately inverse-"V"-shaped and "V"-shaped relationships with HFMD incidence, with lag effects of 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in the northern, central, and southern provinces. Temperature explained most of the variation in HFMD incidence in the southern and northeastern provinces, whereas the eastern and northern provinces still showed high variation in HFMD incidence after adjustment for temperature. We found a relatively strong association between weekly HFMD incidence and weekly average temperature, and this association was spatially heterogeneous across provinces. Future research should explore the risk factors that give rise to the spatially correlated structure or to high variation in HFMD incidence that can be explained by temperature. When analyzing associations between HFMD incidence and climatic variables, spatial heterogeneity among provinces should be evaluated. Moreover, the extra-Poisson multilevel model was capable of modeling the association between overdispersed HFMD incidence and climatic variables.
Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir
2018-01-01
The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than MIDAS, ordinary least square (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own change within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.
Nodal Statistics for the Van Vleck Polynomials
NASA Astrophysics Data System (ADS)
Bourget, Alain
The Van Vleck polynomials naturally arise from the generalized Lamé equation
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
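The moment-matching idea behind this procedure can be illustrated with a short sketch: an Nth-degree polynomial density on a bounded interval [a, b] is determined by requiring its first N raw moments to equal the sample moments, which reduces to a small linear system. This is a minimal illustration of the general approach, not the authors' algorithm (which adds algorithmic safeguards); the support [a, b] and the Weibull test data are assumptions made here.

```python
import numpy as np

def polynomial_pdf_from_moments(moments, a, b):
    """Coefficients c_k of f(x) = sum_k c_k x**k on [a, b] matching raw moments.

    moments[j] must be the j-th raw moment, with moments[0] == 1.
    """
    n = len(moments)
    A = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            p = j + k + 1
            A[j, k] = (b**p - a**p) / p       # integral of x**(j + k) over [a, b]
    return np.linalg.solve(A, np.asarray(moments, dtype=float))

rng = np.random.default_rng(1)
data = rng.weibull(1.5, size=10_000)          # hypothetical sample
m = [np.mean(data**j) for j in range(4)]      # raw moments m_0 .. m_3
c = polynomial_pdf_from_moments(m, a=0.0, b=float(data.max()))
x = np.linspace(0.0, data.max(), 5)
print(np.polyval(c[::-1], x))                 # cubic PDF approximation at x
```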
A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis
2012-01-01
probability distribution for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre for uniformly distributed parameters) ... parameters and windfields will drive our simulations. We will use uncertainty quantification methodology, polynomial chaos quadrature, in combination with data integration to complete the DDDAS loop.
1984-11-01
well. The subspace is found by using the usual linear eigenvector solution in the new enlarged space. This technique was first suggested by Gnanadesikan and Wilk (1966, 1968), and a good description can be found in Gnanadesikan (1977). They suggested using polynomial functions of the original p co... Gnanadesikan, R. (1977), Methods for Statistical Data Analysis of Multivariate Observations, Wiley, New York.
Suicide and meteorological factors in São Paulo, Brazil, 1996-2011: a time series analysis.
Bando, Daniel H; Teng, Chei T; Volpe, Fernando M; Masi, Eduardo de; Pereira, Luiz A; Braga, Alfésio L
2017-01-01
Considering the scarcity of reports from intertropical latitudes and the Southern Hemisphere, we aimed to examine the association between meteorological factors and suicide in São Paulo. Weekly suicide records stratified by sex were gathered. Weekly averages for minimum, mean, and maximum temperature (°C), insolation (hours), irradiation (MJ/m2), relative humidity (%), atmospheric pressure (mmHg), and rainfall (mm) were computed. The time structures of explanatory variables were modeled by polynomial distributed lag applied to the generalized additive model. The model controlled for long-term trends and selected meteorological factors. The total number of suicides was 6,600 (5,073 for men), an average of 6.7 suicides per week (8.7 for men and 2.0 for women). For overall suicides and among men, effects were predominantly acute and statistically significant only at lag 0. Weekly average minimum temperature had the greatest effect on suicide; there was a 2.28% increase (95%CI 0.90-3.69) in total suicides and a 2.37% increase (95%CI 0.82-3.96) among male suicides with each 1 °C increase. This study suggests that an increase in weekly average minimum temperature has a short-term effect on suicide in São Paulo.
Teklehaimanot, Hailay D; Schwartz, Joel; Teklehaimanot, Awash; Lipsitch, Marc
2004-11-19
Timely and accurate information about the onset of malaria epidemics is essential for effective control activities in epidemic-prone regions. Early warning methods that provide earlier alerts (usually by the use of weather variables) may permit control measures to interrupt transmission earlier in the epidemic, perhaps at the expense of some level of accuracy. Expected case numbers were modeled using a Poisson regression with lagged weather factors in a 4th-degree polynomial distributed lag model. For each week, the numbers of malaria cases were predicted using coefficients obtained using all years except that for which the prediction was being made. The effectiveness of alerts generated by the prediction system was compared against that of alerts based on observed cases. The usefulness of the prediction system was evaluated in cold and hot districts. The system predicts the overall pattern of cases well, yet underestimates the height of the largest peaks. Relative to alerts triggered by observed cases, the alerts triggered by the predicted number of cases performed slightly worse, within 5% of the detection system. The prediction-based alerts were able to prevent 10-25% more cases at a given sensitivity in cold districts than in hot ones. The prediction of malaria cases using lagged weather performed well in identifying periods of increased malaria cases. Weather-derived predictions identified epidemics with reasonable accuracy and better timeliness than early detection systems; therefore, the prediction of malarial epidemics using weather is a plausible alternative to early detection systems.
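The leave-one-year-out scheme described above is straightforward to express in code. The sketch below assumes that a design matrix of polynomially constrained lagged weather terms has already been built (here it is a random placeholder), and simply refits a Poisson regression with each year held out in turn; it illustrates the validation scheme, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

def leave_one_year_out(y, X, years):
    """Predict weekly case counts for each year from a model fit to the others."""
    pred = np.empty(len(y), dtype=float)
    for year in np.unique(years):
        train = years != year
        model = sm.GLM(y[train], sm.add_constant(X[train]),
                       family=sm.families.Poisson()).fit()
        pred[~train] = model.predict(sm.add_constant(X[~train]))
    return pred

rng = np.random.default_rng(2)
years = np.repeat(np.arange(2001, 2011), 52)          # ten years of weekly data
X = rng.normal(size=(len(years), 5))                  # placeholder PDL weather terms
y = rng.poisson(20, size=len(years))                  # placeholder malaria counts
print(leave_one_year_out(y, X, years)[:5])
```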
NASA Technical Reports Server (NTRS)
Davis, Randall C.
1988-01-01
The design of a nose cap for a hypersonic vehicle is an iterative process requiring a rapid, easy to use and accurate stress analysis. The objective of this paper is to develop such a stress analysis technique from a direct solution of the thermal stress equations for a spherical shell. The nose cap structure is treated as a thin spherical shell with an axisymmetric temperature distribution. The governing differential equations are solved by expressing the stress solution to the thermoelastic equations in terms of a series of derivatives of the Legendre polynomials. The process of finding the coefficients for the series solution in terms of the temperature distribution is generalized by expressing the temperature along the shell and through the thickness as a polynomial in the spherical angle coordinate. Under this generalization the orthogonality property of the Legendre polynomials leads to a sequence of integrals involving powers of the spherical shell coordinate times the derivative of the Legendre polynomials. The coefficients of the temperature polynomial appear outside of these integrals. Thus, the integrals are evaluated only once and their values tabulated for use with any arbitrary polynomial temperature distribution.
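The tabulation step in the last sentence can be sketched numerically: the integrals of powers of the coordinate against derivatives of Legendre polynomials are computed once by Gauss-Legendre quadrature, and any polynomial temperature distribution is then handled by weighting the tabulated values with its coefficients. This is an illustrative sketch only; the function name and the example coefficients are hypothetical.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_derivative_integrals(k_max, n_max, quad_order=64):
    """Tabulate I[k, n] = integral_{-1}^{1} x**k * P_n'(x) dx."""
    x, w = leggauss(quad_order)
    table = np.zeros((k_max + 1, n_max + 1))
    for n in range(n_max + 1):
        dPn = Legendre.basis(n).deriv()(x)      # P_n'(x) at the quadrature nodes
        for k in range(k_max + 1):
            table[k, n] = np.sum(w * x**k * dPn)
    return table

# Hypothetical polynomial temperature profile T(x) = 300 - 40 x + 12.5 x**2
a = np.array([300.0, -40.0, 12.5])
I = legendre_derivative_integrals(k_max=len(a) - 1, n_max=8)
print(a @ I)    # weights of the temperature profile against each P_n'
```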
On multiple orthogonal polynomials for discrete Meixner measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokin, Vladimir N
2010-12-07
The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.
Leitte, Arne Marian; Schlink, Uwe; Herbarth, Olf; Wiedensohler, Alfred; Pan, Xiao-Chuan; Hu, Min; Richter, Matthia; Wehner, Birgit; Tuch, Thomas; Wu, Zhijun; Yang, Minjuan; Liu, Liqun; Breitner, Susanne; Cyrys, Josef; Peters, Annette; Wichmann, H-Erich; Franck, Ulrich
2011-04-01
The link between concentrations of particulate matter (PM) and respiratory morbidity has been investigated in numerous studies. The aim of this study was to analyze the role of different particle size fractions with respect to respiratory health in Beijing, China. Data on particle size distributions from 3 nm to 1 µm; PM10 (PM ≤ 10 µm), nitrogen dioxide (NO(2)), and sulfur dioxide concentrations; and meteorologic variables were collected daily from March 2004 to December 2006. Concurrently, daily counts of emergency room visits (ERV) for respiratory diseases were obtained from the Peking University Third Hospital. We estimated pollutant effects in single- and two-pollutant generalized additive models, controlling for meteorologic and other time-varying covariates. Time-delayed associations were estimated using polynomial distributed lag, cumulative effects, and single lag models. Associations of respiratory ERV with NO(2) concentrations and 100-1,000 nm particle number or surface area concentrations were of similar magnitude, that is, approximately 5% increase in respiratory ERV with an interquartile range increase in air pollution concentration. In general, particles < 50 nm were not positively associated with ERV, whereas particles 50-100 nm were adversely associated with respiratory ERV, both being fractions of ultrafine particles. Effect estimates from two-pollutant models were most consistent for NO(2). Present levels of air pollution in Beijing were adversely associated with respiratory ERV. NO(2) concentrations seemed to be a better surrogate for evaluating overall respiratory health effects of ambient air pollution than PM(10) or particle number concentrations in Beijing.
Xiao, Hong; Tian, Huai-Yu; Gao, Li-Dong; Liu, Hai-Ning; Duan, Liang-Song; Basta, Nicole; Cazelles, Bernard; Li, Xiu-Jun; Lin, Xiao-Ling; Wu, Hong-Wei; Chen, Bi-Yun; Yang, Hui-Suo; Xu, Bing; Grenfell, Bryan
2014-01-01
China has the highest incidence of hemorrhagic fever with renal syndrome (HFRS) worldwide. Reported cases account for 90% of the total number of global cases. By 2010, approximately 1.4 million HFRS cases had been reported in China. This study aimed to explore the effect of the rodent reservoir, and natural and socioeconomic variables, on the transmission pattern of HFRS. Data on monthly HFRS cases were collected from 2006 to 2010. Dynamic rodent monitoring data, normalized difference vegetation index (NDVI) data, climate data, and socioeconomic data were also obtained. Principal component analysis was performed, and the time-lag relationships between the extracted principal components and HFRS cases were analyzed. Polynomial distributed lag (PDL) models were used to fit and forecast HFRS transmission. Four principal components were extracted. Component 1 (F1) represented rodent density, the NDVI, and monthly average temperature. Component 2 (F2) represented monthly average rainfall and monthly average relative humidity. Component 3 (F3) represented rodent density and monthly average relative humidity. The last component (F4) represented gross domestic product and the urbanization rate. F2, F3, and F4 were significantly correlated, with the monthly HFRS incidence with lags of 4 months (r = -0.289, P<0.05), 5 months (r = -0.523, P<0.001), and 0 months (r = -0.376, P<0.01), respectively. F1 was correlated with the monthly HFRS incidence, with a lag of 4 months (r = 0.179, P = 0.192). Multivariate PDL modeling revealed that the four principal components were significantly associated with the transmission of HFRS. The monthly trend in HFRS cases was significantly associated with the local rodent reservoir, climatic factors, the NDVI, and socioeconomic conditions present during the previous months. The findings of this study may facilitate the development of early warning systems for the control and prevention of HFRS and similar diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaisultanov, Rashid; Eichler, David
2011-03-15
The dielectric tensor is obtained for a general anisotropic distribution function that is represented as a sum over Legendre polynomials. The result is valid over all of k-space. We obtain growth rates for the Weibel instability for some basic examples of distribution functions.
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999) ... developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and extended ...
Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy
NASA Astrophysics Data System (ADS)
Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo
2014-01-01
In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
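The polynomial idea that MIDAs builds on can be illustrated with a coarse-grained (nominal-mass) sketch: each atom contributes a small abundance polynomial, and the molecular isotopic distribution is the product (convolution) of these polynomials. This is a textbook illustration, not MIDAs's optimized implementation; the abundance values are rounded natural abundances.

```python
import numpy as np

# Coefficient i = probability of i extra neutrons relative to the lightest isotope
ISOTOPES = {
    "C": [0.9893, 0.0107],
    "H": [0.999885, 0.000115],
    "N": [0.99636, 0.00364],
    "O": [0.99757, 0.00038, 0.00205],
}

def coarse_isotopic_distribution(formula):
    """Convolve per-atom abundance polynomials, e.g. {'C': 6, 'H': 12, 'O': 6}."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ISOTOPES[element])
    return dist / dist.sum()

print(coarse_isotopic_distribution({"C": 6, "H": 12, "O": 6})[:4])   # glucose
```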
Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Malik, Pradeep; Swaminathan, A.
2010-11-01
In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
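The distribution being characterized can be checked directly for Onemax by enumerating how many one-bits are lost and how many zero-bits are gained under independent flips; each probability is then visibly a polynomial in p. The closed forms in the paper use Krawtchouk polynomials; the brute-force sketch below is only meant to illustrate the object of study.

```python
import numpy as np
from scipy.stats import binom

def onemax_mutation_pmf(n, k, p):
    """Exact PMF of the child's Onemax fitness when a parent with fitness k
    (k one-bits out of n) has every bit flipped independently with probability p."""
    pmf = np.zeros(n + 1)
    for lost in range(k + 1):                 # one-bits flipped to zero
        for gained in range(n - k + 1):       # zero-bits flipped to one
            pmf[k - lost + gained] += (binom.pmf(lost, k, p)
                                       * binom.pmf(gained, n - k, p))
    return pmf

dist = onemax_mutation_pmf(n=20, k=12, p=0.05)
print(dist.argmax(), round(dist.max(), 4))    # most likely child fitness
```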
Effect of lag time distribution on the lag phase of bacterial growth - a Monte Carlo analysis
USDA-ARS?s Scientific Manuscript database
The objective of this study is to use Monte Carlo simulation to evaluate the effect of lag time distribution of individual bacterial cells incubated under isothermal conditions on the development of lag phase. The growth of bacterial cells of the same initial concentration and mean lag phase duration...
Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials
NASA Astrophysics Data System (ADS)
Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong
2018-04-01
This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to allow rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector, fed by an axial-mode helical antenna, is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, while others may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.
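A deformation surface of the kind described above is easy to reproduce: a few Zernike terms are evaluated on the unit disk and summed with chosen weights. The sketch below uses the standard radial polynomial formula; the chosen terms (defocus plus coma) and their amplitudes are hypothetical, and no normalization constant is applied.

```python
import numpy as np
from math import factorial

def zernike(n, m, rho, theta):
    """Zernike term Z_n^m on the unit disk (cosine for m >= 0, sine for m < 0)."""
    ma = abs(m)
    radial = np.zeros_like(rho)
    for k in range((n - ma) // 2 + 1):
        radial += ((-1)**k * factorial(n - k)
                   / (factorial(k) * factorial((n + ma) // 2 - k)
                      * factorial((n - ma) // 2 - k))) * rho**(n - 2 * k)
    return radial * (np.cos(ma * theta) if m >= 0 else np.sin(ma * theta))

# Hypothetical surface error map: defocus (n=2, m=0) plus coma (n=3, m=1)
rho, theta = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 2 * np.pi, 181))
error = 0.2 * zernike(2, 0, rho, theta) + 0.05 * zernike(3, 1, rho, theta)
print(float(error.std()))     # simple summary of the simulated deformation
```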
Some rules for polydimensional squeezing
NASA Technical Reports Server (NTRS)
Manko, Vladimir I.
1994-01-01
The review of the following results is presented: for mixed-state light of an N-mode electromagnetic field described by a Wigner function of generic Gaussian form, the photon distribution function is obtained and expressed explicitly in terms of Hermite polynomials of 2N variables. The moments of this distribution are calculated and expressed as functions of matrix invariants of the dispersion matrix. The role of a new uncertainty relation depending on the photon-state mixing parameter is elucidated. New sum rules for Hermite polynomials of several variables are found. The photon statistics of polymode even and odd coherent light and squeezed polymode Schroedinger cat light are given explicitly. The photon distribution for polymode squeezed number states, expressed in terms of multivariable Hermite polynomials, is discussed.
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2008-10-01
We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e-[phi](x), giving a unified treatment for the so-called Freud (i.e., when [phi] has polynomial growth at infinity) and Erdös (when [phi] grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.
Maadooliat, Mehdi; Huang, Jianhua Z.
2013-01-01
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence–structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/∼madoliat/LagSVD) that can be used to produce informative animations. PMID:22926831
Miled, Rabeb Bennour; Guillier, Laurent; Neves, Sandra; Augustin, Jean-Christophe; Colin, Pierre; Besse, Nathalie Gnanou
2011-06-01
Cells of six strains of Cronobacter were subjected to dry stress and stored for 2.5 months at ambient temperature. The individual cell lag time distributions of recovered cells were characterized at 25 °C and 37 °C in non-selective broth. The individual cell lag times were deduced from the times taken by cultures from individual cells to reach an optical density threshold. In parallel, growth curves for each strain at high contamination levels were determined in the same growth conditions. In general, the extreme value type II distribution with a shape parameter fixed to 5 (EVIIb) was the most effective at describing the 12 observed distributions of individual cell lag times. Recently, a model for characterizing individual cell lag time distribution from population growth parameters was developed for other food-borne pathogenic bacteria such as Listeria monocytogenes. We confirmed this model's applicability to Cronobacter by comparing the mean and the standard deviation of individual cell lag times to populational lag times observed with high initial concentration experiments. We also validated the model in realistic conditions by studying growth in powdered infant formula decimally diluted in Buffered Peptone Water, which represents the first enrichment step of the standard detection method for Cronobacter. Individual lag times and the pooling of samples significantly affect detection performances. Copyright © 2010 Elsevier Ltd. All rights reserved.
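The link between individual-cell lag times and the apparent population lag can be sketched as follows: individual lags are drawn from an extreme value type II (Frechet) distribution with shape 5, and the population lag is the one implied by summing the exponentially growing contributions of the individual cells. The growth rate, scale parameter, and cell number below are illustrative assumptions, and the population-lag formula is the standard relation for cells growing at a common rate once their own lag has elapsed, not this study's fitted model.

```python
import numpy as np
from scipy.stats import invweibull   # the Frechet / extreme value type II family

rng = np.random.default_rng(3)
mu = 0.7                             # hypothetical specific growth rate (1/h)
n_cells = 1_000

# Individual-cell lag times, shape parameter fixed to 5 as in the EVIIb model
lag_i = invweibull.rvs(5, loc=0.0, scale=3.0, size=n_cells, random_state=rng)

# Apparent population lag implied by the individual lags
lag_pop = -np.log(np.mean(np.exp(-mu * lag_i))) / mu
print(round(lag_i.mean(), 2), round(lag_i.std(), 2), round(lag_pop, 2))
```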
On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland W.
1992-01-01
The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
Modelling of capital asset pricing by considering the lagged effects
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Bon, A. Talib bin; Supian, S.
2017-01-01
In this paper, the problem of modelling the Capital Asset Pricing Model (CAPM) with lagged effects is discussed. It is assumed that asset returns are influenced by the market return and the return on risk-free assets. The relationship between asset returns, the market return, and the return on risk-free assets is analysed using a CAPM regression equation and a distributed-lag CAPM regression equation. Building on the distributed-lag CAPM regression, this paper also develops a Koyck-transformed CAPM regression equation. The results show that the Koyck-transformed CAPM regression has the advantage of simplicity, requiring only three parameters, compared with the distributed-lag CAPM regression.
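The appeal of the Koyck transformation is that an infinite geometric distributed lag collapses into a regression on the contemporaneous market excess return and the asset's own lagged excess return, leaving only three parameters. The sketch below fits that reduced form by ordinary least squares on synthetic returns; all numbers are placeholders, not the paper's data or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 500
mkt_excess = rng.normal(0.0, 0.01, size=T)       # hypothetical market excess returns
asset_excess = 0.001 + 0.9 * mkt_excess + rng.normal(0.0, 0.005, size=T)

# Koyck-transformed CAPM: y_t = const + beta0 * x_t + lambda * y_{t-1} + error
y = asset_excess[1:]
X = sm.add_constant(np.column_stack([mkt_excess[1:], asset_excess[:-1]]))
fit = sm.OLS(y, X).fit()
print(fit.params)                                # [const, beta0, lambda]
```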
Nuclear Blast Response Computer Program. Volume I. Program Description.
1981-08-01
[Figure 6. Blast Orientations for Aero Module. Figure and table residue: the original page shows blast orientation geometry relative to the vertical planes normal to the XAAS and YAAS axes, together with a list of temporary arrays (ANUM, ADEN, ATEN) used for forming transfer-function polynomials.]
Higher order derivatives of R-Jacobi polynomials
NASA Astrophysics Data System (ADS)
Das, Sourav; Swaminathan, A.
2016-06-01
In this work, the R-Jacobi polynomials defined on the nonnegative real axis, related to the F-distribution, are considered. Using their Sturm-Liouville system, higher order derivatives are constructed. The orthogonality property of these higher-order R-Jacobi polynomials is obtained, along with their normal form, self-adjoint form and hypergeometric representation. Interesting results on the interpolation formula and Gaussian quadrature formulae are obtained, with numerical examples.
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless Sensor Networks (WSNs) monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing WSN methods transmit sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome this routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution first groups the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions using Bayes' rule. Detection of similar events is accomplished from the Bayes probabilities and reported to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the target objects of similar events observed by different sensors; these are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. Energy-efficient routing paths for the sensor nodes are created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing communication overhead. PMID:26426701
LateBiclustering: Efficient Heuristic Algorithm for Time-Lagged Bicluster Identification.
Gonçalves, Joana P; Madeira, Sara C
2014-01-01
Identifying patterns in temporal data is key to uncover meaningful relationships in diverse domains, from stock trading to social interactions. Also of great interest are clinical and biological applications, namely monitoring patient response to treatment or characterizing activity at the molecular level. In biology, researchers seek to gain insight into gene functions and dynamics of biological processes, as well as potential perturbations of these leading to disease, through the study of patterns emerging from gene expression time series. Clustering can group genes exhibiting similar expression profiles, but focuses on global patterns denoting rather broad, unspecific responses. Biclustering reveals local patterns, which more naturally capture the intricate collaboration between biological players, particularly under a temporal setting. Despite the general biclustering formulation being NP-hard, considering specific properties of time series has led to efficient solutions for the discovery of temporally aligned patterns. Notably, the identification of biclusters with time-lagged patterns, suggestive of transcriptional cascades, remains a challenge due to the combinatorial explosion of delayed occurrences. Herein, we propose LateBiclustering, a sensible heuristic algorithm enabling a polynomial rather than exponential time solution for the problem. We show that it identifies meaningful time-lagged biclusters relevant to the response of Saccharomyces cerevisiae to heat stress.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
NASA Astrophysics Data System (ADS)
Davis, J. K.; Vincent, G. P.; Hildreth, M.; Kightlinger, L.; Carlson, C.; Wimberly, M. C.
2017-12-01
South Dakota has the highest annual incidence of human cases of West Nile virus (WNV) in all US states, and human cases can vary wildly among years; predicting WNV risk in advance is a necessary exercise if public health officials are to respond efficiently and effectively to risk. Case counts are associated with environmental factors that affect mosquitoes, avian hosts, and the virus itself. They are also correlated with entomological risk indices obtained by trapping and testing mosquitoes. However, neither weather nor insect data alone provide a sufficient basis to make timely and accurate predictions, and combining them into models of human disease is not necessarily straightforward. Here we present lessons learned in three years of making real-time forecasts of this threat to public health. Various methods of integrating data from NASA's North American Land Data Assimilation System (NLDAS) with mosquito surveillance data were explored in a model comparison framework. We found that a model of human disease summarizing weather data (by polynomial distributed lags with seasonally-varying coefficients) and mosquito data (by a mixed-effects model that smooths out these sparse and highly-variable data) made accurate predictions of risk, and was generalizable enough to be recommended in similar applications. A model based on lagged effects of temperature and humidity provided the most accurate predictions. We also found that model accuracy was improved by allowing coefficients to vary smoothly throughout the season, giving different weights to different predictor variables during different parts of the season.
DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. T. P. HUTAURUK, D. G. IRELAND, G. ROSNER
2009-04-01
Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K+ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1 where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
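Fitting a truncated Legendre expansion to an angular distribution and comparing truncation orders can be sketched with a simple least-squares fit. The data below are synthetic (not CLAS measurements), the fit is plain least squares rather than the Bayesian model comparison used in the analysis, and m = 0 is assumed so the associated Legendre functions reduce to ordinary Legendre polynomials.

```python
import numpy as np
from scipy.special import lpmv

def fit_legendre_series(cos_theta, dsdo, l_max, m=0):
    """Least-squares coefficients a_l of sum_l a_l P_l^m(cos theta)."""
    design = np.column_stack([lpmv(m, l, cos_theta) for l in range(m, l_max + 1)])
    coeff, *_ = np.linalg.lstsq(design, dsdo, rcond=None)
    return coeff

rng = np.random.default_rng(5)
cos_theta = np.linspace(-0.95, 0.95, 40)
dsdo = (0.3 + 0.1 * cos_theta - 0.05 * (3 * cos_theta**2 - 1) / 2
        + rng.normal(0.0, 0.01, size=cos_theta.size))     # synthetic cross section

for l_max in (2, 3, 4):                 # compare different truncation orders
    print(l_max, np.round(fit_legendre_series(cos_theta, dsdo, l_max), 3))
```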
Quantized vortices in the ideal bose gas: a physical realization of random polynomials.
Castin, Yvan; Hadzibabic, Zoran; Stock, Sabine; Dalibard, Jean; Stringari, Sandro
2006-02-03
We propose a physical system allowing one to experimentally observe the distribution of the complex zeros of a random polynomial. We consider a degenerate, rotating, quasi-ideal atomic Bose gas prepared in the lowest Landau level. Thermal fluctuations provide the randomness of the bosonic field and of the locations of the vortex cores. These vortices can be mapped to zeros of random polynomials, and observed in the density profile of the gas.
NASA Technical Reports Server (NTRS)
Crespo da Silva, M. R. M.
1981-01-01
The differential equations of motion, and boundary conditions, describing the flap-lead/lag-torsional motion of a flexible rotor blade with a precone angle and a variable pitch angle, which incorporates a pretwist, are derived via Hamilton's principle. The meaning of inextensionality is discussed. The equations are reduced to a set of three integro-partial differential equations by elimination of the extension variable. The generalized aerodynamic forces are modelled using Greenberg's extension of Theodorsen's strip theory. The equations of motion are systematically expanded into polynomial nonlinearities with the objective of retaining all terms up to third degree. The blade is modeled as a long, slender beam of isotropic Hookean material. Offsets from the blade's elastic axis through its shear center and the axes for the mass, area and aerodynamic centers, radial nonuniformities of the blade's stiffnesses and cross-section properties are considered, and the effect of warp of the cross section is included in the formulation.
Su, Chang; Breitner, Susanne; Schneider, Alexandra; Liu, Liqun; Franck, Ulrich; Peters, Annette; Pan, Xiaochuan
2016-05-01
The link between particulate matter (PM) and cardiovascular morbidity has been investigated in numerous studies. Less evidence exists, however, about how age, gender and season may modify this relationship. The aim of this study was to evaluate the association between ambient PM2.5 (PM ≤ 2.5 µm) and daily hospital emergency room visits (ERV) for cardiovascular diseases in Beijing, China. Moreover, potential effect modification by age, gender, season, air mass origin and the specific period with 2008 Beijing Olympic were investigated. Finally, the temporal lag structure of PM2.5 has also been explored. Daily counts of cardiovascular ERV were obtained from the Peking University Third Hospital from January 2007 to December 2008. Concurrently, data on PM2.5, PM10 (PM ≤ 10 µm), nitrogen dioxide and sulfur dioxide concentrations were obtained from monitoring networks and a fixed monitoring station. Poisson regression models adjusting for confounders were used to estimate immediate, delayed and cumulative air pollution effects. The temporal lag structure was also estimated using polynomial distributed lag (PDL) models. We calculated the relative risk (RR) for overall cardiovascular disease ERV as well as for specific causes of disease; and also investigated the potential modifying effect of age, gender, season, air mass origin and the period with 2008 Beijing Olympics. We observed adverse effects of PM2.5 on cardiovascular ERV--an IQR increase (68 μg/m(3)) in PM2.5 was associated with an overall RR of 1.022 (95% CI 0.990-1.057) obtained from PDL model. Strongest effects of PM2.5 on cardiovascular ERV were found for a lag of 7 days; the respective estimate was 1.012 (95% CI 1.002-1.022). The effects were more pronounced in females and in spring. Arrhythmia and cerebrovascular diseases showed a stronger association with PM2.5. We also found stronger PM-effects for stagnant and southern air masses and the period of Olympics modified the air pollution effects. We observed a rather delayed effect of PM2.5 on cardiovascular ERV, which was modified by gender and season. Our findings provide new evidence about effect modifications and may have implications to improve policy making for particulate air pollution standards in Beijing, China.
Nanoflare vs Footpoint Heating : Observational Signatures
NASA Technical Reports Server (NTRS)
Winebarger, Amy; Alexander, Caroline; Lionello, Roberto; Linker, Jon; Mikic, Zoran; Downs, Cooper
2015-01-01
Time lag analysis shows very long time lags between all channel pairs. Impulsive heating cannot account for these long time lags. 3D simulations of footpoint heating show a similar pattern of time lags (magnitude and distribution) to observations. Time lags and relative peak intensities may be able to differentiate between TNE and impulsive heating solutions. Adding a high-temperature channel (like XRT Be-thin) may improve diagnostics.
Distribution functions of probabilistic automata
NASA Technical Reports Server (NTRS)
Vatan, F.
2001-01-01
Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.
Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko
2014-04-01
The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.
Oh, S R; Kang, I; Oh, M H; Ha, S D
2014-01-01
The inhibitory effect of chlorine (50, 100, and 200 mg/kg) was investigated with and without UV radiation (300 mW·s/cm(2)) on the growth of Listeria monocytogenes in chicken breast meat. Using a polynomial model, predictive growth models were also developed as a function of chlorine concentration, UV exposure, and storage temperature (4, 10, and 15°C). A maximum L. monocytogenes reduction (0.8 log cfu/g) was obtained when combining chlorine at 200 mg/kg and UV at 300 mW·s/cm(2), and a maximum synergistic effect (0.4 log cfu/g) was observed when using chlorine at 100 mg/kg and UV at 300 mW·s/cm(2). Primary models developed for specific growth rate and lag time showed a good fit (R(2) > 0.91), as determined by the reparameterized Gompertz equation. Secondary polynomial models were obtained using nonlinear regression analysis. The developed models were validated with mean square error, bias factor, and accuracy factor, which were 0.0003, 0.96, and 1.11, respectively, for specific growth rate and 7.69, 0.99, and 1.04, respectively, for lag time. Treatment with chlorine and UV did not change the color and texture of chicken breast after 7 d of storage at 4°C. As a result, the combination of chlorine at 100 mg/kg and UV at 300 mW·s/cm(2) appears to be an effective method to inhibit L. monocytogenes growth in broiler carcasses with no negative effects on color and textural quality. Based on the validation results, the predictive models can be used to accurately predict L. monocytogenes growth in chicken breast.
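The reparameterized Gompertz primary model referred to above is commonly written so that its parameters are the asymptote, the maximum specific growth rate, and the lag time, which makes them directly interpretable. The sketch below fits that form to synthetic log-growth data with scipy; the data and starting values are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_max, lam):
    """Reparameterized (Zwietering-type) Gompertz curve for y = ln(N/N0)."""
    return A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 120, 25)                                         # hours
rng = np.random.default_rng(6)
y_obs = gompertz(t, 6.0, 0.15, 30.0) + rng.normal(0, 0.1, t.size)   # synthetic data

(A_hat, mu_hat, lag_hat), _ = curve_fit(gompertz, t, y_obs, p0=[5.0, 0.1, 20.0])
print(f"A = {A_hat:.2f}, mu_max = {mu_hat:.3f} 1/h, lag = {lag_hat:.1f} h")
```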
Burst Statistics Using the Lag-Luminosity Relationship
NASA Technical Reports Server (NTRS)
Band, D. L.; Norris, J. P.; Bonnell, J. T.
2003-01-01
Using the lag-luminosity relation and various BATSE catalogs we create a large catalog of burst redshifts, peak luminosities and emitted energies. These catalogs permit us to evaluate the lag-luminosity relation, and to study the burst energy distribution. We find that this distribution can be described as a power law with an index of alpha = 1.76 +/- 0.05 (95% confidence), close to the alpha = 2 predicted by the original quasi-universal jet model.
Kerri T. Vierling; Charles E. Swift; Andrew T. Hudak; Jody C. Vogeler; Lee A. Vierling
2014-01-01
Vegetation structure quantified by light detection and ranging (LiDAR) can improve understanding of wildlife occupancy and species-richness patterns. However, there is often a time lag between the collection of LiDAR data and wildlife data. We investigated whether a time lag between the LiDAR acquisition and field-data acquisition affected mapped wildlife distributions...
NASA Astrophysics Data System (ADS)
Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.
2014-07-01
Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of least squares linear regression that predicts a single response variable by expanding the predictor variables into an nth-order polynomial. In a curvilinear relationship, each curve has at most one fewer extreme point than the order of the highest term in the polynomial: a quadratic model has either a single maximum or a single minimum, whereas a cubic model can have both a relative maximum and a relative minimum. This study used quadratic modeling techniques to analyze the effects of the environmental factors temperature, relative humidity, and rainfall distribution on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected in an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.
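A minimal sketch of a quadratic (second-order) polynomial regression with a single predictor is given below; the data are synthetic, only one environmental variable is used instead of the study's three, and the fitted optimum is read off from the vertex of the parabola.

```python
# Minimal sketch of quadratic polynomial regression for a single predictor
# (e.g. temperature vs. egg count).  Data and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(0)
temp = rng.uniform(22, 34, 60)                                       # predictor
eggs = 50 - 1.5 * (temp - 28.0) ** 2 + rng.normal(0, 3, temp.size)   # response

# Design matrix [1, T, T^2] and ordinary least-squares fit.
X = np.column_stack([np.ones_like(temp), temp, temp ** 2])
beta, *_ = np.linalg.lstsq(X, eggs, rcond=None)
b0, b1, b2 = beta

# For b2 < 0 the fitted parabola has a single maximum at T* = -b1 / (2*b2).
print("coefficients:", beta)
print("estimated optimum temperature:", -b1 / (2 * b2))
```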
Where are the roots of the Bethe Ansatz equations?
NASA Astrophysics Data System (ADS)
Vieira, R. S.; Lima-Santos, A.
2015-10-01
Changing the variables in the Bethe Ansatz equations (BAE) for the XXZ six-vertex model, we obtained a coupled system of polynomial equations. This provided a direct link between the BAE deduced from the Algebraic Bethe Ansatz (ABA) and the BAE arising from the Coordinate Bethe Ansatz (CBA). For two-magnon states this polynomial system could be decoupled and the solutions given in terms of the roots of certain self-inversive polynomials. From theorems concerning the distribution of the roots of self-inversive polynomials we made a thorough analysis of the two-magnon states, which allowed us to find the location and multiplicity of the Bethe roots in the complex plane, to discuss the completeness and singularities of Bethe's equations and the ill-founded string hypothesis concerning the location of their roots, and to find an interesting connection between the BAE and Salem's polynomials.
Implications of Lag-Luminosity Relationship for Unified GRB Paradigms
NASA Technical Reports Server (NTRS)
Norris, J. P.; White, Nicholas E. (Technical Monitor)
2002-01-01
Spectral lags (tau(sub lag)) are deduced for 1437 long (T(sub 90) greater than 2 s) BATSE gamma-ray bursts (GRBs) with peak flux F(sub p) greater than 0.25 photons cm(sup -2)/s, near the BATSE trigger threshold. The lags are modeled to approximate the observed distribution in the F(sub p)-tau(sub lag) plane, realizing a noise-free representation. Assuming a two-branch lag-luminosity relationship, the lags are self-consistently corrected for cosmological effects to yield distributions in luminosity, distance, and redshift. The results have several consequences for GRB populations and for unified gamma-ray/afterglow scenarios which would account for afterglow break times and gamma-ray spectral evolution in terms of jet opening angle, viewing angle, or a profiled jet with variable Lorentz factor: A component of the burst sample is identified - those with few, wide pulses, lags of a few tenths to several seconds, and soft spectra - whose Log[N]-Log[F(sub p)] distribution approximates a -3/2 power-law, suggesting homogeneity and thus relatively nearby sources. The proportion of these long-lag bursts increases from negligible among bright BATSE bursts to approx. 50% at trigger threshold. Bursts with very long lags, approximately 1-2 s less than tau(sub lag) less than 10 s, show a tendency to concentrate near the Supergalactic Plane with a quadrupole moment of approx. -0.10 +/- 0.04. GRB 980425 (SN 1998bw) is a member of this subsample of approx. 90 bursts with estimated distances less than 100 Mpc. The frequency of the observed ultra-low luminosity bursts is approx. 1/4 that of SNe Ib/c within the same volume. If truly nearby, the core-collapse events associated with these GRBs might produce gravitational radiation detectable by LIGO-II. Such nearby bursts might also help explain flattening of the cosmic ray spectrum at ultra-high energies, as observed by AGASA.
Random matrices with external source and the asymptotic behaviour of multiple orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aptekarev, Alexander I; Lysov, Vladimir G; Tulyakov, Dmitrii N
2011-02-28
Ensembles of random Hermitian matrices with a distribution measure defined by an anharmonic potential perturbed by an external source are considered. The limiting characteristics of the eigenvalue distribution of the matrices in these ensembles are related to the asymptotic behaviour of a certain system of multiple orthogonal polynomials. Strong asymptotic formulae are derived for this system. As a consequence, for matrices in this ensemble the limit mean eigenvalue density is found, and a variational principle is proposed to characterize this density. Bibliography: 35 titles.
Beyond the excised ensemble: modelling elliptic curve L-functions with random matrices
NASA Astrophysics Data System (ADS)
Cooper, I. A.; Morris, Patrick W.; Snaith, N. C.
2016-02-01
The ‘excised ensemble’, a random matrix model for the zeros of quadratic twist families of elliptic curve L-functions, was introduced by Dueñez et al. (2012 J. Phys. A: Math. Theor. 45 115207). The excised model is motivated by a formula for central values of these L-functions in a paper by Kohnen and Zagier (1981 Invent. Math. 64 175-98). This formula indicates that for a finite set of L-functions from a family of quadratic twists, the central values are all either zero or are greater than some positive cutoff. The excised model imposes this same condition on the central values of characteristic polynomials of matrices from {SO}(2N). Strangely, the cutoff on the characteristic polynomials that results in a convincing model for the L-function zeros is significantly smaller than that which we would obtain by naively transferring Kohnen and Zagier’s cutoff to the {SO}(2N) ensemble. In the current paper we investigate a modification to the excised model. It lacks the simplicity of the original excised ensemble, but it serves to explain the reason for the unexpectedly low cutoff in the original excised model. Additionally, the distribution of central L-values is ‘choppier’ than the distribution of characteristic polynomials, in the sense that it is a superposition of a series of peaks: the characteristic polynomial distribution is a smooth approximation to this. The excised model did not attempt to incorporate these successive peaks, only the initial cutoff. Here we experiment with including some of the structure of the L-value distribution. The conclusion is that a critical feature of a good model is to associate the correct mass to the first peak of the L-value distribution.
Erdeljić, Viktorija; Francetić, Igor; Bošnjak, Zrinka; Budimir, Ana; Kalenić, Smilja; Bielen, Luka; Makar-Aušperger, Ksenija; Likić, Robert
2011-05-01
The relationship between antibiotic consumption and selection of resistant strains has been studied mainly by employing conventional statistical methods. A time delay in effect must be anticipated and this has rarely been taken into account in previous studies. Therefore, distributed lags time series analysis and simple linear correlation were compared in their ability to evaluate this relationship. Data on monthly antibiotic consumption for ciprofloxacin, piperacillin/tazobactam, carbapenems and cefepime as well as Pseudomonas aeruginosa susceptibility were retrospectively collected for the period April 2006 to July 2007. Using distributed lags analysis, a significant temporal relationship was identified between ciprofloxacin, meropenem and cefepime consumption and the resistance rates of P. aeruginosa isolates to these antibiotics. This effect was lagged for ciprofloxacin and cefepime [1 month (R=0.827, P=0.039) and 2 months (R=0.962, P=0.001), respectively] and was simultaneous for meropenem (lag 0, R=0.876, P=0.002). Furthermore, a significant concomitant effect of meropenem consumption on the appearance of multidrug-resistant P. aeruginosa strains (resistant to three or more representatives of classes of antibiotics) was identified (lag 0, R=0.992, P<0.001). This effect was not delayed and it was therefore identified both by distributed lags analysis and the Pearson's correlation coefficient. Correlation coefficient analysis was not able to identify relationships between antibiotic consumption and bacterial resistance when the effect was delayed. These results indicate that the use of diverse statistical methods can yield significantly different results, thus leading to the introduction of possibly inappropriate infection control measures. Copyright © 2010 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.
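The simplest lag-aware check mentioned here, scanning correlations at a range of lags, can be sketched as follows; this toy cross-correlation scan on synthetic monthly data is not the distributed-lags time-series procedure used in the study.

```python
# Toy lag scan: Pearson correlation between antibiotic consumption and
# resistance rate at lags of 0-3 months.  Synthetic monthly data; this is
# not the distributed-lags time-series analysis used in the study.
import numpy as np

rng = np.random.default_rng(1)
months = 16
use = rng.normal(100, 15, months)                    # monthly consumption (synthetic)
resistance = np.empty(months)
resistance[:2] = 20
# resistance responds to consumption two months earlier, plus noise
resistance[2:] = 15 + 0.08 * use[:-2] + rng.normal(0, 1, months - 2)

for lag in range(4):
    x = use[: months - lag]          # consumption shifted back by `lag` months
    y = resistance[lag:]
    r = np.corrcoef(x, y)[0, 1]
    print(f"lag {lag} months: r = {r:+.2f}")
```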
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in a classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.
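The output probability of a DQC1-type circuit with a depolarized clean qubit can be checked against the closed form p(0) = 1/2 + ε·Re(Tr U)/2^(n+1), where ε is the polarization; the sketch below does this with a small density-matrix simulation for an arbitrary random unitary and invented sizes, and illustrates only the model itself, not the paper's complexity-theoretic arguments.

```python
# Hedged sketch: output probability of a depolarized one-clean-qubit (DQC1)
# circuit for a small random unitary U, checked against the closed form
# p(0) = 1/2 + polarization * Re(Tr U) / 2^(n+1).  Sizes and the unitary are
# arbitrary choices for illustration.
import numpy as np

def random_unitary(dim, rng):
    """Random unitary from the phase-fixed QR decomposition of a complex Gaussian."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(7)
n, eps = 3, 0.4                      # n "dirty" qubits, clean-qubit polarization
dim = 2 ** n
U = random_unitary(dim, rng)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I_n = np.eye(dim)
H_full = np.kron(H, I_n)
CU = np.block([[I_n, np.zeros((dim, dim))], [np.zeros((dim, dim)), U]])

rho_clean = np.diag([(1 + eps) / 2, (1 - eps) / 2])
rho = np.kron(rho_clean, I_n / dim)          # depolarized clean qubit x maximally mixed
rho = H_full @ rho @ H_full.conj().T
rho = CU @ rho @ CU.conj().T
rho = H_full @ rho @ H_full.conj().T

P0 = np.kron(np.diag([1.0, 0.0]), I_n)       # projector onto clean qubit |0>
p_sim = np.real(np.trace(P0 @ rho))
p_formula = 0.5 + eps * np.real(np.trace(U)) / 2 ** (n + 1)
print(p_sim, p_formula)                      # the two values agree
```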
Generalised quasiprobability distribution for Hermite polynomial squeezed states
NASA Astrophysics Data System (ADS)
Datta, Sunil; D'Souza, Richard
1996-02-01
Generalized quasiprobability distributions (QPD) for Hermite polynomial states are presented. These states are solutions of an eigenvalue equation which is quadratic in creation and annihilation operators. Analytical expressions for the QPD are presented for some special cases of the eigenvalues. For large squeezing these analytical expressions for the QPD take the form of a finite series in even Hermite functions. These expressions very transparently exhibit the transition between the P, Q and W functions corresponding to the change of the s-parameter of the QPD. Further, they clearly show the two-photon nature of the processes involved in the generation of these states.
The Measurement of Pressure Through Tubes in Pressure Distribution Tests
NASA Technical Reports Server (NTRS)
Hemke, Paul E
1928-01-01
The tests described in this report were made to determine the error caused by using small tubes to connect orifices on the surface of aircraft to central pressure capsules in making pressure distribution tests. Aluminum tubes of 3/16-inch inside diameter were used to determine this error. Lengths from 20 feet to 226 feet and pressures whose maxima varied from 2 inches to 140 inches of water were used. Single-pressure impulses for which the time of rise of pressure from zero to a maximum varied from 0.25 second to 3 seconds were investigated. The results show that the pressure recorded at the capsule on the far end of the tube lags behind the pressure at the orifice end and experiences also a change in magnitude. For the values used in these tests the time lag and pressure change vary principally with the time of rise of pressure from zero to a maximum and the tube length. Curves are constructed showing the time lag and pressure change. Empirical formulas are also given for computing the time lag. Analysis of pressure distribution tests made on airplanes in flight shows that the recorded pressures are slightly higher than the pressures at the orifice and that the time lag is negligible. The apparent increase in pressure is usually within the experimental error, but in the case of the modern pursuit type of airplane the pressure increase may be 5 per cent. For pressure-distribution tests on airships the analysis shows that the time lag and pressure change may be neglected.
Un, M Kerem; Kaghazchi, Hamed
2018-01-01
When a signal is initiated in the nerve, it is transmitted along each nerve fiber via an action potential (called the single fiber action potential (SFAP)) which travels with a velocity that is related to the diameter of the fiber. The additive superposition of SFAPs constitutes the compound action potential (CAP) of the nerve. The fiber diameter distribution (FDD) in the nerve can be computed from the CAP data by solving an inverse problem. This is usually achieved by dividing the fibers into a finite number of diameter groups and solving a corresponding linear system to optimize the FDD. However, the number of fibers in a nerve can run into the thousands, and it is possible to assume a continuous distribution for the fiber diameters, which leads to a gradient optimization problem. In this paper, we have evaluated this continuous approach to the solution of the inverse problem. We have utilized an analytical function for the SFAP and assumed a polynomial form for the FDD. The inverse problem involves the optimization of polynomial coefficients to obtain the best estimate for the FDD. We have observed that an eighth order polynomial for the FDD can capture both unimodal and bimodal fiber distributions present in vivo, even in the case of noisy CAP data. The assumed polynomial form of the FDD regularizes the ill-conditioned inverse problem and produces good results.
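Because the CAP is linear in the FDD, writing the FDD as a polynomial in the diameter turns the inverse problem into a linear least-squares fit for the polynomial coefficients. The sketch below illustrates that structure with an invented SFAP template, synthetic diameters and a synthetic bimodal FDD; none of the numerical choices come from the paper.

```python
# Hedged sketch of the polynomial-FDD inverse problem: with the CAP modelled as
# a weighted sum of diameter-dependent SFAP templates and the FDD written as a
# polynomial in the diameter, the polynomial coefficients follow from a linear
# least-squares fit.  The SFAP template, diameters and "true" FDD are invented.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 400)                      # ms
d = np.linspace(2, 14, 60)                       # fibre diameters, um

def sfap(t, diam):
    """Toy SFAP template: arrival time ~ 1/diameter, fixed biphasic shape."""
    delay = 20.0 / diam                          # larger fibres conduct faster
    return (t - delay) * np.exp(-((t - delay) ** 2) / 0.5)

A = np.array([sfap(t, di) for di in d]).T        # (n_time, n_diam) template matrix

# "True" bimodal FDD and the resulting noisy CAP.
fdd_true = np.exp(-((d - 5) ** 2) / 2) + 0.6 * np.exp(-((d - 11) ** 2) / 3)
cap = A @ fdd_true + rng.normal(0, 0.02, t.size)

# FDD(d) = V @ c, a polynomial of degree 8 in d  =>  cap ~= (A @ V) @ c.
V = np.vander(d, N=9, increasing=True)
coeffs, *_ = np.linalg.lstsq(A @ V, cap, rcond=None)
fdd_est = V @ coeffs
print("max abs FDD error:", np.max(np.abs(fdd_est - fdd_true)))
```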
NASA Astrophysics Data System (ADS)
Liu, Huawei; Zheng, Shu; Zhou, Huaichun; Qi, Chaobo
2016-02-01
A generalized method to estimate the two-dimensional (2D) distribution of temperature and wavelength-dependent emissivity in a sooty flame from spectroscopic radiation intensities is proposed in this paper. The method uses a Newton-type iteration to solve for the unknown coefficients in the polynomial relationship between the emissivity and the wavelength, as well as for the unknown temperature. Polynomial functions of increasing order are examined, and the final results are determined once they converge. Numerical simulation on a fictitious flame with wavelength-dependent absorption coefficients shows good performance, with relative errors of less than 0.5% in the average temperature. Furthermore, a hyper-spectral imaging device is introduced to measure an ethylene/air laminar diffusion flame with the proposed method. The order of the polynomial function is selected as 2, because each further increase in the order changes the estimated temperature by less than 20 K. For the ethylene laminar diffusion flame with 194 ml min-1 C2H4 and 284 L min-1 air studied in this paper, the 2D distribution of average temperature estimated along the line of sight is similar to, but smoother than, that of the local temperature given in the references, and the 2D distribution of emissivity shows a cumulative effect of the absorption coefficient along the line of sight. It also shows that the emissivity of the flame decreases as the wavelength increases. The emissivity at a wavelength of 400 nm is about 2.5 times that at 1000 nm for a typical line of sight in the flame, with the same trend as the wavelength dependence of the soot absorption coefficient.
Relation between germination and mycelium growth of individual fungal spores.
Gougouli, Maria; Koutsoumanis, Konstantinos P
2013-02-15
The relation between germination time and lag time of mycelium growth of individual spores was studied by combining microscopic and macroscopic techniques. The radial growth of a large number (100-200) of Penicillium expansum and Aspergillus niger mycelia originating from single spores was monitored macroscopically at isothermal conditions ranging from 0 to 30°C and 10 to 41.5°C, respectively. The radial growth curve for each mycelium was fitted to a linear model for the estimation of mycelium lag time. The results showed that the lag time varied significantly among single spores. The cumulative frequency distributions of the lag times were fitted to the modified Gompertz model and compared with the respective distributions for the germination time, which were obtained microscopically. The distributions of the measured mycelium lag time were found to be similar to the germination time distributions under the same conditions but shifted in time, with the lag times showing a significant delay compared to germination times. A numerical comparison was also performed based on the distribution parameters λ(m) and λ(g), which indicate the time required for the spores to complete the lag phase and to start the germination process, respectively. The relative differences %(λ(m)-λ(g))/λ(m) were not found to be significantly affected by the temperatures tested, with mean values of 72.5±5.1 and 60.7±2.1 for P. expansum and A. niger, respectively. In order to investigate the source of the above difference, a time-lapse microscopy method was developed providing videos of the behavior of a single fungal spore from germination until mycelium formation. The distances of the apexes of the first germ tubes that emerged from the swollen spore were measured in each frame of the videos and these data were expressed as a function of time. The results showed that in the early hyphal development the measured radii appear to increase exponentially, until a certain time after which growth becomes linear. The two phases of hyphal development can explain the difference between germination and lag time. Since the lag time is estimated from the extrapolation of the regression line of the linear part of the graph only, its value is significantly higher than the germination time, t(G). The relation of germination and lag time was further investigated by comparing their temperature dependence using the Cardinal Model with Inflection. The estimated values of the cardinal parameters (T(min), T(opt), and T(max)) for 1/λ(g) were found to be very close to the respective values for 1/λ(m), indicating similar temperature dependence between them. Copyright © 2012 Elsevier B.V. All rights reserved.
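The Cardinal Model with Inflection mentioned above is commonly written in the Rosso form; the sketch below evaluates such a model for a rate such as 1/λ as a function of temperature, with cardinal values and the optimal rate invented for illustration rather than taken from the study.

```python
# Hedged sketch of the Cardinal Model with Inflection (Rosso-type form) used to
# describe the temperature dependence of 1/lag.  Cardinal values are invented.
import numpy as np

def ctmi(T, Tmin, Topt, Tmax, rate_opt):
    """Rosso cardinal temperature model with inflection; returns rate(T)."""
    T = np.asarray(T, dtype=float)
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2 * T))
    rate = rate_opt * num / den
    return np.where((T > Tmin) & (T < Tmax), rate, 0.0)   # zero outside the range

# Example: 1/lag (1/h) for hypothetical cardinal temperatures.
T = np.arange(0, 41, 5)
print(ctmi(T, Tmin=1.0, Topt=25.0, Tmax=35.0, rate_opt=0.05))
```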
Repetition and lag effects in movement recognition.
Hall, C R; Buckolz, E
1982-03-01
Whether repetition and lag improve the recognition of movement patterns was investigated. Recognition memory was tested for one repetition, two-repetitions massed, and two-repetitions distributed with movement patterns at lags of 3, 5, 7, and 13. Recognition performance was examined both immediately afterwards and following a 48 hour delay. Both repetition and lag effects failed to be demonstrated, providing some support for the claim that memory is unaffected by repetition at a constant level of processing (Craik & Lockhart, 1972). There was, as expected, a significant decrease in recognition memory following the retention interval, but this appeared unrelated to repetition or lag.
Dagnas, Stéphane; Gougouli, Maria; Onno, Bernard; Koutsoumanis, Konstantinos P; Membré, Jeanne-Marie
2017-01-02
The inhibitory effect of water activity (a(w)) and storage temperature on single spore lag times of Aspergillus niger, Eurotium repens (Aspergillus pseudoglaucus) and Penicillium corylophilum strains isolated from spoiled bakery products was quantified. A full factorial design was set up for each strain. Data were collected at levels of a(w) varying from 0.80 to 0.98 and temperature from 15 to 35°C. Experiments were performed on malt agar, at pH 5.5. When growth was observed, ca 20 individual growth kinetics per condition were recorded up to 35 days. The radius of the colony vs. time was then fitted with the Buchanan primary model. For each experimental condition, lag time variability was observed; it was characterized by its mean, standard deviation (sd) and 5th percentile, after a Normal distribution fit. As the environmental conditions became stressful (e.g. lower storage temperature and a(w)), the mean and sd of the single spore lag time distribution increased, indicating longer lag times and higher variability. The relationship between mean and sd followed a monotonic but nonlinear pattern, identical whatever the species. Next, secondary models were deployed to estimate the cardinal values (minimal, optimal and maximal temperatures, minimal water activity below which no growth is observed) for the three species. This confirmed the observation made from the raw data analysis: concerning the temperature effect, A. niger behaviour was significantly different from E. repens and P. corylophilum, with a T(opt) of 37.4°C (standard deviation 1.4°C) instead of 27.1°C (1.4°C) and 25.2°C (1.2°C), respectively. Concerning the a(w) effect, of the three mould species, E. repens was the one able to grow at the lowest a(w) (minimum a(w) estimated at 0.74 (0.02)). Finally, results obtained with single spores were compared to findings from a previous study carried out at the population level (Dagnas et al., 2014). For short lag times (≤5 days), there was no difference between the lag time of the population (ca 2000 spores inoculated in one spot) and the mean (or the 5th percentile) of the single spore lag time distribution. In contrast, when the lag time was longer, i.e. under more stressful conditions, there was a discrepancy between individual and population lag times (population lag times shorter than the 5th percentiles of the single spore lag time distribution), confirming a stochastic process. Finally, the temperature cardinal values estimated with single spores were found to be similar to those obtained at the population level, whatever the species. All these findings will be used to better describe mould spore lag time variability and then to more accurately predict bakery product shelf-life. Copyright © 2016 Elsevier B.V. All rights reserved.
Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials
Corteel, Sylvie; Williams, Lauren K.
2010-01-01
We introduce some combinatorial objects called staircase tableaux, which have cardinality 4(n)n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities α and γ, and they may exit and enter at the right with probabilities β and δ. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials. PMID:20348417
Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials.
Corteel, Sylvie; Williams, Lauren K
2010-04-13
We introduce some combinatorial objects called staircase tableaux, which have cardinality 4(n)n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities alpha and gamma, and they may exit and enter at the right with probabilities beta and delta. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials.
Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise
2011-04-01
The Laguerre polynomials are generated from a recurrence relation, and the quadrature nodes and weights are calculated from the eigenvalues and eigenvectors of the associated matrix, so that a general-purpose numerical integration routine is not required (cf. Press et al., Numerical Recipes in Fortran, Second Edition, Cambridge University Press, 1992; W. Gautschi, Orthogonal Polynomials (in Matlab)).
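A generic sketch of the matrix construction alluded to (the Golub-Welsch approach, not necessarily the report's exact implementation) is given below: Gauss-Laguerre nodes and weights are obtained from the eigenvalues and first eigenvector components of the tridiagonal Jacobi matrix built from the Laguerre recurrence coefficients.

```python
# Hedged sketch: Gauss-Laguerre nodes/weights via the Golub-Welsch construction
# (eigenvalues/eigenvectors of the Jacobi matrix of the Laguerre recurrence).
# Written as a generic illustration of the approach described above.
import numpy as np

def gauss_laguerre(n):
    """Nodes and weights for integrals of the form int_0^inf f(x) e^{-x} dx."""
    k = np.arange(n)
    diag = 2.0 * k + 1.0                 # recurrence coefficients a_k
    off = np.arange(1, n, dtype=float)   # sqrt(b_k) = k for the Laguerre weight
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = vecs[0, :] ** 2            # mu_0 = int_0^inf e^{-x} dx = 1
    return nodes, weights

x, w = gauss_laguerre(20)
# Sanity check: int_0^inf x^3 e^{-x} dx = 3! = 6
print(np.dot(w, x ** 3))
# For comparison, numpy ships an equivalent rule:
# print(np.polynomial.laguerre.laggauss(20))
```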
Least Squares Approximation By G1 Piecewise Parametric Cubes
1993-12-01
Approved for public release; distribution is unlimited. Parametric piecewise cubic polynomials are used throughout ... [least-squares fitting of a] piecewise parametric cubic polynomial to a sequence of ordered points in the plane. Cubic Bézier curves are used as a basis. The parameterization, the ...
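The core least-squares step can be sketched as follows: fit a single cubic Bézier curve to ordered planar points under a fixed chord-length parameterization. The data are synthetic, and the G1 piecewise construction and parameter refinement discussed in the thesis are not shown.

```python
# Hedged sketch: least-squares fit of a single cubic Bezier curve to ordered
# 2-D points using a chord-length parameterization.  This shows only the basic
# least-squares step, not the G1 piecewise construction; the data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
s = np.linspace(0, np.pi, 40)
pts = np.column_stack([s, np.sin(s)]) + rng.normal(0, 0.01, (40, 2))

# Chord-length parameter values in [0, 1].
chords = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
t = chords / chords[-1]

# Cubic Bernstein basis evaluated at the parameter values.
B = np.column_stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])

# Least-squares control points (x and y columns fitted at once).
ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
residual = np.linalg.norm(B @ ctrl - pts)
print("control points:\n", ctrl)
print("RMS residual:", residual / np.sqrt(len(pts)))
```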
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion
NASA Astrophysics Data System (ADS)
Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.
2018-02-01
We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.
Hilbert's 17th Problem and the Quantumness of States
NASA Astrophysics Data System (ADS)
Korbicz, J. K.; Cirac, J. I.; Wehr, Jan; Lewenstein, M.
2005-04-01
A state of a quantum system can be regarded as classical (quantum) with respect to measurements of a set of canonical observables if and only if there exists (does not exist) a well-defined, positive phase-space distribution, the so-called Glauber-Sudarshan P representation. We derive a family of classicality criteria that require the averages of positive functions calculated using the P representation to be positive. For polynomial functions, these criteria are related to Hilbert’s 17th problem, and have the physical meaning of generalized squeezing conditions; alternatively, they may be interpreted as nonclassicality witnesses. We show that every generic nonclassical state can be detected by a polynomial that is a sum-of-squares of other polynomials. We introduce a very natural hierarchy of states regarding their degree of quantumness, which we relate to the minimal degree of a sum-of-squares polynomial that detects them.
Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.
Kulkarni, Rishikesh; Rastogi, Pramod
2018-02-01
A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.
Zhao, Chunyu; Burge, James H
2007-12-24
Zernike polynomials provide a well-known orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
NASA Astrophysics Data System (ADS)
Delfani, M. R.; Latifi Shahandashti, M.
2017-09-01
In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.
Modeling of photocurrent and lag signals in amorphous selenium x-ray detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siddiquee, Sinchita; Kabir, M. Z., E-mail: kabir@encs.concordia.ca
2015-07-15
A mathematical model for the transient photocurrent and lag signal in x-ray imaging detectors has been developed by considering charge carrier trapping and detrapping in the energy-distributed defect states under exponentially distributed carrier generation across the photoconductor. The model for the transient and steady-state carrier distributions, and hence the photocurrent, has been developed by solving the carrier continuity equation for both holes and electrons. The residual (commonly known as lag signal) current is modeled by solving the trapping rate equations considering the thermal release and trap filling effects. The model is applied to amorphous selenium (a-Se) detectors for both chest radiography and mammography. The authors analyze the dependence of the residual current on various factors, such as x-ray exposure, applied electric field, and temperature. The electron trapping and detrapping mostly determines the residual current in a-Se detectors. The lag signal is more prominent in chest radiographic detectors than in mammographic detectors. The model calculations are compared with the published experimental data and show very good agreement.
Moutsopoulou, Karolina; Waszak, Florian
2013-05-01
It has been shown that in associative learning it is possible to disentangle the effects caused on behaviour by the associations between a stimulus and a classification (S-C) and the associations between a stimulus and the action performed towards it (S-A). Such evidence has been provided using ex-Gaussian distribution analysis to show that different parameters of the reaction time distribution reflect the different processes. Here, using this method, we investigate another difference between these two types of associations: What is the relative durability of these associations across time? Using a task-switching paradigm and by manipulating the lag between the point of the creation of the associations and the test phase, we show that S-A associations have stronger effects on behaviour when the lag between the two repetitions of a stimulus is short. However, classification learning affects behaviour not only in short-term lags but also (and equally so) when the lag between prime and probe is long and the same stimuli are repeatedly presented within a different classification task, demonstrating a remarkable durability of S-C associations.
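Ex-Gaussian analysis of the kind referred to here can be sketched with scipy's exponnorm distribution (an exponentially modified Gaussian); in scipy's parameterization K = tau/sigma, loc = mu and scale = sigma. The reaction times below are simulated, and the "true" parameter values are invented.

```python
# Hedged sketch: fitting an ex-Gaussian (exponentially modified Gaussian) to
# simulated reaction times with scipy.stats.exponnorm.  In scipy's
# parameterization K = tau / sigma, loc = mu and scale = sigma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
mu, sigma, tau = 450.0, 60.0, 120.0            # ms; invented "true" values
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rts)
print(f"mu ~= {loc:.0f} ms, sigma ~= {scale:.0f} ms, tau ~= {K * scale:.0f} ms")
```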
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
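A minimal sketch of the von Neumann mean-square successive-difference idea is given below: for each trial lag the two standardized light curves are merged, ordered in time, and the estimator is minimized over the trial lags. The light curves are synthetic, and measurement errors and flux normalization are treated in a deliberately simplified way.

```python
# Hedged sketch of a von Neumann (mean-square successive-difference) lag
# estimator: for each trial lag the two standardized light curves are merged,
# time-ordered, and the estimator is minimized over the trial lags.  The light
# curves are synthetic and the treatment of errors/normalization is simplified.
import numpy as np

rng = np.random.default_rng(2)
true_lag = 12.0

# Smooth driving signal sampled on two different irregular time grids.
def signal(t):
    return np.sin(2 * np.pi * t / 60.0) + 0.5 * np.sin(2 * np.pi * t / 23.0)

t1 = np.sort(rng.uniform(0, 300, 120))
t2 = np.sort(rng.uniform(0, 300, 110))
f1 = signal(t1) + rng.normal(0, 0.05, t1.size)
f2 = signal(t2 - true_lag) + rng.normal(0, 0.05, t2.size)   # delayed echo

def von_neumann(t1, f1, t2, f2, lag):
    """Mean-square successive difference of the merged, time-shifted series."""
    z1 = (f1 - f1.mean()) / f1.std()
    z2 = (f2 - f2.mean()) / f2.std()
    t = np.concatenate([t1, t2 - lag])       # undo the trial lag on curve 2
    z = np.concatenate([z1, z2])
    order = np.argsort(t)
    dz = np.diff(z[order])
    return np.mean(dz ** 2)

lags = np.arange(-30, 31, 0.5)
scores = [von_neumann(t1, f1, t2, f2, lag) for lag in lags]
print("estimated lag:", lags[int(np.argmin(scores))])   # should be near 12
```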
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
NASA Astrophysics Data System (ADS)
Ahlfeld, R.; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
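The moment-based step at the heart of arbitrary polynomial chaos can be sketched as follows: the coefficients of the polynomials orthonormal with respect to a data set follow from the inverse Cholesky factor of its Hankel matrix of raw moments. This is a generic illustration rather than the SAMBA implementation, the input sample is arbitrary, and the subsequent Gaussian-quadrature and sparse-grid steps are omitted.

```python
# Hedged sketch of the moment-based step behind arbitrary polynomial chaos:
# build the Hankel matrix of raw sample moments and obtain the coefficients of
# the orthonormal polynomials from its (inverse) Cholesky factor.  The input
# "histogram" is an arbitrary sample; this is an illustration, not SAMBA itself.
import numpy as np

rng = np.random.default_rng(4)
data = rng.lognormal(mean=0.0, sigma=0.5, size=50_000)   # arbitrary input data set
degree = 4

# Raw moments mu_0 .. mu_{2*degree} and the Hankel (moment) matrix.
mu = np.array([np.mean(data ** k) for k in range(2 * degree + 1)])
M = np.array([[mu[i + j] for j in range(degree + 1)] for i in range(degree + 1)])

# M = L L^T  =>  rows of inv(L) are the monomial coefficients of the
# orthonormal polynomials p_0 .. p_degree (p_i has degree i).
L = np.linalg.cholesky(M)
C = np.linalg.inv(L)

# Empirical check of orthonormality: E[p_i(X) p_j(X)] ~ identity matrix.
V = np.vander(data, N=degree + 1, increasing=True)       # columns 1, x, x^2, ...
P = V @ C.T                                              # evaluated polynomials
print(np.round(P.T @ P / data.size, 3))
```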
Huang, Jixia; Wang, Jinfeng; Yu, Weiwei
2014-04-11
This research quantifies the lag effects of temperature on cardiovascular disease (CVD) and identifies vulnerable groups in Changsha, a subtropical climate zone of China. A Poisson regression model within a distributed lag nonlinear model framework was used to examine the lag effects of cold- and heat-related CVD mortality. The lag effect for heat-related CVD mortality lasted just 0-3 days. In contrast, we observed a statistically significant association at lags of 10-25 days for cold-related CVD mortality. Low temperatures at lags of 0-2 days increased the mortality risk for those ≥65 years and for females. For all ages, the cumulative effect of cold-related CVD mortality was 6.6% (95% CI: 5.2%-8.2%) over 30 lag days, while that of heat-related CVD mortality was 4.9% (95% CI: 2.0%-7.9%) over 3 lag days. We found that in Changsha city the lag effect of hot temperatures is short while the lag effect of cold temperatures is long. Females and older people were more sensitive to extreme hot and cold temperatures than males and younger people.
Liu, Liqun; Breitner, Susanne; Pan, Xiaochuan; Franck, Ulrich; Leitte, Arne Marian; Wiedensohler, Alfred; von Klot, Stephanie; Wichmann, H-Erich; Peters, Annette; Schneider, Alexandra
2011-05-25
Associations between air temperature and mortality have been consistently observed in Europe and the United States; however, there is a lack of studies for Asian countries. Our study investigated the association between air temperature and cardio-respiratory mortality in the urban area of Beijing, China. Death counts for cardiovascular and respiratory diseases for adult residents (≥15 years), meteorological parameters and concentrations of particulate air pollution were obtained from January 2003 to August 2005. The effects of two-day and 15-day average temperatures were estimated by Poisson regression models, controlling for time trend, relative humidity and other confounders if necessary. Effects were explored for warm (April to September) and cold periods (October to March) separately. The lagged effects of daily temperature were investigated by polynomial distributed lag (PDL) models. We observed a J-shaped exposure-response function only for 15-day average temperature and respiratory mortality in the warm period, with 21.3°C as the threshold temperature. All other exposure-response functions could be considered as linear. In the warm period, a 5°C increase of two-day average temperature was associated with a RR of 1.098 (95% confidence interval (95%CI): 1.057-1.140) for cardiovascular and 1.134 (95%CI: 1.050-1.224) for respiratory mortality; a 5°C decrease of 15-day average temperature was associated with a RR of 1.040 (95%CI: 0.990-1.093) for cardiovascular mortality. In the cold period, a 5°C increase of two-day average temperature was associated with a RR of 1.149 (95%CI: 1.078-1.224) for respiratory mortality; a 5°C decrease of 15-day average temperature was associated with a RR of 1.057 (95%CI: 1.022-1.094) for cardiovascular mortality. The effects remained robust after considering particles as additional confounders. Both increases and decreases in air temperature are associated with an increased risk of cardiovascular mortality. The effects of heat were immediate while the ones of cold became predominant with longer time lags. Increases in air temperature are also associated with an immediate increased risk of respiratory mortality.
2011-01-01
Background Associations between air temperature and mortality have been consistently observed in Europe and the United States; however, there is a lack of studies for Asian countries. Our study investigated the association between air temperature and cardio-respiratory mortality in the urban area of Beijing, China. Methods Death counts for cardiovascular and respiratory diseases for adult residents (≥15 years), meteorological parameters and concentrations of particulate air pollution were obtained from January 2003 to August 2005. The effects of two-day and 15-day average temperatures were estimated by Poisson regression models, controlling for time trend, relative humidity and other confounders if necessary. Effects were explored for warm (April to September) and cold periods (October to March) separately. The lagged effects of daily temperature were investigated by polynomial distributed lag (PDL) models. Results We observed a J-shaped exposure-response function only for 15-day average temperature and respiratory mortality in the warm period, with 21.3°C as the threshold temperature. All other exposure-response functions could be considered as linear. In the warm period, a 5°C increase of two-day average temperature was associated with a RR of 1.098 (95% confidence interval (95%CI): 1.057-1.140) for cardiovascular and 1.134 (95%CI: 1.050-1.224) for respiratory mortality; a 5°C decrease of 15-day average temperature was associated with a RR of 1.040 (95%CI: 0.990-1.093) for cardiovascular mortality. In the cold period, a 5°C increase of two-day average temperature was associated with a RR of 1.149 (95%CI: 1.078-1.224) for respiratory mortality; a 5°C decrease of 15-day average temperature was associated with a RR of 1.057 (95%CI: 1.022-1.094) for cardiovascular mortality. The effects remained robust after considering particles as additional confounders. Conclusions Both increases and decreases in air temperature are associated with an increased risk of cardiovascular mortality. The effects of heat were immediate while the ones of cold became predominant with longer time lags. Increases in air temperature are also associated with an immediate increased risk of respiratory mortality. PMID:21612647
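A minimal sketch of a polynomial distributed lag (PDL) Poisson regression of the kind used above is given below: lag coefficients for daily temperature over 0-14 days are constrained to a third-degree polynomial, so the GLM estimates only four lag parameters. The daily counts and temperatures are simulated, the lag length and polynomial degree are arbitrary, and confounder adjustment (trend, humidity) is omitted.

```python
# Hedged sketch of a polynomial distributed lag (PDL) Poisson regression:
# lag coefficients for daily temperature over 0-14 days are constrained to a
# 3rd-degree polynomial, so the GLM estimates only 4 lag parameters.  The daily
# counts and temperatures are simulated; confounders (trend, humidity) omitted.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_days, max_lag, degree = 1000, 14, 3
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 2, n_days)

# "True" lag structure: effect decays over roughly 10 days.
true_beta = 0.02 * np.exp(-np.arange(max_lag + 1) / 4.0)
lin_pred = np.array([np.log(5) + np.dot(true_beta, temp[t - max_lag:t + 1][::-1])
                     for t in range(max_lag, n_days)])
deaths = rng.poisson(np.exp(lin_pred - true_beta.sum() * 20))  # centred counts

# Lagged temperature matrix X[t, l] = temp(t - l), then the PDL transform
# Z[:, k] = sum_l l^k * X[:, l]  (so that beta_l = sum_k theta_k * l^k).
X = np.array([temp[t - max_lag:t + 1][::-1] for t in range(max_lag, n_days)])
L = np.arange(max_lag + 1)
Z = X @ np.vander(L, N=degree + 1, increasing=True)

model = sm.GLM(deaths, sm.add_constant(Z), family=sm.families.Poisson()).fit()
theta = model.params[1:]                                   # polynomial coefficients
beta_hat = np.vander(L, N=degree + 1, increasing=True) @ theta
print(np.round(beta_hat, 4))                               # estimated lag curve
```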
Costa, Amine Farias; Hoek, Gerard; Brunekreef, Bert; Ponce de Leon, Antônio C M
2017-03-01
Evaluation of short-term mortality displacement is essential to accurately estimate the impact of short-term air pollution exposure on public health. We quantified mortality displacement by estimating single-day lag effects and cumulative effects of air pollutants on mortality using distributed lag models. We performed a daily time series analysis of nonaccidental and cause-specific mortality among elderly residents of São Paulo, Brazil, between 2000 and 2011. Effects of particulate matter smaller than 10 μm (PM10), nitrogen dioxide (NO2) and carbon monoxide (CO) were estimated in Poisson generalized additive models. Single-day lag effects of air pollutant exposure were estimated for 0-, 1- and 2-day lags. Distributed lag models with lags of 0-10, 0-20 and 0-30 days were used to assess mortality displacement and potential cumulative exposure effects. PM10, NO2 and CO were significantly associated with nonaccidental and cause-specific deaths in both single-day lag and cumulative lag models. Cumulative effect estimates for 0-10 days were larger than estimates for single-day lags. Cumulative effect estimates for 0-30 days were essentially zero for nonaccidental and circulatory deaths but remained elevated for respiratory and cancer deaths. We found evidence of mortality displacement within 30 days for nonaccidental and circulatory deaths in elderly residents of São Paulo. We did not find evidence of mortality displacement within 30 days for respiratory or cancer deaths. Citation: Costa AF, Hoek G, Brunekreef B, Ponce de Leon AC. 2017. Air pollution and deaths among elderly residents of São Paulo, Brazil: an analysis of mortality displacement. Environ Health Perspect 125:349-354; http://dx.doi.org/10.1289/EHP98.
Distributed lag effects and vulnerable groups of floods on bacillary dysentery in Huaihua, China
Liu, Zhi-Dong; Li, Jing; Zhang, Ying; Ding, Guo-Yong; Xu, Xin; Gao, Lu; Liu, Xue-Na; Liu, Qi-Yong; Jiang, Bao-Fa
2016-01-01
Understanding the potential links between floods and bacillary dysentery in China is important to develop appropriate intervention programs after floods. This study aimed to explore the distributed lag effects of floods on bacillary dysentery and to identify the vulnerable groups in Huaihua, China. Weekly number of bacillary dysentery cases from 2005–2011 were obtained during flood season. Flood data and meteorological data over the same period were obtained from the China Meteorological Data Sharing Service System. To examine the distributed lag effects, a generalized linear mixed model combined with a distributed lag non-linear model were developed to assess the relationship between floods and bacillary dysentery. A total of 3,709 cases of bacillary dysentery were notified over the study period. The effects of floods on bacillary dysentery continued for approximately 3 weeks with a cumulative risk ratio equal to 1.52 (95% CI: 1.08–2.12). The risks of bacillary dysentery were higher in females, farmers and people aged 15–64 years old. This study suggests floods have increased the risk of bacillary dysentery with 3 weeks’ effects, especially for the vulnerable groups identified. Public health programs should be taken to prevent and control a potential risk of bacillary dysentery after floods. PMID:27427387
Distributed lag effects and vulnerable groups of floods on bacillary dysentery in Huaihua, China.
Liu, Zhi-Dong; Li, Jing; Zhang, Ying; Ding, Guo-Yong; Xu, Xin; Gao, Lu; Liu, Xue-Na; Liu, Qi-Yong; Jiang, Bao-Fa
2016-07-18
Understanding the potential links between floods and bacillary dysentery in China is important to develop appropriate intervention programs after floods. This study aimed to explore the distributed lag effects of floods on bacillary dysentery and to identify the vulnerable groups in Huaihua, China. Weekly number of bacillary dysentery cases from 2005-2011 were obtained during flood season. Flood data and meteorological data over the same period were obtained from the China Meteorological Data Sharing Service System. To examine the distributed lag effects, a generalized linear mixed model combined with a distributed lag non-linear model were developed to assess the relationship between floods and bacillary dysentery. A total of 3,709 cases of bacillary dysentery were notified over the study period. The effects of floods on bacillary dysentery continued for approximately 3 weeks with a cumulative risk ratio equal to 1.52 (95% CI: 1.08-2.12). The risks of bacillary dysentery were higher in females, farmers and people aged 15-64 years old. This study suggests floods have increased the risk of bacillary dysentery with 3 weeks' effects, especially for the vulnerable groups identified. Public health programs should be taken to prevent and control a potential risk of bacillary dysentery after floods.
Distributed lag effects and vulnerable groups of floods on bacillary dysentery in Huaihua, China
NASA Astrophysics Data System (ADS)
Liu, Zhi-Dong; Li, Jing; Zhang, Ying; Ding, Guo-Yong; Xu, Xin; Gao, Lu; Liu, Xue-Na; Liu, Qi-Yong; Jiang, Bao-Fa
2016-07-01
Understanding the potential links between floods and bacillary dysentery in China is important to develop appropriate intervention programs after floods. This study aimed to explore the distributed lag effects of floods on bacillary dysentery and to identify the vulnerable groups in Huaihua, China. Weekly number of bacillary dysentery cases from 2005-2011 were obtained during flood season. Flood data and meteorological data over the same period were obtained from the China Meteorological Data Sharing Service System. To examine the distributed lag effects, a generalized linear mixed model combined with a distributed lag non-linear model were developed to assess the relationship between floods and bacillary dysentery. A total of 3,709 cases of bacillary dysentery were notified over the study period. The effects of floods on bacillary dysentery continued for approximately 3 weeks with a cumulative risk ratio equal to 1.52 (95% CI: 1.08-2.12). The risks of bacillary dysentery were higher in females, farmers and people aged 15-64 years old. This study suggests floods have increased the risk of bacillary dysentery with 3 weeks’ effects, especially for the vulnerable groups identified. Public health programs should be taken to prevent and control a potential risk of bacillary dysentery after floods.
Heat transfer of phase-change materials in two-dimensional cylindrical coordinates
NASA Technical Reports Server (NTRS)
Labdon, M. B.; Guceri, S. I.
1981-01-01
A two-dimensional phase-change problem is solved numerically in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are solved numerically in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.
Response of spectral vegetation indices to soil moisture in grasslands and shrublands
Zhang, Li; Ji, Lei; Wylie, Bruce K.
2011-01-01
The relationships between satellite-derived vegetation indices (VIs) and soil moisture are complicated because of the time lag of the vegetation response to soil moisture. In this study, we used a distributed lag regression model to evaluate the lag responses of VIs to soil moisture for grasslands and shrublands at Soil Climate Analysis Network sites in the central and western United States. We examined the relationships between Moderate Resolution Imaging Spectroradiometer (MODIS)-derived VIs and soil moisture measurements. The Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) showed significant lag responses to soil moisture. The lag length varies from 8 to 56 days for NDVI and from 16 to 56 days for NDWI. However, the lag response of NDVI and NDWI to soil moisture varied among the sites. Our study suggests that the lag effect needs to be taken into consideration when the VIs are used to estimate soil moisture.
A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials
NASA Astrophysics Data System (ADS)
Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.
2001-08-01
A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(-Q(x)) dx (Q polynomial or Q(x) = |x|^β, β > 0), or (2) varying weights dα_n(x) = e^(-nV(x)) dx (V analytic, lim_(x→∞) V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest descent type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.
Routh's algorithm - A centennial survey
NASA Technical Reports Server (NTRS)
Barnett, S.; Siljak, D. D.
1977-01-01
One hundred years have passed since the publication of Routh's fundamental work on determining the stability of constant linear systems. The paper presents an outline of the algorithm and considers such aspects of it as the distribution of zeros and applications of it that relate to the greatest common divisor, the abscissa of stability, continued fractions, canonical forms, the nonnegativity of polynomials and polynomial matrices, the absolute stability, optimality and passivity of dynamic systems, and the stability of two-dimensional circuits.
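As a concrete companion to the survey, a minimal sketch of the Routh array itself is given below; it assumes the generic case with no zero pivots or all-zero rows, which are exactly the special situations the survey's refinements address.

```python
# Minimal Routh array for a real polynomial (coefficients in descending powers).
# Sign changes in the first column count right-half-plane roots. Generic case
# only: zero pivots and all-zero rows are not handled here.
import numpy as np

def routh_array(coeffs):
    c = np.asarray(coeffs, dtype=float)
    n = len(c) - 1                       # polynomial degree
    width = n // 2 + 1
    table = np.zeros((n + 1, width))
    table[0, :len(c[0::2])] = c[0::2]    # row s^n: even-position coefficients
    table[1, :len(c[1::2])] = c[1::2]    # row s^(n-1): odd-position coefficients
    for i in range(2, n + 1):
        for j in range(width - 1):
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table

def right_half_plane_roots(coeffs):
    first_col = routh_array(coeffs)[:, 0]
    signs = np.sign(first_col[first_col != 0])
    return int(np.sum(signs[:-1] != signs[1:]))   # number of sign changes

# s^3 + 2 s^2 + 3 s + 10: two sign changes, hence two right-half-plane roots.
print(right_half_plane_roots([1, 2, 3, 10]))      # -> 2
```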
Multivariable Hermite polynomials and phase-space dynamics
NASA Technical Reports Server (NTRS)
Dattoli, G.; Torre, Amalia; Lorenzutta, S.; Maino, G.; Chiccoli, C.
1994-01-01
The phase-space approach to classical and quantum systems demands advanced analytical tools. Such an approach characterizes the evolution of a physical system through a set of variables, reducing to the canonically conjugate variables in the classical limit. It often happens that phase-space distributions can be written in terms of quadratic forms involving the above-quoted variables. A significant analytical tool for treating these problems may come from the generalized many-variable Hermite polynomials, defined on quadratic forms in R(exp n). They form an orthonormal system in many dimensions and seem to be the natural tool for treating harmonic oscillator dynamics in phase space. In this contribution we discuss the properties of these polynomials and present some applications to physical problems.
Gouveia, Nelson; Junger, Washington Leite
2018-01-01
Air pollution is an important public health concern, especially for children, who are particularly susceptible. Latin America has a large child population, is highly urbanized, and levels of pollution are substantially high, making the potential health impact of air pollution quite large. We evaluated the effect of air pollution on respiratory mortality in children in four large urban centers: Mexico City, Santiago, Chile, and Sao Paulo and Rio de Janeiro in Brazil. Generalized Additive Models in Poisson regression were used to fit daily time-series of mortality due to respiratory diseases in infants and children, and levels of PM10 and O3. Single lag and constrained polynomial distributed lag models were explored. Analyses were carried out per cause for each age group and each city. Fixed- and random-effects meta-analysis was conducted in order to combine the city-specific results in a single summary estimate. These cities host nearly 43 million people and pollution levels were above the WHO guidelines. For PM10, the percentage increase in risk of death due to respiratory diseases in infants in a fixed-effect model was 0.47% (0.09-0.85). For respiratory deaths in children 1-5 years old, the increase in risk was 0.58% (0.08-1.08), while a higher effect was observed for lower respiratory infections (LRI) in children 1-14 years old [1.38% (0.91-1.85)]. For O3, the only summarized estimate that was statistically significant was for LRI in infants. Analysis by season showed effects of O3 in the warm season for respiratory diseases in infants, while negative effects were observed for respiratory and LRI deaths in children. We provided comparable mortality impact estimates of air pollutants across these cities and age groups. This information is important because many public policies aimed at preventing the adverse effects of pollution on health consider children as the population group that deserves the highest protection. Copyright © 2017 Elsevier Ltd. All rights reserved.
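The pooling step described above (combining city-specific estimates into one summary) can be sketched as a fixed-effect inverse-variance meta-analysis; the estimates and standard errors below are made-up placeholders, not the study's numbers, and a random-effects variant would add a between-city variance component.

```python
# Fixed-effect inverse-variance pooling of hypothetical city-specific estimates
# (percentage increase in respiratory mortality per pollutant increment).
# The numbers are illustrative placeholders, not the published results.
import numpy as np

betas = np.array([0.35, 0.60, 0.42, 0.55])   # city-specific % increases
ses = np.array([0.30, 0.25, 0.40, 0.35])     # their standard errors

w = 1.0 / ses**2                             # inverse-variance weights
beta_fixed = np.sum(w * betas) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))
lo, hi = beta_fixed - 1.96 * se_fixed, beta_fixed + 1.96 * se_fixed

print(f"pooled % increase: {beta_fixed:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```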
NASA Technical Reports Server (NTRS)
Merz, A. W.; Hague, D. S.
1975-01-01
An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64 sub 1 - 212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order epsilon sub 1 at the leading edge, and a polynomial of order epsilon sub 2 at the trailing edge. Epsilon sub 2 is a constant and epsilon sub 1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying epsilon sub 1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function technique by using interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches namely smoothing spline and fourth order polynomial in extracting the RND. The implied volatility of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of distribution. The difference between the first moment of distribution and the price of underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that the estimation of RND using a fourth order polynomial is more appropriate to be used compared to a smoothing spline in which the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
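A simplified, strike-space version of the pipeline can be sketched as follows: fit a fourth-order polynomial to the implied-volatility smile, price calls on a fine strike grid, and differentiate twice (Breeden-Litzenberger) to obtain the risk-neutral density. The option quotes, rate, and maturity below are invented for illustration; the study itself interpolates in delta space on DJIA index options.

```python
# Sketch of risk-neutral density extraction: fit a fourth-order polynomial to an
# implied-volatility smile in strike space, price calls on a fine grid, then apply
# Breeden-Litzenberger: density(K) = exp(rT) * d2C/dK2. Quotes are invented.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, r, T = 100.0, 0.01, 1.0 / 12.0                      # one-month horizon
strikes = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120.0])
ivs = np.array([0.28, 0.26, 0.24, 0.22, 0.20, 0.19, 0.185, 0.183, 0.182])

smile = np.poly1d(np.polyfit(strikes, ivs, 4))          # fourth-order polynomial fit

K_grid = np.linspace(82, 118, 400)
calls = bs_call(S0, K_grid, T, r, smile(K_grid))

dC = np.gradient(calls, K_grid)
density = np.exp(r * T) * np.gradient(dC, K_grid)       # risk-neutral density

first_moment = np.trapz(K_grid * density, K_grid) / np.trapz(density, K_grid)
print("approximate mean of the risk-neutral density:", round(first_moment, 2))
```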
Ojha, Kumari Shikha; Kerry, Joseph P; Alvarez, Carlos; Walsh, Des; Tiwari, Brijesh K
2016-07-01
The objective of this study was to investigate the efficacy of high intensity ultrasound on the fermentation profile of Lactobacillus sakei in a meat model system. Ultrasound power level (0-68.5 W) and sonication time (0-9 min) at 20 °C were assessed against the growth of L. sakei using a Microplate reader over a period of 24h. The L. sakei growth data showed a good fit with the Gompertz model (R(2)>0.90; SE<0.042). Second order polynomial models demonstrated the effect of ultrasonic power and sonication time on the specific growth rate (SGR, μ, h(-1)) and lag phase (λ, h). A higher SGR and a shorter lag phase were observed at low power (2.99 W for 5 min) compared to control. Conversely, a decrease (p<0.05) in SGR with an increase in lag phase was observed with an increase in ultrasonic power level. Cell-free extracts obtained after 24h fermentation of ultrasound treated samples showed antimicrobial activity against Staphylococcus aureus, Listeria monocytogenes, Escherichia coli and Salmonella typhimurium at lower concentrations compared to control. No significant difference (p<0.05) among treatments was observed for lactic acid content after a 24h fermentation period. This study showed that both stimulation and retardation of L. sakei is possible, depending on the ultrasonic power and sonication time employed. Hence, fermentation process involving probiotics to develop functional food products can be tailored by selection of ultrasound processing parameters. Copyright © 2016 Elsevier B.V. All rights reserved.
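The growth-curve fitting step can be sketched with scipy's curve_fit and the modified Gompertz (Zwietering) parameterization, in which A is the asymptote, mu the specific growth rate, and lam the lag phase duration; the optical-density data below are simulated, not the L. sakei measurements.

```python
# Fitting a modified Gompertz growth model (Zwietering parameterization) to a
# simulated optical-density curve: A is the asymptote, mu the specific growth
# rate and lam the lag phase. The data are synthetic, not L. sakei measurements.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

rng = np.random.default_rng(1)
t = np.linspace(0, 24, 49)                               # hours
y = gompertz(t, A=1.8, mu=0.35, lam=4.0) + rng.normal(0, 0.02, t.size)

(A_hat, mu_hat, lam_hat), _ = curve_fit(gompertz, t, y, p0=[1.5, 0.2, 3.0])
print(f"A = {A_hat:.2f}, specific growth rate = {mu_hat:.3f} 1/h, lag = {lam_hat:.2f} h")
```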
2013-08-01
... release; distribution unlimited. PA Number 412-TW-PA-13395. [Nomenclature fragment: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, ...] ... Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by ...
Radio pulsar glitches as a state-dependent Poisson process
NASA Astrophysics Data System (ADS)
Fulgenzi, W.; Melatos, A.; Hughes, B. D.
2017-10-01
Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10² vortices and ≲10² avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/α_c(β), where α_c(β) ≈ β^(-1/2) is a transition value. In the regime α ≳ α_c(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ α_c(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
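The mean-field picture can be illustrated with a small Monte Carlo sketch in which the crust-superfluid lag grows with spin-down and is decremented by glitches whose rate rises as the lag approaches the critical unpinning value. The specific rate law and glitch-size distribution used below are simple placeholders, not the exact forms analysed in the paper.

```python
# Monte Carlo sketch of a state-dependent Poisson glitch process: the lag x grows
# at the spin-down rate and is reduced by glitches whose rate increases as x
# approaches the critical unpinning lag x_c. The rate law and size law below are
# illustrative placeholders, not the paper's exact choices.
import numpy as np

rng = np.random.default_rng(2)
x_c, spin_down, dt = 1.0, 1e-3, 0.01
beta = 0.3                       # minimum fraction of the lag released per glitch
rate0 = 0.05                     # glitch rate at a reference lag

x, t, last = 0.5 * x_c, 0.0, 0.0
sizes, waits = [], []
while len(sizes) < 2000:
    t += dt
    x = min(x + spin_down * dt, x_c)
    rate = rate0 / (1.0 - x / x_c + 1e-6)        # diverges as x approaches x_c
    if rng.random() < rate * dt:                 # thinning of the Poisson process
        frac = rng.uniform(beta, 1.0)
        sizes.append(frac * x)
        waits.append(t - last)
        last = t
        x -= frac * x

print("mean glitch size:", np.mean(sizes))
print("mean waiting time:", np.mean(waits))
print("size vs waiting-time correlation:", np.corrcoef(sizes, waits)[0, 1])
```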
Thornton, B S; Hung, W T; Irving, J
1991-01-01
The response decay data of living cells subject to electric polarization is associated with their relaxation distribution function (RDF) and can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than normal cells and might be used as parameters to differentiate them and their associated tissues.
Gaussian quadrature and lattice discretization of the Fermi-Dirac distribution for graphene.
Oettinger, D; Mendoza, M; Herrmann, H J
2013-07-01
We construct a lattice kinetic scheme to study electronic flow in graphene. For this purpose, we first derive a basis of orthogonal polynomials, using as the weight function the ultrarelativistic Fermi-Dirac distribution at rest. Later, we use these polynomials to expand the respective distribution in a moving frame, for both cases, undoped and doped graphene. In order to discretize the Boltzmann equation and make feasible the numerical implementation, we reduce the number of discrete points in momentum space to 18 by applying a Gaussian quadrature, finding that the family of representative wave (2+1)-vectors, which satisfies the quadrature, reconstructs a honeycomb lattice. The procedure and discrete model are validated by solving the Riemann problem, finding excellent agreement with other numerical models. In addition, we have extended the Riemann problem to the case of different dopings, finding that by increasing the chemical potential the electronic fluid behaves as if it increases its effective viscosity.
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
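The range-bounding ingredient of the approach can be sketched for a single polynomial on [0, 1]: its Bernstein coefficients enclose the polynomial's range, giving cheap (if conservative) outer bounds. The sketch below shows only this enclosure, not the p-box optimization built on top of it.

```python
# Bernstein-coefficient enclosure of a polynomial's range on [0, 1]: the Bernstein
# coefficients bound the minimum and maximum of the polynomial on the interval.
# Only this range-bounding ingredient is sketched, not the p-box optimization.
import numpy as np
from math import comb

def bernstein_bounds(power_coeffs):
    """power_coeffs[j] multiplies x**j; returns (lower, upper) bounds on [0, 1]."""
    a = np.asarray(power_coeffs, dtype=float)
    n = len(a) - 1
    b = [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1)) for i in range(n + 1)]
    return min(b), max(b)

# p(x) = x - x**2 has true range [0, 0.25] on [0, 1]; the enclosure is conservative.
print(bernstein_bounds([0.0, 1.0, -1.0]))        # -> (0.0, 0.5)
```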
Matrix of moments of the Legendre polynomials and its application to problems of electrostatics
NASA Astrophysics Data System (ADS)
Savchenko, A. O.
2017-01-01
In this work, properties of the matrix of moments of the Legendre polynomials are presented and proven. In particular, the explicit form of the elements of the matrix inverse to the matrix of moments is found and theorems of the linear combination and orthogonality are proven. On the basis of these properties, the total charge and the dipole moment of a conducting ball in a nonuniform electric field, the charge distribution over the surface of the conducting ball, its multipole moments, and the force acting on a conducting ball situated on the axis of a nonuniform axisymmetric electric field are determined. All assertions are formulated in theorems, the proofs of which are based on the properties of the matrix of moments of the Legendre polynomials.
Chaudhari, Harshal Liladhar; Warad, Shivaraj; Ashok, Nipun; Baroudi, Kusai; Tarakji, Bassel
2016-01-01
Interleukin 17 (IL-17) is a pro-inflammatory cytokine produced mainly by Th17 cells. The present study was undertaken to investigate a possible association between the IL-17A genetic polymorphism at (-197A/G) and susceptibility to chronic and localized aggressive periodontitis (LAgP) in an Indian population. The study was carried out on 105 subjects, which included 35 LAgP patients, 35 chronic periodontitis patients and 35 healthy controls. Blood samples were drawn from the subjects and analyzed for the IL-17 genetic polymorphism at (-197A/G) using the polymerase chain reaction-restriction fragment length polymorphism method. A statistically significant difference was seen in the genotype distribution among chronic periodontitis patients, LAgP patients and healthy subjects. There was a significant difference in the distribution of alleles among chronic periodontitis patients, LAgP patients and healthy subjects. The odds ratio for the A allele versus the G allele was 5.1 between chronic periodontitis patients and healthy controls, and 5.1 between LAgP patients and healthy controls. Our study concluded that the IL-17A gene polymorphism at (-197A/G) is linked to chronic periodontitis and LAgP in the Indian population. The presence of allele A in the IL-17 gene polymorphism (-197A/G) can be considered a risk factor for chronic periodontitis and LAgP.
Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie
2018-02-13
Prolonged exposures can have complex relationships with health outcomes, as timing, duration, and intensity of exposure are all potentially relevant. Summary measures such as cumulative exposure or average intensity of exposure may not fully capture these relationships. We applied penalized and unpenalized distributed lag non-linear models (DLNMs) with flexible exposure-response and lag-response functions in order to examine the association between crystalline silica exposure and mortality from lung cancer and non-malignant respiratory disease in a cohort study of 2,342 California diatomaceous earth workers, followed 1942-2011. We also assessed associations using simple measures of cumulative exposure assuming linear exposure-response and constant lag-response. Measures of association from DLNMs were generally higher than from simpler models. Rate ratios from penalized DLNMs corresponding to average daily exposures of 0.4 mg/m3 during lag years 31-50 prior to the age of observed cases were 1.47 (95% confidence interval (CI) 0.92, 2.35) for lung cancer and 1.80 (95% CI: 1.14, 2.85) for non-malignant respiratory disease. Rate ratios from the simpler models for the same exposure scenario were 1.15 (95% CI: 0.89-1.48) and 1.23 (95% CI: 1.03-1.46) respectively. Longitudinal cohort studies of prolonged exposures and chronic health outcomes should explore methods allowing for flexibility and non-linearities in the exposure-lag-response. © The Author(s) 2018. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-01-01
Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and session key agreement property strongly among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure. PMID:28338632
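The symmetric-polynomial ingredient can be illustrated with a classic Blundo-style key pre-distribution sketch: a setup authority picks a symmetric bivariate polynomial modulo a prime, node i stores the univariate share f(i, y), and any two nodes compute the same pairwise key f(i, j) = f(j, i). This is a generic textbook construction, not the EKM protocol itself, which adds hashing, session identifiers, group heads, and rekeying; the prime and threshold below are arbitrary.

```python
# Blundo-style symmetric bivariate polynomial key pre-distribution: nodes i and j
# both compute f(i, j) = f(j, i) from their own univariate shares. Generic sketch;
# the EKM scheme layers hashing, session identifiers and group management on top.
import random

P = 2_147_483_647                 # public prime modulus (arbitrary choice)
T = 3                             # polynomial degree = collusion threshold

random.seed(7)
# Symmetric coefficient matrix c[a][b] = c[b][a] defines f(x, y) = sum c[a][b] x^a y^b.
c = [[0] * (T + 1) for _ in range(T + 1)]
for a in range(T + 1):
    for b in range(a, T + 1):
        c[a][b] = c[b][a] = random.randrange(P)

def share(node_id):
    """Coefficients of the univariate share g_i(y) = f(node_id, y)."""
    return [sum(c[a][b] * pow(node_id, a, P) for a in range(T + 1)) % P
            for b in range(T + 1)]

def pairwise_key(my_share, other_id):
    return sum(coef * pow(other_id, b, P) for b, coef in enumerate(my_share)) % P

share_5, share_9 = share(5), share(9)
assert pairwise_key(share_5, 9) == pairwise_key(share_9, 5)
print("shared pairwise key:", pairwise_key(share_5, 9))
```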
Chen, Huifang; Xie, Lei
2014-01-01
Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing the unique session identifier and binding the joining time with the capability of recovering previous session keys, the problem of the collusion attack between revoked users and new joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation of the maximum allowed number of revoked users on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in bad environments, where a strong collusion attack resistance is required and many users could be revoked. PMID:25529204
A generalized multivariate regression model for modelling ocean wave heights
NASA Astrophysics Data System (ADS)
Wang, X. L.; Feng, Y.; Swail, V. R.
2012-04-01
In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of Pierce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.
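The two modelling choices highlighted here, a Box-Cox transformation of the skewed response and an allowance for lag-1 autocorrelation, can be sketched on synthetic data as below; the real model regresses transformed Hs on MSLP-derived covariates rather than on the single stand-in predictor used in the sketch.

```python
# Sketch of the two modelling choices: Box-Cox transform the skewed response,
# then regress it on a predictor plus its own lag-1 value to absorb residual
# autocorrelation. Synthetic data stand in for Hs and the MSLP-derived covariates.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n = 2000
mslp_index = rng.normal(0, 1, n)                 # stand-in for an MSLP covariate

eps = np.zeros(n)                                # AR(1) noise -> autocorrelated Hs
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(0, 0.3)
hs = np.exp(0.5 + 0.4 * mslp_index + eps)        # positively skewed "wave heights"

hs_bc, lam = stats.boxcox(hs)                    # data-adaptive Box-Cox transform
y = hs_bc[1:]
X = sm.add_constant(np.column_stack([mslp_index[1:], hs_bc[:-1]]))  # predictor + lag-1
fit = sm.OLS(y, X).fit()

print("Box-Cox lambda:", round(lam, 2))
print("coefficients (const, MSLP index, lag-1 term):", np.round(fit.params, 3))
```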
Bildirici, Melike; Ersin, Özgür Ömer
2018-01-01
The study aims to combine the autoregressive distributed lag (ARDL) cointegration framework with smooth transition autoregressive (STAR)-type nonlinear econometric models for causal inference. Further, the proposed STAR distributed lag (STARDL) models offer new insights in terms of modeling nonlinearity in the long- and short-run relations between the analyzed variables. The STARDL method allows modeling and testing nonlinearity in the short-run parameters, in the long-run parameters, or in both the short- and long-run relations. To this aim, the relation between CO2 emissions and economic growth rates in the USA is investigated for the 1800-2014 period, which is one of the largest data sets available. The proposed hybrid models, namely the logistic, exponential, and second-order logistic smooth transition autoregressive distributed lag (LSTARDL, ESTARDL, and LSTAR2DL) models, combine the STAR framework with nonlinear ARDL-type cointegration to augment the linear ARDL approach with smooth transitional nonlinearity. The proposed models provide a new approach to the relevant econometrics and environmental economics literature. Our results indicated the presence of asymmetric long-run and short-run relations between the analyzed variables, running from GDP towards CO2 emissions. With the newly proposed STARDL models, the results point to important differences in the response of CO2 emissions in regimes 1 and 2 for the estimated LSTAR2DL and LSTARDL models.
Patterns of Activity in A Global Model of A Solar Active Region
NASA Technical Reports Server (NTRS)
Bradshaw, S. J.; Viall, N. M.
2016-01-01
In this work we investigate the global activity patterns predicted from a model active region heated by distributions of nanoflares that have a range of frequencies. What differs is the average frequency of the distributions. The activity patterns are manifested in time lag maps of narrow-band instrument channel pairs. We combine hydrodynamic and forward modeling codes with a magnetic field extrapolation to create a model active region and apply the time lag method to synthetic observations. Our aim is not to reproduce a particular set of observations in detail, but to recover some typical properties and patterns observed in active regions. Our key findings are the following. (1) Cooling dominates the time lag signature and the time lags between the channel pairs are generally consistent with observed values. (2) Shorter coronal loops in the core cool more quickly than longer loops at the periphery. (3) All channel pairs show zero time lag when the line of sight passes through coronal loop footpoints. (4) There is strong evidence that plasma must be re-energized on a timescale comparable to the cooling timescale to reproduce the observed coronal activity, but it is likely that a relatively broad spectrum of heating frequencies are operating across active regions. (5) Due to their highly dynamic nature, we find nanoflare trains produce zero time lags along entire flux tubes in our model active region that are seen between the same channel pairs in observed active regions.
Seven-day cumulative effects of air pollutants increase respiratory ER visits up to threefold.
Schvartsman, Cláudio; Pereira, Luiz Alberto Amador; Braga, Alfésio Luiz Ferreira; Farhat, Sylvia Costa Lima
2017-02-01
Children are especially vulnerable to respiratory injury induced by exposure to air pollutants. In the present study, we investigate periods of up to 7 days, and evaluate the lagged effects of exposure to air pollutants on the daily number of children and adolescents visiting the emergency room (ER) for the treatment of lower respiratory obstructive diseases (LROD), in the city of São Paulo, Brazil. Daily records of LROD-related ER visits by children and adolescents under the age of 19, from January 2000 to December 2007 (2,922 days), were included in the study. Time-series regression models (generalized linear Poisson) were used to control for short- and long-term trends, as well as for temperature and relative humidity. Third-degree polynomial lag models were used to estimate both lag structures and the cumulative effects of air pollutants. Effects of air pollutants were expressed as the percentage increase in LROD-related ER visits. We observed an acute effect on the same day of exposure to air pollutants; however, the cumulative effects of air pollutants on the number of LROD-related ER visits were almost threefold greater than those observed on the same day of exposure to PM10, SO2, and NO2, mainly in children aged 5 years and under. The 7-day cumulative effect of SO2 reached an 11.0% (95% CI: 5.0-16.7) increase in visits. Conclusion and Relevance: This study highlights the effects of intermediate-term exposure to air pollutants on LROD in children. Pediatr Pulmonol. 2017;52:205-212. © 2016 Wiley Periodicals, Inc.
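The "third-degree polynomial lag model" referred to here is the Almon construction: the lag-specific coefficients are constrained to lie on a low-order polynomial of the lag number, so the lagged exposures collapse into a handful of transformed regressors. A minimal version of that transform, on simulated pollutant and visit series rather than the São Paulo data, is sketched below.

```python
# Almon (constrained polynomial) distributed lag: the lag coefficients are forced
# onto a cubic in the lag number, so 8 lagged exposures collapse into 4 regressors.
# Simulated daily pollutant and ER-visit series are used, not the Sao Paulo data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, L, degree = 1500, 7, 3                         # 7-day lag window, cubic constraint

pollutant = rng.gamma(shape=4.0, scale=10.0, size=n)
true_betas = 0.004 * (1 - np.arange(L + 1) / (L + 1))   # decaying lag effects
eta = 0.5 + np.convolve(pollutant, true_betas)[:n]
visits = rng.poisson(np.exp(eta))

lag_matrix = np.column_stack([np.roll(pollutant, l) for l in range(L + 1)])[L:]
y = visits[L:]
lags = np.arange(L + 1)
basis = np.column_stack([lags**k for k in range(degree + 1)])
Z = lag_matrix @ basis                            # Almon-transformed regressors

fit = sm.GLM(y, sm.add_constant(Z), family=sm.families.Poisson()).fit()
beta_hat = basis @ fit.params[1:]                 # back out the lag-specific effects
print("lag-specific effects (lags 0..7):", np.round(beta_hat, 4))
print("cumulative 7-day effect:", round(beta_hat.sum(), 4))
```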
NASA Astrophysics Data System (ADS)
Passalacqua, Olivier; Ritz, Catherine; Parrenin, Frédéric; Urbini, Stefano; Frezzotti, Massimo
2017-09-01
Basal melt rate is the most important physical quantity to be evaluated when looking for an old-ice drilling site, and it depends to a great extent on the geothermal flux (GF), which is poorly known under the East Antarctic ice sheet. Given that wet bedrock has higher reflectivity than dry bedrock, the wetness of the ice-bed interface can be assessed using radar echoes from the bedrock. But, since basal conditions depend on heat transfer forced by climate but lagged by the thick ice, the basal ice may currently be frozen whereas in the past it was generally melting. For that reason, the risk of bias between present and past conditions has to be evaluated. The objective of this study is to assess which locations in the Dome C area could have been protected from basal melting at any time in the past, which requires evaluating GF. We used an inverse approach to retrieve GF from the radar-inferred distribution of wet and dry beds. A 1-D heat model is run over the last 800 ka to constrain the value of GF by assessing a critical ice thickness, i.e. the minimum ice thickness that would allow the present local distribution of basal melting. A regional map of the GF was then inferred over an 80 km × 130 km area, with an N-S gradient and with values ranging from 48 to 60 mW m⁻². The forward model was then emulated by a polynomial function to compute a time-averaged value of the spatially variable basal melt rate over the region. Three main subregions appear to be free of basal melting, two because of thin overlying ice and one, north of Dome C, because of a low GF.
Duong, Manh Hong; Han, The Anh
2016-12-01
In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study of this paper reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] as d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations for our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.
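The link to random polynomial theory can be made concrete with a small Monte Carlo experiment: draw polynomial coefficients from a normal distribution and count the real roots that fall in (0, 1), the quantity tied to the number of internal equilibria in the two-strategy case. The plain power-basis Gaussian model below is a generic stand-in for the payoff-difference polynomials analysed in the paper.

```python
# Monte Carlo estimate of the expected number of real zeros in (0, 1) of a random
# polynomial with i.i.d. standard normal coefficients, mirroring the paper's link
# between internal equilibria and zeros of random polynomials (power basis used
# here as a generic stand-in for the payoff-difference polynomials).
import numpy as np

rng = np.random.default_rng(5)

def zeros_in_unit_interval(degree):
    coeffs = rng.normal(size=degree + 1)          # leading to constant term
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.count_nonzero((real > 0.0) & (real < 1.0))

for d in (2, 4, 8, 16, 32):
    counts = [zeros_in_unit_interval(d) for _ in range(4000)]
    print(f"degree {d:2d}: expected number of zeros in (0,1) ~ {np.mean(counts):.3f}")
```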
Chromosome behaviour in Rhoeo spathacea var. variegata.
Lin, Y J
1980-01-01
Rhoeo spathacea var. variegata is unusual in that its twelve chromosomes are arranged in a ring at meiosis. The order of the chromosomes has been established, and each chromosome arm has been designated a letter in accordance with the segmental interchange theory. Chromosomes are often irregularly orientated at metaphase I. Chromosomes at anaphase I are generally distributed equally (6-6, 58.75%) although not necessarily balanced. Due to adjacent distribution, 7-5 distribution at anaphase I was frequently observed (24.17%), and due to lagging, 6-1-5 and 5-2-5 distributions were also observed (10.83% and 3.33% respectively). Three types of abnormal distribution, 8-4, 7-1-4 and 6-2-4 were observed very infrequently (2.92% total), and their possible origins are discussed. Irregularities, such as adjacent distribution and lagging, undoubtedly reduce the fertility of the plant because of the resulting unbalanced gametes.
NASA Astrophysics Data System (ADS)
Silva, Antonio
2005-03-01
It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
Algebraic criteria for positive realness relative to the unit circle.
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1973-01-01
A definition is presented of the circle positive realness of real rational functions relative to the unit circle in the complex variable plane. The problem of testing this kind of positive reality is reduced to the algebraic problem of determining the distribution of zeros of a real polynomial with respect to and on the unit circle. Such reformulation of the problem avoids the search for explicit information about imaginary poles of rational functions. The stated algebraic problem is solved by applying the polynomial criteria of Marden (1966) and Jury (1964), and a completely recursive algorithm for circle positive realness is obtained.
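A brute-force numerical complement to the algebraic criteria is to count the zeros inside, on, and outside the unit circle directly; the sketch below does this with numpy's root finder and serves only as a check on, not a replacement for, the recursive Marden/Jury-type tests discussed here.

```python
# Brute-force count of a real polynomial's zeros inside, on and outside the unit
# circle. The paper's aim is to obtain this distribution by recursive algebraic
# (Marden/Jury-type) criteria; this numeric check is only for comparison.
import numpy as np

def unit_circle_distribution(coeffs, tol=1e-9):
    moduli = np.abs(np.roots(coeffs))             # coefficients in descending powers
    return {"inside": int(np.sum(moduli < 1 - tol)),
            "on": int(np.sum(np.abs(moduli - 1) <= tol)),
            "outside": int(np.sum(moduli > 1 + tol))}

# z^3 - 0.5 z^2 + 0.25 z - 1 has zeros on both sides of the unit circle.
print(unit_circle_distribution([1.0, -0.5, 0.25, -1.0]))
```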
An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models
NASA Astrophysics Data System (ADS)
Dukkipati, Ambedkar; Manathara, Joel George
In this paper we study the representation of KL-divergence minimization, in cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed to solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method to compute an implicit representation of minimum KL-divergence models.
Smoothing optimization of supporting quadratic surfaces with Zernike polynomials
NASA Astrophysics Data System (ADS)
Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu
2018-03-01
A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surface and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by stitching the results of the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.
Lags in the response of mountain plant communities to climate change.
Alexander, Jake M; Chalmandrier, Loïc; Lenoir, Jonathan; Burgess, Treena I; Essl, Franz; Haider, Sylvia; Kueffer, Christoph; McDougall, Keith; Milbau, Ann; Nuñez, Martin A; Pauchard, Aníbal; Rabitsch, Wolfgang; Rew, Lisa J; Sanders, Nathan J; Pellissier, Loïc
2018-02-01
Rapid climatic changes and increasing human influence at high elevations around the world will have profound impacts on mountain biodiversity. However, forecasts from statistical models (e.g. species distribution models) rarely consider that plant community changes could substantially lag behind climatic changes, hindering our ability to make temporally realistic projections for the coming century. Indeed, the magnitudes of lags, and the relative importance of the different factors giving rise to them, remain poorly understood. We review evidence for three types of lag: "dispersal lags" affecting plant species' spread along elevational gradients, "establishment lags" following their arrival in recipient communities, and "extinction lags" of resident species. Variation in lags is explained by variation among species in physiological and demographic responses, by effects of altered biotic interactions, and by aspects of the physical environment. Of these, altered biotic interactions could contribute substantially to establishment and extinction lags, yet impacts of biotic interactions on range dynamics are poorly understood. We develop a mechanistic community model to illustrate how species turnover in future communities might lag behind simple expectations based on species' range shifts with unlimited dispersal. The model shows a combined contribution of altered biotic interactions and dispersal lags to plant community turnover along an elevational gradient following climate warming. Our review and simulation support the view that accounting for disequilibrium range dynamics will be essential for realistic forecasts of patterns of biodiversity under climate change, with implications for the conservation of mountain species and the ecosystem functions they provide. © 2017 John Wiley & Sons Ltd.
Time-dependent Electron Acceleration in Blazar Transients: X-Ray Time Lags and Spectral Formation
NASA Astrophysics Data System (ADS)
Lewis, Tiffany R.; Becker, Peter A.; Finke, Justin D.
2016-06-01
Electromagnetic radiation from blazar jets often displays strong variability, extending from radio to γ-ray frequencies. In a few cases, this variability has been characterized using Fourier time lags, such as those detected in the X-rays from Mrk 421 using BeppoSAX. The lack of a theoretical framework to interpret the data has motivated us to develop a new model for the formation of the X-ray spectrum and the time lags in blazar jets based on a transport equation including terms describing stochastic Fermi acceleration, synchrotron losses, shock acceleration, adiabatic expansion, and spatial diffusion. We derive the exact solution for the Fourier transform of the electron distribution and use it to compute the Fourier transform of the synchrotron radiation spectrum and the associated X-ray time lags. The same theoretical framework is also used to compute the peak flare X-ray spectrum, assuming that a steady-state electron distribution is achieved during the peak of the flare. The model parameters are constrained by comparing the theoretical predictions with the observational data for Mrk 421. The resulting integrated model yields, for the first time, a complete first-principles physical explanation for both the formation of the observed time lags and the shape of the peak flare X-ray spectrum. It also yields direct estimates of the strength of the shock and the stochastic magnetohydrodynamical wave acceleration components in the Mrk 421 jet.
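The observable being modelled, a Fourier-frequency-dependent time lag between two bands, can be computed from data via the cross spectrum: the phase of the cross spectrum divided by 2πf. A minimal sketch on synthetic light curves, where the "hard" band is simply a delayed, noisier copy of the "soft" band, is given below; real analyses average the cross spectrum over many segments.

```python
# Fourier time lags from the cross spectrum of two light curves: for a hard band
# that is a delayed copy of the soft band, the recovered low-frequency lag equals
# the injected delay. Synthetic data; real analyses average over many segments.
import numpy as np

rng = np.random.default_rng(6)
n, dt, delay_bins = 4096, 1.0, 5                  # inject a 5 s hard-band delay

soft = np.convolve(rng.normal(0, 1, n), np.ones(20) / 20, mode="same")
hard = np.roll(soft, delay_bins) + rng.normal(0, 0.05, n)

freqs = np.fft.rfftfreq(n, dt)[1:]
S, H = np.fft.rfft(soft)[1:], np.fft.rfft(hard)[1:]
cross = S * np.conj(H)                            # positive phase: hard lags soft
time_lag = np.angle(cross) / (2 * np.pi * freqs)

print("median lag below 0.02 Hz (s):", round(np.median(time_lag[freqs < 0.02]), 2))
```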
Statistical models and time series forecasting of sulfur dioxide: a case study Tehran.
Hassanzadeh, S; Hosseinibalam, F; Alizadeh, R
2009-08-01
This study performed a time-series analysis, frequency distribution and prediction of SO2 levels for five stations (Pardisan, Vila, Azadi, Gholhak and Bahman) in Tehran for the period 2000-2005. Most sites show quite similar characteristics, with the highest pollution in autumn-winter and the least pollution in spring-summer. The frequency distributions show higher peaks at two residential sites. The potential for SO2 problems is high because of high emissions and the close geographical proximity of the major industrial and urban centers. The ACF and PACF are nonzero for several lags, indicating a mixed (ARMA) model, so an ARMA model was used at the Bahman station for forecasting SO2. The partial autocorrelations become close to 0 after about 5 lags, while the autocorrelations remain strong through all the lags shown. The results showed that an ARMA(2,2) model can provide reliable, satisfactory predictions for the time series.
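The forecasting step can be sketched with statsmodels, specifying an ARMA(2,2) structure as ARIMA order (2, 0, 2); the series below is simulated with known ARMA coefficients rather than taken from the Tehran SO2 record.

```python
# Fit an ARMA(2,2) model (ARIMA order (2, 0, 2)) and produce short-term forecasts.
# The series is simulated with known coefficients; the study applied this to the
# SO2 record at the Bahman station.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
n = 1000
e = rng.normal(0, 1, n)
y = np.zeros(n)
for t in range(2, n):                             # ARMA(2,2) recursion
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + e[t] + 0.4 * e[t - 1] + 0.3 * e[t - 2]
y = y + 20.0                                      # shift to a positive pollutant-like level

result = ARIMA(y, order=(2, 0, 2)).fit()
print(np.round(result.params, 3))
print("next 5 forecasts:", np.round(result.forecast(steps=5), 2))
```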
NASA Astrophysics Data System (ADS)
Pal, Debdatta; Mitra, Subrata Kumar
2018-01-01
This study used a quantile autoregressive distributed lag (QARDL) model to capture the asymmetric impact of rainfall on food production in India. It was found that the coefficient corresponding to rainfall in the QARDL increased up to the 75th quantile and started decreasing thereafter, though it remained in positive territory. Another interesting finding is that at the 90th quantile and above, the coefficients of rainfall, though positive, were not statistically significant; therefore, the benefit of high rainfall for crop production was not conclusive. However, the impact of other determinants, such as fertilizer and pesticide consumption, is quite uniform over the whole range of the distribution of food grain production.
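The quantile-specific slopes that drive this conclusion can be illustrated with plain quantile regression on simulated data in which the rainfall effect weakens in the upper tail; this is only the quantile-regression ingredient, not the full QARDL estimator with its distributed-lag and cointegration structure, and all numbers are invented.

```python
# Quantile regression sketch: estimate the rainfall coefficient at several
# quantiles of simulated food-grain output, constructed so the effect weakens in
# the upper tail. Plain quantile regression only, not the full QARDL estimator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 500
rain = rng.uniform(600, 1400, n)                  # rainfall (mm), invented range
noise = rng.normal(0, 1, n)
output = 50 + 0.06 * rain + (30 - 0.02 * rain) * noise   # heteroskedastic response
df = pd.DataFrame({"output": output, "rain": rain})

for q in (0.25, 0.50, 0.75, 0.90):
    res = smf.quantreg("output ~ rain", df).fit(q=q)
    print(f"q = {q:.2f}: rainfall coefficient = {res.params['rain']:.4f}")
```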
Periodic trim solutions with hp-version finite elements in time
NASA Technical Reports Server (NTRS)
Peters, David A.; Hou, Lin-Jun
1990-01-01
Finite elements in time as an alternative strategy for rotorcraft trim problems are studied. The research treats linear flap and linearized flap-lag response for both quasi-trim and trim cases. The connection between Fourier series analysis and hp-finite elements for a periodic problem is also examined. It is proved that Fourier series analysis is a special case of space-time finite elements in which one element is used with a strong displacement formulation. Comparisons are made with respect to accuracy among Fourier analysis, displacement methods, and mixed methods over a variety of parameters. The hp trade-off is studied for the periodic trim problem to provide an optimum step size and order of polynomial for a given error criterion. It is found that finite elements in time can outperform Fourier analysis for periodic problems for a given error criterion. The mixed method provides better results than does the displacement method.
A numerical study on dual-phase-lag model of bio-heat transfer during hyperthermia treatment.
Kumar, P; Kumar, Dinesh; Rai, K N
2015-01-01
The success of hyperthermia in the treatment of cancer depends on the precise prediction and control of temperature. Understanding the temperature distribution within living biological tissues is essential for hyperthermia treatment planning. In this paper, the dual-phase-lag model of bio-heat transfer has been studied using a Gaussian distribution source term under the most generalized boundary condition during hyperthermia treatment. An approximate analytical solution of the present problem has been obtained by a finite element wavelet Galerkin method that uses Legendre wavelets as basis functions. The multi-resolution analysis of the Legendre wavelet in the present case localizes small-scale variations of the solution and fast switching of functional bases. The whole analysis is presented in dimensionless form. The dual-phase-lag model of bio-heat transfer has been compared with the Pennes and thermal wave models of bio-heat transfer, and it has been found that large differences in the temperature at the hyperthermia position and in the time to achieve the hyperthermia temperature exist when the value of τ_T is increased. Particular cases in which the surface is subjected to boundary conditions of the first, second and third kind are discussed in detail. The use of the dual-phase-lag model of bio-heat transfer and of the finite element wavelet Galerkin method as the solution method helps in the precise prediction of temperature. The Gaussian distribution source term helps in the control of temperature during hyperthermia treatment. This makes the study more useful for clinical applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tracking lags in historical plant species' shifts in relation to regional climate change.
Ash, Jeremy D; Givnish, Thomas J; Waller, Donald M
2017-03-01
Can species shift their distributions fast enough to track changes in climate? We used abundance data from the 1950s and the 2000s in Wisconsin to measure shifts in the distribution and abundance of 78 forest-understory plant species over the last half-century and compare these shifts to changes in climate. We estimated temporal shifts in the geographic distribution of each species using vectors to connect abundance-weighted centroids from the 1950s and 2000s. These shifts in distribution reflect colonization, extirpation, and changes in abundance within sites, separately quantified here. We then applied climate analog analyses to compute vectors representing the climate change that each species experienced. Species shifted mostly to the northwest (mean: 49 ± 29 km) primarily reflecting processes of colonization and changes in local abundance. Analog climates for these species shifted even further to the northwest, however, exceeding species' shifts by an average of 90 ± 40 km. Most species thus failed to match recent rates of climate change. These lags decline in species that have colonized more sites and those with broader site occupancy, larger seed mass, and higher habitat fidelity. Thus, species' traits appear to affect their responses to climate change, but relationships are weak. As climate change accelerates, these lags will likely increase, potentially threatening the persistence of species lacking the capacity to disperse to new sites or locally adapt. However, species with greater lags have not yet declined more in abundance. The extent of these threats will likely depend on how other drivers of ecological change and interactions among species affect their responses to climate change. © 2016 John Wiley & Sons Ltd.
Higher-order Fourier analysis over finite fields and applications
NASA Astrophysics Data System (ADS)
Hatami, Pooya
Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where the traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F_p| is sufficiently large, they are essentially equivalent to either a Gowers norm or L_p norms.
NASA Astrophysics Data System (ADS)
Edwards, M. E.; Alsos, I. G.; Sjögren, P.; Coissac, E.; Gielly, L.; Yoccoz, N.; Føreid, M. K.; Taberlet, P.
2015-12-01
Knowledge of how climate change affected species distributions in the past may help us predict the effect of ongoing environmental changes. We explore how the use of modern DNA (AFLP fingerprinting techniques) and ancient DNA (metabarcoding of the P6 loop of chloroplast DNA) helps to reveal the past distribution of vascular plant species, dispersal processes, and the effect of species traits. Based on studies of modern DNA combined with species distribution models, we show the dispersal routes and barriers to dispersal throughout the circumarctic/circumboreal region, likely dispersal vectors, the cost of dispersal in terms of loss of genetic diversity, and how these relate to species traits, dispersal distance, and the size of the colonized region. We also estimate the expected future distribution and loss of genetic diversity and show how this relates to life form and adaptations to dispersal. To gain more knowledge on time lags in past range change events, we rely on palaeorecords. Current data on past distributions are limited by the taxonomic and temporal resolution of macrofossil and pollen records. We show how this may be improved by studying ancient DNA from lake sediments. DNA from lake sediments recorded about half of the flora surrounding the lake. Compared to macrofossils, the taxonomic resolution is similar but the detection rate is considerably improved. By taking into account the main determinants of founder effects, dispersal vectors, and dispersal lags, we may improve our ability to forecast the effects of climate change, while more studies on ancient DNA may provide us with knowledge on distribution time lags.
Model-based multi-fringe interferometry using Zernike polynomials
NASA Astrophysics Data System (ADS)
Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan
2018-06-01
In this paper, a general phase retrieval method is proposed, based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method obtains satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the small number of fringes; the method needs no auxiliary phase-shifting facilities (low cost) and is easy to implement, with no phase unwrapping required.
Modeling of Engine Parameters for Condition-Based Maintenance of the MTU Series 2000 Diesel Engine
2016-09-01
To model the behavior of the engine, an autoregressive distributed lag (ARDL) time series model of engine speed and exhaust gas temperature is derived. The lag length for the ARDL model is determined by whitening of the residuals.
TIME-DEPENDENT ELECTRON ACCELERATION IN BLAZAR TRANSIENTS: X-RAY TIME LAGS AND SPECTRAL FORMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Tiffany R.; Becker, Peter A.; Finke, Justin D., E-mail: pbecker@gmu.edu, E-mail: tlewis13@gmu.edu, E-mail: justin.finke@nrl.navy.mil
2016-06-20
Electromagnetic radiation from blazar jets often displays strong variability, extending from radio to γ-ray frequencies. In a few cases, this variability has been characterized using Fourier time lags, such as those detected in the X-rays from Mrk 421 using Beppo SAX. The lack of a theoretical framework to interpret the data has motivated us to develop a new model for the formation of the X-ray spectrum and the time lags in blazar jets based on a transport equation including terms describing stochastic Fermi acceleration, synchrotron losses, shock acceleration, adiabatic expansion, and spatial diffusion. We derive the exact solution for the Fourier transform of the electron distribution and use it to compute the Fourier transform of the synchrotron radiation spectrum and the associated X-ray time lags. The same theoretical framework is also used to compute the peak flare X-ray spectrum, assuming that a steady-state electron distribution is achieved during the peak of the flare. The model parameters are constrained by comparing the theoretical predictions with the observational data for Mrk 421. The resulting integrated model yields, for the first time, a complete first-principles physical explanation for both the formation of the observed time lags and the shape of the peak flare X-ray spectrum. It also yields direct estimates of the strength of the shock and the stochastic magnetohydrodynamical wave acceleration components in the Mrk 421 jet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroon, John J.; Becker, Peter A., E-mail: jkroon@gmu.edu, E-mail: pbecker@gmu.edu
Accreting black hole sources show a wide variety of rapid time variability, including the manifestation of time lags during X-ray transients, in which a delay (phase shift) is observed between the Fourier components of the hard and soft spectra. Despite a large body of observational evidence for time lags, no fundamental physical explanation for the origin of this phenomenon has been presented. We develop a new theoretical model for the production of X-ray time lags based on an exact analytical solution for the Fourier transform describing the diffusion and Comptonization of seed photons propagating through a spherical corona. The resulting Green's function can be convolved with any source distribution to compute the associated Fourier transform and time lags, hence allowing us to explore a wide variety of injection scenarios. We show that thermal Comptonization is able to self-consistently explain both the X-ray time lags and the steady-state (quiescent) X-ray spectrum observed in the low-hard state of Cyg X-1. The reprocessing of bremsstrahlung seed photons produces X-ray time lags that diminish with increasing Fourier frequency, in agreement with the observations for a wide range of sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhmanov, E A; Suetin, S P
2013-09-30
The distribution of the zeros of the Hermite-Padé polynomials of the first kind for a pair of functions with an arbitrary even number of common branch points lying on the real axis is investigated under the assumption that this pair of functions forms a generalized complex Nikishin system. It is proved (Theorem 1) that the zeros have a limiting distribution, which coincides with the equilibrium measure of a certain compact set having the S-property in a harmonic external field. The existence problem for S-compact sets is solved in Theorem 2. The main idea of the proof of Theorem 1 consists in replacing a vector equilibrium problem in potential theory by a scalar problem with an external field and then using the general Gonchar-Rakhmanov method, which was worked out in the solution of the '1/9'-conjecture. The relation of the result obtained here to some results and conjectures due to Nuttall is discussed. Bibliography: 51 titles.
Fast computation of close-coupling exchange integrals using polynomials in a tree representation
NASA Astrophysics Data System (ADS)
Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich
2011-03-01
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems. Program summary: Program title: TXINT. Catalogue identifier: AEHS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12 332. No. of bytes in distributed program, including test data, etc.: 157 086. Distribution format: tar.gz. Programming language: Fortran 95. Computer: All with a Fortran 95 compiler. Operating system: All with a Fortran 95 compiler. RAM: Depends heavily on input, usually less than 100 MiB. Classification: 16.10. Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model. Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials. Restrictions: We restrict ourselves to a defined collision system in the impact parameter model. Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code. Additional comments: This program makes heavy use of the new features provided by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, and a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0, GNU Fortran 95 Compiler "g95" from version 4.2.0, and Intel Fortran Compiler "ifort" from version 11.0.
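To make the storage idea concrete, here is a minimal left-child right-sibling (LCRS) tree sketch in Python (illustrative only, not the TXINT Fortran implementation): each node stores one polynomial term and two links, so a node can have any number of children at a cost of two pointers per node.

```python
class Node:
    """One polynomial term stored as a node of a left-child right-sibling tree."""
    def __init__(self, coeff, exponents):
        self.coeff = coeff            # numeric coefficient of the term
        self.exponents = exponents    # e.g. a tuple of exponents, one per variable
        self.child = None             # first child
        self.sibling = None           # next sibling

    def add_child(self, node):
        # Prepend to the child list: O(1) insertion, typical for LCRS trees.
        node.sibling = self.child
        self.child = node
        return node

    def terms(self):
        # Depth-first traversal yielding every node in the subtree.
        yield self
        kid = self.child
        while kid is not None:
            yield from kid.terms()
            kid = kid.sibling

# Usage: a root term with two child terms.
root = Node(1.0, (0, 0))
root.add_child(Node(2.5, (1, 0)))
root.add_child(Node(-0.5, (0, 2)))
print([(n.coeff, n.exponents) for n in root.terms()])
```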
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
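For background, the polynomial-interpolation idea underlying all such threshold schemes is Shamir's classic (t, n) construction; the sketch below shows only that baseline (it is not the threshold-changeable scheme proposed in the paper), and the prime modulus is an illustrative choice.

```python
import random

P = 2**127 - 1  # Mersenne prime used as the field modulus (illustrative choice)

def make_shares(secret, t, n):
    # Random degree-(t-1) polynomial with constant term equal to the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 over GF(P); any t shares reconstruct the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(recover(shares[:3]))   # any 3 of the 5 shares suffice -> 123456789
```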
Predictability in community dynamics.
Blonder, Benjamin; Moulton, Derek E; Blois, Jessica; Enquist, Brian J; Graae, Bente J; Macias-Fauria, Marc; McGill, Brian; Nogué, Sandra; Ordonez, Alejandro; Sandel, Brody; Svenning, Jens-Christian
2017-03-01
The coupling between community composition and climate change spans a gradient from no lags to strong lags. The no-lag hypothesis is the foundation of many ecophysiological models, correlative species distribution modelling and climate reconstruction approaches. Simple lag hypotheses have become prominent in disequilibrium ecology, proposing that communities track climate change following a fixed function or with a time delay. However, more complex dynamics are possible and may lead to memory effects and alternate unstable states. We develop graphical and analytic methods for assessing these scenarios and show that these dynamics can appear in even simple models. The overall implications are that (1) complex community dynamics may be common and (2) detailed knowledge of past climate change and community states will often be necessary yet sometimes insufficient to make predictions of a community's future state. © 2017 John Wiley & Sons Ltd/CNRS.
Flap-Lag-Torsion Stability in Forward Flight
NASA Technical Reports Server (NTRS)
Panda, B.; Chopra, I.
1985-01-01
The aeroelastic stability of a three-degree-of-freedom flap-lag-torsion blade in forward flight is examined. Quasisteady aerodynamics with a dynamic inflow model is used. The nonlinear time-dependent periodic blade response is calculated using an iterative procedure based on Floquet theory. The periodic perturbation equations are solved for stability using Floquet transition matrix theory as well as a constant coefficient approximation in the fixed reference frame. Results are presented for both stiff-inplane and soft-inplane blade configurations. The effects of several parameters on blade stability are examined, including structural coupling, pitch-flap and pitch-lag coupling, torsion stiffness, steady inflow distribution, dynamic inflow, blade response solution, and the constant coefficient approximation.
Hermite Polynomials and the Inverse Problem for Collisionless Equilibria
NASA Astrophysics Data System (ADS)
Allanson, O.; Neukirch, T.; Troscheit, S.; Wilson, F.
2017-12-01
It is long established that Hermite polynomial expansions in either velocity or momentum space can elegantly encode the non-Maxwellian velocity-space structure of a collisionless plasma distribution function (DF). In particular, Hermite polynomials in the canonical momenta naturally arise in the consideration of the 'inverse problem in collisionless equilibria' (IPCE): "for a given macroscopic/fluid equilibrium, what are the self-consistent Vlasov-Maxwell equilibrium DFs?". This question is of particular interest for the equilibrium and stability properties of a given macroscopic configuration, e.g. a current sheet. It can be relatively straightforward to construct a formal solution to IPCE by a Hermite expansion method, but several important questions remain regarding the use of this method. We present recent work that considers the necessary conditions of non-negativity, convergence, and the existence of all moments of an equilibrium DF solution found for IPCE. We also establish meaningful analogies between the equations that link the microscopic and macroscopic descriptions of the Vlasov-Maxwell equilibrium, and those that solve the initial value problem for the heat equation. In the language of the heat equation, IPCE poses the pressure tensor as the 'present' heat distribution over an infinite domain, and the non-Maxwellian features of the DF as the 'past' distribution. We find sufficient conditions for the convergence of the Hermite series representation of the DF, and prove that the non-negativity of the DF can be dependent on the magnetisation of the plasma. For DFs that decay at least as quickly as exp(-v^2/4), we show non-negativity is guaranteed for at least a finite range of magnetisation values, as parameterised by the ratio of the Larmor radius to the gradient length scale. 1. O. Allanson, T. Neukirch, S. Troscheit & F. Wilson: From one-dimensional fields to Vlasov equilibria: theory and application of Hermite polynomials, Journal of Plasma Physics, 82, 905820306, 2016. 2. O. Allanson, S. Troscheit & T. Neukirch: The inverse problem for collisionless plasma equilibria (invited paper for IMA Journal of Applied Mathematics, under review).
NASA Technical Reports Server (NTRS)
Minor, L. B.; Lasker, D. M.; Backous, D. D.; Hullar, T. E.; Shelhamer, M. J. (Principal Investigator)
1999-01-01
The horizontal angular vestibuloocular reflex (VOR) evoked by high-frequency, high-acceleration rotations was studied in five squirrel monkeys with intact vestibular function. The VOR evoked by steps of acceleration in darkness (3,000 degrees/s^2 reaching a velocity of 150 degrees/s) began after a latency of 7.3 +/- 1.5 ms (mean +/- SD). Gain of the reflex during the acceleration was 14.2 +/- 5.2% greater than that measured once the plateau head velocity had been reached. A polynomial regression was used to analyze the trajectory of the responses to steps of acceleration. A better representation of the data was obtained from a polynomial that included a cubic term in contrast to an exclusively linear fit. For sinusoidal rotations of 0.5-15 Hz with a peak velocity of 20 degrees/s, the VOR gain measured 0.83 +/- 0.06 and did not vary across frequencies or animals. The phase of these responses was close to compensatory except at 15 Hz, where a lag of 5.0 +/- 0.9 degrees was noted. The VOR gain did not vary with head velocity at 0.5 Hz but increased with velocity for rotations at frequencies of >=4 Hz (0.85 +/- 0.04 at 4 Hz, 20 degrees/s; 1.01 +/- 0.05 at 100 degrees/s, P < 0.0001). No responses to these rotations were noted in two animals that had undergone bilateral labyrinthectomy, indicating that inertia of the eye had a negligible effect for these stimuli. We developed a mathematical model of VOR dynamics to account for these findings. The inputs to the reflex come from linear and nonlinear pathways. The linear pathway is responsible for the constant gain across frequencies at a peak head velocity of 20 degrees/s and also for the phase lag at higher frequencies being less than that expected based on the reflex delay. The frequency- and velocity-dependent nonlinearity in VOR gain is accounted for by the dynamics of the nonlinear pathway. A transfer function that increases the gain of this pathway with frequency and a term related to the third power of head velocity are used to represent the dynamics of this pathway. This model accounts for the experimental findings and provides a method for interpreting responses to these stimuli after vestibular lesions.
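The cubic-versus-linear comparison described above amounts to ordinary polynomial regression; a minimal sketch (with synthetic eye-velocity data standing in for the recordings) is:

```python
import numpy as np

# Synthetic slow-phase eye-velocity trajectory during an acceleration step (illustrative only).
t = np.linspace(0, 0.05, 200)                      # 50 ms of data
v = 3000*t + 4e5*t**3 + np.random.default_rng(1).normal(0, 2, t.size)

lin = np.polyfit(t, v, 1)    # exclusively linear fit
cub = np.polyfit(t, v, 3)    # fit including a cubic term

rss_lin = np.sum((v - np.polyval(lin, t))**2)
rss_cub = np.sum((v - np.polyval(cub, t))**2)
print(f"residual sum of squares: linear {rss_lin:.1f}, cubic {rss_cub:.1f}")
```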
Palm oil price forecasting model: An autoregressive distributed lag (ARDL) approach
NASA Astrophysics Data System (ADS)
Hamid, Mohd Fahmi Abdul; Shabri, Ani
2017-05-01
The price of palm oil has fluctuated without any clear trend or cyclical pattern over the last few decades. The instability of food commodity prices causes them to change rapidly over time. This paper attempts to develop an Autoregressive Distributed Lag (ARDL) model for modeling and forecasting the price of palm oil. In order to use ARDL as a forecasting model, this paper modifies the data structure so that only lagged explanatory variables are used to explain the variation in palm oil price. We then compare the performance of this ARDL model with a benchmark model, namely ARIMA, in terms of forecasting accuracy. This paper also utilizes the ARDL bound testing approach to co-integration to examine the short-run and long-run relationships between the palm oil price and its determinants: production, stock, the price of soybean as the substitute for palm oil, and the price of crude oil. The comparative forecasting accuracy suggests that the ARDL model has better forecasting accuracy than ARIMA.
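A minimal sketch of the modified ARDL structure described above (only lagged explanatory variables on the right-hand side, fit by ordinary least squares; the variable names, lag orders, and data are illustrative, not taken from the paper):

```python
import numpy as np

def ardl_design(y, X, p, q):
    """Build an ARDL(p, q) design matrix: lags 1..p of y and lags 1..q of each column of X."""
    T = len(y)
    start = max(p, q)
    cols = [np.ones(T - start)]
    cols += [y[start - k:T - k] for k in range(1, p + 1)]          # lagged dependent variable
    for j in range(X.shape[1]):
        cols += [X[start - k:T - k, j] for k in range(1, q + 1)]   # lagged explanatory variables
    return y[start:], np.column_stack(cols)

# Illustrative data: a price series plus stand-ins for its determinants.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # e.g. production, stock, soybean price
y = np.cumsum(rng.normal(size=200)) + X[:, 0]

y_t, Z = ardl_design(y, X, p=2, q=1)
beta, *_ = np.linalg.lstsq(Z, y_t, rcond=None)
one_step = Z[-1] @ beta                            # in-sample one-step-ahead fitted value
print(np.round(beta, 3), round(one_step, 3))
```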
A penalized framework for distributed lag non-linear models.
Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G
2017-09-01
Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
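To make the cross-basis idea concrete, here is a minimal unpenalized sketch (simple polynomial bases in both the exposure and lag dimensions, fit as a Poisson GLM; the penalized-spline GAM machinery described in the paper is not reproduced here, and the data are synthetic):

```python
import numpy as np
import statsmodels.api as sm

def cross_basis(x, max_lag, deg_x=2, deg_lag=2):
    """DLNM-style cross-basis: polynomial basis in exposure crossed with polynomial basis in lag."""
    T = len(x)
    lagged = np.column_stack([x[max_lag - l:T - l] for l in range(max_lag + 1)])  # x_{t-l}
    lags = np.arange(max_lag + 1)
    cols = []
    for i in range(1, deg_x + 1):            # exposure basis: x, x^2, ...
        for j in range(deg_lag + 1):         # lag basis: 1, l, l^2, ...
            cols.append((lagged**i) @ (lags**j))   # sum over lags of x_{t-l}^i * l^j
    return np.column_stack(cols)

rng = np.random.default_rng(0)
temp = 20 + 5*np.sin(np.arange(400)/58.0) + rng.normal(0, 1, 400)   # illustrative exposure series
counts = rng.poisson(lam=np.exp(1.5 + 0.02*temp))                   # illustrative daily counts

B = cross_basis(temp, max_lag=7)
X = sm.add_constant((B - B.mean(axis=0)) / B.std(axis=0))           # standardize for stability
y = counts[7:]                                                      # align outcome with the basis
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params.round(4))
```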
Lags in the response of mountain plant communities to climate change
Alexander, Jake M.; Chalmandrier, Loïc; Lenoir, Jonathan; Burgess, Treena I.; Essl, Franz; Haider, Sylvia; Kueffer, Christoph; McDougall, Keith; Milbau, Ann; Nuñez, Martin A.; Pauchard, Aníbal; Rabitsch, Wolfgang; Rew, Lisa J.; Sanders, Nathan J.; Pellissier, Loïc
2018-01-01
Rapid climatic changes and increasing human influence at high elevations around the world will have profound impacts on mountain biodiversity. However, forecasts from statistical models (e.g. species distribution models) rarely consider that plant community changes could substantially lag behind climatic changes, hindering our ability to make temporally realistic projections for the coming century. Indeed, the magnitudes of lags, and the relative importance of the different factors giving rise to them, remain poorly understood. We review evidence for three types of lag: “dispersal lags” affecting plant species’ spread along elevational gradients, “establishment lags” following their arrival in recipient communities, and “extinction lags” of resident species. Variation in lags is explained by variation among species in physiological and demographic responses, by effects of altered biotic interactions, and by aspects of the physical environment. Of these, altered biotic interactions could contribute substantially to establishment and extinction lags, yet impacts of biotic interactions on range dynamics are poorly understood. We develop a mechanistic community model to illustrate how species turnover in future communities might lag behind simple expectations based on species’ range shifts with unlimited dispersal. The model shows a combined contribution of altered biotic interactions and dispersal lags to plant community turnover along an elevational gradient following climate warming. Our review and simulation support the view that accounting for disequilibrium range dynamics will be essential for realistic forecasts of patterns of biodiversity under climate change, with implications for the conservation of mountain species and the ecosystem functions they provide. PMID:29112781
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroon, John J.; Becker, Peter A., E-mail: jkroon@gmu.edu, E-mail: pbecker@gmu.edu
2016-04-20
Many accreting black holes manifest time lags during outbursts, in which the hard Fourier component typically lags behind the soft component. Despite decades of observations of this phenomenon, the underlying physical explanation for the time lags has remained elusive, although there are suggestions that Compton reverberation plays an important role. However, the lack of analytical solutions has hindered the interpretation of the available data. In this paper, we investigate the generation of X-ray time lags in Compton scattering coronae using a new mathematical approach based on analysis of the Fourier-transformed transport equation. By solving this equation, we obtain the Fourier transform of the radiation Green's function, which allows us to calculate the exact dependence of the time lags on the Fourier frequency, for both homogeneous and inhomogeneous coronal clouds. We use the new formalism to explore a variety of injection scenarios, including both monochromatic and broadband (bremsstrahlung) seed photon injection. We show that our model can successfully reproduce both the observed time lags and the time-averaged (quiescent) X-ray spectra for Cyg X-1 and GX 339-04, using a single set of coronal parameters for each source. The time lags are the result of impulsive bremsstrahlung injection occurring near the outer edge of the corona, while the time-averaged spectra are the result of continual distributed injection of soft photons throughout the cloud.
NASA Astrophysics Data System (ADS)
Ickert, R. B.; Mundil, R.
2012-12-01
Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, and the age of eruption and of deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (time between crystallization and eruption) can be high and is variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of the deposition of the host rock. However, both the extraction of that age and the assignment of a meaningful uncertainty are not straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for the determination of the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect, if any, of the sampling distribution itself). This method has a number of desirable characteristics: it can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For this distribution, the minimum value is calculated. This procedure is repeated many times (>1000) and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show the advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age: as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75, A geochronological toolkit for Microsoft Excel; Berkeley Geochronology Center Special Publication no. 5.
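The resampling procedure described above is straightforward to sketch (illustrative synthetic dates and uncertainties; this is not the Isoplot implementation):

```python
import numpy as np

def bootstrap_minimum_age(ages, sigmas, n_boot=5000, ci=95, seed=0):
    """Distribution of the minimum age: resample dates with replacement, perturb each
    draw by its Gaussian 1-sigma uncertainty, and keep the minimum of every resample."""
    rng = np.random.default_rng(seed)
    ages, sigmas = np.asarray(ages, float), np.asarray(sigmas, float)
    idx = rng.integers(0, len(ages), size=(n_boot, len(ages)))    # resample with replacement
    draws = rng.normal(ages[idx], sigmas[idx])                    # perturb by measurement error
    minima = draws.min(axis=1)
    lo, hi = np.percentile(minima, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return minima.mean(), (lo, hi)

# Illustrative zircon dates (Ma) and 1-sigma uncertainties.
dates = [252.4, 252.9, 253.1, 253.3, 254.0, 255.2]
sigma = [0.3, 0.4, 0.3, 0.5, 0.4, 0.6]
mean_min, (lo, hi) = bootstrap_minimum_age(dates, sigma)
print(f"minimum age ~ {mean_min:.2f} Ma, 95% CI [{lo:.2f}, {hi:.2f}] Ma")
```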
Simulating Multivariate Nonnormal Data Using an Iterative Algorithm
ERIC Educational Resources Information Center
Ruscio, John; Kaczetow, Walter
2008-01-01
Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…
Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing
2014-10-01
Four families of orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials, and Numerical polynomials. They are all orthogonal over the full unit square domain. The 2D Chebyshev polynomials are defined as products of Chebyshev polynomials in the x and y variables, as are the 2D Legendre polynomials. The Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. The Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomial families through theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three families because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
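As a small illustration of modal wavefront fitting over a square aperture (using only the 2D Chebyshev product basis; the Zernike-square and numerically orthogonalized bases from the paper are not reproduced here, and the wavefront is synthetic):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample grid over the unit square [-1, 1] x [-1, 1].
y, x = np.mgrid[-1:1:41j, -1:1:41j]

# Illustrative "measured" wavefront: tilt plus an astigmatism-like term plus noise.
w = 0.8*x + 0.3*(x**2 - y**2) + 0.01*np.random.default_rng(0).normal(size=x.shape)

# Design matrix of 2D Chebyshev products T_i(x) T_j(y) up to degree 3 in each variable.
V = C.chebvander2d(x.ravel(), y.ravel(), [3, 3])

coeffs, *_ = np.linalg.lstsq(V, w.ravel(), rcond=None)   # modal least-squares fit
recon = (V @ coeffs).reshape(x.shape)
print("rms residual:", np.sqrt(np.mean((recon - w)**2)))
```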
NASA Astrophysics Data System (ADS)
Bizyaev, D.; Walterbos, R. A. M.; Yoachim, P.; Riffel, R. A.; Fernández-Trincado, J. G.; Pan, K.; Diamond-Stanic, A. M.; Jones, A.; Thomas, D.; Cleary, J.; Brinkmann, J.
2017-04-01
We present a study of the kinematics of the extraplanar ionized gas around several dozen galaxies observed by the Mapping of Nearby Galaxies at the Apache Point Observatory (MaNGA) survey. We considered a sample of 67 edge-on galaxies out of more than 1400 extragalactic targets observed by MaNGA, in which we found 25 galaxies (or 37%) with regular lagging of the rotation curve at large distances from the galactic midplane. We model the observed Hα emission velocity fields in the galaxies, taking projection effects and a simple model for the dust extinction into account. We show that the vertical lag of the rotation curve is necessary in the modeling, and estimate the lag amplitude in the galaxies. We find no correlation between the lag and the star formation rate in the galaxies. At the same time, we report a correlation between the lag and the galactic stellar mass, central stellar velocity dispersion, and axial ratio of the light distribution. These correlations suggest a possible higher ratio of infalling-to-local gas in early-type disk galaxies or a connection between lags and the possible presence of hot gaseous halos, which may be more prevalent in more massive galaxies. These results again demonstrate that observations of extraplanar gas can serve as a potential probe for accretion of gas.
[Influence of humidex on incidence of bacillary dysentery in Hefei: a time-series study].
Zhang, H; Zhao, K F; He, R X; Zhao, D S; Xie, M Y; Wang, S S; Bai, L J; Cheng, Q; Zhang, Y W; Su, H
2017-11-10
Objective: To investigate the effect of humidex, which combines mean temperature and relative humidity, on the incidence of bacillary dysentery in Hefei. Methods: Daily counts of bacillary dysentery cases and weather data in Hefei were collected from January 1, 2006 to December 31, 2013. The humidex was then calculated from temperature and relative humidity. A Poisson generalized linear regression combined with a distributed lag non-linear model was applied to analyze the relationship between humidex and the incidence of bacillary dysentery, after adjusting for long-term and seasonal trends, day of week, and other weather confounders. Stratified analyses by gender, age, and address were also conducted. Results: The risk of bacillary dysentery increased with rising humidex. The adverse effect of high humidex (90th percentile of humidex) appeared at a lag of 2 days and was largest at a lag of 4 days (RR = 1.063, 95% CI: 1.037-1.090). Subgroup analyses indicated that all groups were affected by high humidex at lags of 2-5 days. Conclusion: High humidex could significantly increase the risk of bacillary dysentery, and lagged effects were observed.
NASA Astrophysics Data System (ADS)
Zhang, Feifei; Ding, Guoyong; Liu, Zhidong; Zhang, Caixia; Jiang, Baofa
2016-12-01
This study examined the relationship between daily morbidity of bacillary dysentery and the 2007 flood in Zibo City, China, using a symmetric bidirectional case-crossover study. Odds ratios (ORs) and 95% confidence intervals (CIs) based on a multivariate model and stratified analysis at different lag days were calculated to estimate the risk of flood on bacillary dysentery. A total of 902 notified bacillary dysentery cases were identified during the study period. The median age of the cases was 7 years, with the distribution biased toward children. Multivariable analysis showed that flood was associated with an increased risk of bacillary dysentery, with the largest OR of 1.849 (95% CI 1.229-2.780) at a 2-day lag. Gender-specific analysis showed that there was a significant association between flood and bacillary dysentery among males only (ORs > 1 from lag 1 to lag 5), with the strongest lagged effect at a 2-day lag (OR = 2.820, 95% CI 1.629-4.881), and age-specific analysis indicated that youngsters had a slightly larger risk of developing flood-related bacillary dysentery than older people, and at a shorter lag (OR = 2.000, 95% CI 1.128-3.546 in youngsters at lag 2; OR = 1.879, 95% CI 1.069-3.305 in older people at lag 3). Our study has confirmed that there is a positive association between flood and the risk of bacillary dysentery in the selected study area. Males and youngsters may be the vulnerable, high-risk populations for developing flood-related bacillary dysentery. Results from this study will provide recommendations for strategies the government can use to deal with negative health outcomes due to floods.
Poly-Frobenius-Euler polynomials
NASA Astrophysics Data System (ADS)
Kurt, Burak
2017-07-01
Hamahata [3] defined the poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define the poly-Frobenius-Euler polynomials. We give some relations for these polynomials. We also prove relationships between the poly-Frobenius-Euler polynomials and the Stirling numbers of the second kind.
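For orientation, the classical Frobenius-Euler polynomials H_n(x | u) that such constructions generalize are given by the standard generating function (this is background, not a formula taken from the paper; the "poly-" variants modify this kind of kernel using the polylogarithm, in analogy with the poly-Bernoulli and poly-Euler families):

```latex
\[
  \frac{1-u}{e^{t}-u}\, e^{xt}
  \;=\;
  \sum_{n=0}^{\infty} H_{n}(x \mid u)\, \frac{t^{n}}{n!},
  \qquad u \neq 1,
\]
```

so that u = -1 recovers the ordinary Euler polynomials E_n(x).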
NASA Astrophysics Data System (ADS)
Acharya, S.; Adam, J.; Adamová, D.; Adolfsson, J.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, N.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Al-Turany, M.; Alam, S. N.; Alba, J. L. B.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altenkamper, L.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andreou, D.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Batigne, G.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, A.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Boca, G.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonomi, G.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Bratrud, L.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Capon, A. A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cerello, P.; Chandra, S.; Chang, B.; Chapeland, S.; Chartier, M.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Chojnacki, M.; Choudhury, S.; Chowdhury, T.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Concas, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Costanza, S.; Crkovská, J.; Crochet, P.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; de Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; di Bari, D.; di Mauro, A.; di Nezza, P.; di Ruzza, B.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Doremalen, L. V. R.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dukhishyam, M.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Fernández Téllez, A.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Ganoti, P.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Gay Ducati, M. B.; Germain, M.; Ghosh, J.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Haque, M. R.; Harris, J. W.; Harton, A.; Hassan, H.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Hernandez, E. G.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hills, C.; Hippolyte, B.; Hladky, J.; Hohlweger, B.; Horak, D.; Hornung, S.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Iga Buitron, S. A.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovsky, J.; Jaelani, S.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jercic, M.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karczmarczyk, P.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Ketzer, B.; Khabanova, Z.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. M.; Kileng, B.; Kim, B.; Kim, D.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Köhler, M. K.; Kollegger, T.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Konyushikhin, M.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Kreis, L.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lai, Y. 
S.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, X.; Lien, J.; Lietava, R.; Lim, B.; Lindal, S.; Lindenstruth, V.; Lindsay, S. W.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Llope, W. J.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Luhder, J. R.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martinez, J. A. L.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Masciocchi, S.; Masera, M.; Masoni, A.; Masson, E.; Mastroserio, A.; Mathis, A. M.; Matuoka, P. F. T.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mihaylov, D. L.; Mikhaylov, K.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Mohisin Khan, M.; Moreira de Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Myrcha, J. W.; Nag, D.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Narayan, A.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao de Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Parmar, S.; Passfeld, A.; Pathak, S. P.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. G.; Pereira da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pliquett, F.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. 
F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Rokita, P. S.; Ronchetti, F.; Rosas, E. D.; Rosnet, P.; Rossi, A.; Rotondi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rueda, O. V.; Rui, R.; Rumyantsev, B.; Rustamov, A.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Saha, S. K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Schaefer, B.; Scharenberg, R. P.; Scheid, H. S.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schmidt, N. V.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shahoyan, R.; Shaikh, W.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silaeva, S.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stocco, D.; Storetvedt, M. M.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thakur, S.; Thomas, D.; Thoresen, F.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Torres, S. R.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Tropp, L.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wenzel, S. C.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Willsher, E.; Windelband, B.; Witt, W. E.; Yalcin, S.; Yamakawa, K.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. 
H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zmeskal, J.; Zou, S.; Alice Collaboration
2018-06-01
First results on the longitudinal asymmetry and its effect on the pseudorapidity distributions in Pb-Pb collisions at √(s_NN) = 2.76 TeV at the Large Hadron Collider are obtained with the ALICE detector. The longitudinal asymmetry arises because of an unequal number of participating nucleons from the two colliding nuclei, and is estimated for each event by measuring the energy in the forward neutron Zero-Degree Calorimeters (ZNs). The effect of the longitudinal asymmetry is measured on the pseudorapidity distributions of charged particles in the regions |η| < 0.9, 2.8 < η < 5.1 and -3.7 < η < -1.7 by taking the ratio of the pseudorapidity distributions from events corresponding to different regions of asymmetry. The coefficients of a polynomial fit to the ratio characterise the effect of the asymmetry. A Monte Carlo simulation using a Glauber model for the colliding nuclei is tuned to reproduce the spectrum in the ZNs and provides a relation between the measurable longitudinal asymmetry and the shift in the rapidity (y_0) of the participant zone formed by the unequal number of participating nucleons. The dependence of the coefficient of the linear term in the polynomial expansion, c_1, on the mean value of y_0 is investigated.
Squeezing in a 2-D generalized oscillator
NASA Technical Reports Server (NTRS)
Castanos, Octavio; Lopez-Pena, Ramon; Manko, Vladimir I.
1994-01-01
A two-dimensional generalized oscillator with time-dependent parameters is considered to study two-mode squeezing phenomena. Specific choices of the parameters are used to determine the dispersion matrix and analytic expressions, in terms of standard Hermite polynomials, for the wavefunctions and photon distributions.
Finnbjornsdottir, Ragnhildur Gudrun; Carlsen, Hanne Krage; Thorsteinsson, Throstur; Oudin, Anna; Lund, Sigrun Helga; Gislason, Thorarinn; Rafnsson, Vilhjalmur
2016-01-01
Background The adverse health effects of high concentrations of hydrogen sulfide (H2S) exposure are well known, though the possible effects of low concentrations have not been thoroughly studied. The aim was to study short-term associations between modelled ambient low-level concentrations of intermittent hydrogen sulfide (H2S) and emergency hospital visits with heart diseases (HD), respiratory diseases, and stroke as primary diagnosis. Methods The study is population-based, using data from patient and population registers from the only acute care institution in the Reykjavik capital area, between 1 January, 2007 and 30 June, 2014. The study population was individuals (≥18 yr) living in the Reykjavik capital area. The H2S emission originates from a geothermal power plant in the vicinity. A model was used to estimate H2S exposure in different sections of the area. A generalized linear model assuming a Poisson distribution was used to investigate the association between emergency hospital visits and H2S exposure. Distributed lag models were adjusted for seasonality, gender, age, traffic zones, and other relevant factors. Lag days from 0 to 4 were considered. Results The total number of emergency hospital visits was 32961, with a mean age of 70 years. In fully adjusted un-stratified models, H2S concentrations exceeding 7.00 μg/m3 were associated with increases in emergency hospital visits with HD as primary diagnosis at lag 0 (risk ratio (RR): 1.067; 95% confidence interval (CI): 1.024–1.111), lag 2 (RR: 1.049; 95% CI: 1.005–1.095), and lag 4 (RR: 1.046; 95% CI: 1.004–1.089). Among males, an association was found between H2S concentrations exceeding 7.00 μg/m3 and HD at lag 0 (RR: 1.087; 95% CI: 1.032–1.146) and lag 4 (RR: 1.080; 95% CI: 1.025–1.138); and among those 73 years and older at lag 0 (RR: 1.075; 95% CI: 1.014–1.140) and lag 3 (RR: 1.072; 95% CI: 1.009–1.139). No associations were found with other diseases. Conclusions The study showed an association between emergency hospital visits with HD as primary diagnosis and same-day H2S concentrations exceeding 7.00 μg/m3, more pronounced among males and those 73 years and older than among females and younger individuals. PMID:27218467
Towards a model of pion generalized parton distributions from Dyson-Schwinger equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moutarde, H.
2015-04-10
We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor or parton distribution function data.
Staley, James R; Burgess, Stephen
2017-05-01
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
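A highly simplified sketch of the stratified approach (strata formed directly on the exposure, per-stratum Wald ratios as LACE estimates, then a quadratic meta-regression on stratum mean exposures; the paper's residual-exposure stratification, weighting, and full fractional-polynomial search are omitted, and all data and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
g = rng.binomial(2, 0.3, n)                       # genetic instrument (allele count)
x = 0.4*g + rng.normal(0, 1, n)                   # exposure influenced by the instrument
y = 0.05*x**2 + rng.normal(0, 1, n)               # non-linear exposure-outcome relationship

strata = np.quantile(x, np.linspace(0, 1, 11))    # 10 exposure strata
mids, lace = [], []
for lo, hi in zip(strata[:-1], strata[1:]):
    m = (x >= lo) & (x <= hi)
    bgx = np.polyfit(g[m], x[m], 1)[0]            # instrument-exposure association
    bgy = np.polyfit(g[m], y[m], 1)[0]            # instrument-outcome association
    lace.append(bgy / bgx)                        # Wald ratio = localized average causal effect
    mids.append(x[m].mean())

# Meta-regression of the LACE estimates on a quadratic in the stratum mean exposure.
print(np.polyfit(mids, lace, 2))
```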
Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles
NASA Astrophysics Data System (ADS)
Dennis, S.
2016-02-01
Most significant ocean acoustic propagation occurs over tens of kilometers, at scales that are small compared to basin scales and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example on transmission loss (TL) probability density functions (PDFs) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was sponsored by the Office of Naval Research.
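The non-intrusive projection step can be illustrated in a single stochastic dimension; the quantity of interest below is a made-up stand-in for TL evaluated along one Karhunen-Loève mode, and the Smolyak sparse-grid machinery for many dimensions is omitted.

```python
from math import factorial
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Quantity of interest as a function of one standard-normal germ xi
# (a hypothetical stand-in for TL driven by a single sound-speed mode).
def qoi(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Probabilists' Gauss-Hermite quadrature; normalize weights to a standard normal measure
nodes, weights = hermegauss(12)
weights = weights / np.sqrt(2 * np.pi)

# Non-intrusive projection: c_k = E[qoi(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
order = 6
coeffs = np.array([
    np.sum(weights * qoi(nodes) * hermeval(nodes, np.eye(order + 1)[k])) / factorial(k)
    for k in range(order + 1)
])

# Mean and variance of the QoI follow directly from the PC coefficients
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * np.array([factorial(k) for k in range(1, order + 1)]))
print(mean, var)
```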
Gunda, Resign; Chimbari, Moses John; Shamu, Shepherd; Sartorius, Benn; Mukaratirwa, Samson
2017-09-30
Malaria is a public health problem in Zimbabwe. Although many studies have indicated that climate change may influence the distribution of malaria, there is a paucity of information on its trends and association with climatic variables in Zimbabwe. To address this shortfall, the trends of malaria incidence and its interaction with climatic variables in rural Gwanda, Zimbabwe for the period January 2005 to April 2015 were assessed. Retrospective data analysis of reported cases of malaria in three selected Gwanda district rural wards (Buvuma, Ntalale and Selonga) was carried out. Data on malaria cases were collected from the district health information system and ward clinics, while data on precipitation and temperature were obtained from the climate hazards group infrared precipitation with station data (CHIRPS) database and the moderate resolution imaging spectro-radiometer (MODIS) satellite data, respectively. Distributed lag non-linear models (DLNMs) were used to determine the temporal lagged association between monthly malaria incidence and monthly climatic variables. There were 246 confirmed malaria cases in the three wards with a mean incidence of 0.16/1000 population/month. The majority of malaria cases (95%) occurred in the > 5 years age category. The results showed no correlation between trends of clinical malaria (unconfirmed) and confirmed malaria cases in all the three study wards. There was a significant association between malaria incidence and the climatic variables in Buvuma and Selonga wards at specific lag periods. In Ntalale ward, only precipitation (1- and 3-month lag) and mean temperature (1- and 2-month lag) were significantly associated with incidence at specific lag periods (p < 0.05). DLNM results suggest a key risk period in the current month, based on key climatic conditions in the 1-4 month period prior. As the period of high malaria risk is associated with precipitation and temperature at 1-4 months prior in a seasonal cycle, intensifying malaria control activities over this period will likely contribute to lowering the seasonal malaria incidence.
Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos
2001-09-11
Only fragmentary text is recoverable from this record. It references the Askey scheme, represented as a tree structure in Figure 1 of the report, which classifies the hypergeometric orthogonal polynomials associated with the generalized polynomial chaos, citing the memoir on basic hypergeometric polynomials that generalize the Jacobi polynomials (Memoirs Amer. Math. Soc.).
Detecting changes in the spatial distribution of nitrate contamination in ground water
Liu, Z.-J.; Hallberg, G.R.; Zimmerman, D.L.; Libra, R.D.
1997-01-01
Many studies of ground water pollution in general, and nitrate contamination in particular, have relied on a one-time investigation, tracking of individual wells, or aggregate summaries. Studies of changes in the spatial distribution of contaminants over time are lacking. This paper presents a method to compare spatial distributions for possible changes over time. The large-scale spatial distribution at a given time can be considered as a surface over the area (a trend surface). The changes in spatial distribution from period to period can be revealed by differences in the shape and/or height of the surfaces. If such a surface is described by a polynomial function, changes in surfaces can be detected by testing statistically for differences in their corresponding polynomial functions. This method was applied to nitrate concentrations in a population of wells in an agricultural drainage basin in Iowa, sampled in three different years. For the period 1981-1992, the large-scale spatial distribution of nitrate concentration did not show a significant change in the shape of the spatial surfaces, while the magnitude of nitrate concentration in the basin, or the height of the computed surfaces, showed significant fluctuations. The change in magnitude of nitrate concentration is closely related to climatic variations, especially in precipitation. The lack of change in the shape of the spatial surfaces means that either the influence of land use/nitrogen management was overshadowed by climatic influence, or the changes in land use/management occurred in a random fashion.
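The surface-comparison test can be sketched as follows with synthetic data: fit a second-order trend surface to two sampling periods and use an extra-sum-of-squares F test of whether the surface-shape coefficients differ, allowing only the overall level to shift between periods; all names and values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def design(x, y):
    # Second-order (quadratic) trend surface in easting/northing
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

n = 120
x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
z1 = 5 + 0.4 * x - 0.2 * y + rng.normal(0, 1, n)    # e.g. nitrate, sampling year 1
z2 = 8 + 0.4 * x - 0.2 * y + rng.normal(0, 1, n)    # same surface shape, higher level

# Full model: separate surfaces per year; reduced model: common shape, year-specific intercept
Xf = np.block([[design(x, y), np.zeros((n, 6))], [np.zeros((n, 6)), design(x, y)]])
year = np.r_[np.zeros(n), np.ones(n)]
Xr = np.column_stack([design(np.r_[x, x], np.r_[y, y]), year])
z = np.r_[z1, z2]

def rss(X, z):
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.sum((z - X @ beta) ** 2)

rss_f, rss_r = rss(Xf, z), rss(Xr, z)
df_num, df_den = Xf.shape[1] - Xr.shape[1], 2 * n - Xf.shape[1]
F = ((rss_r - rss_f) / df_num) / (rss_f / df_den)
# A large p-value is consistent with no change in surface shape, only in level
print(F, 1 - stats.f.cdf(F, df_num, df_den))
```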
Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C
2008-03-01
The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (all available environmental variables vs. four subsets, and including their polynomials vs. not). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.
Consensus seeking in a network of discrete-time linear agents with communication noises
NASA Astrophysics Data System (ADS)
Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhou, Chao; Wang, Ming
2015-07-01
This paper studies the mean square consensus of discrete-time linear time-invariant multi-agent systems with communication noises. A distributed consensus protocol, which is composed of the agent's own state feedback and the relative states between the agent and its neighbours, is proposed. A time-varying consensus gain a[k] is applied to attenuate the effect of the noises inherent in the inaccurate measurement of relative states with neighbours. A polynomial, namely the 'parameter polynomial', is constructed, whose coefficients are the parameters in the feedback gain vector of the proposed protocol. It turns out that the parameter polynomial plays an important role in guaranteeing the consensus of linear multi-agent systems. By the proposed protocol, necessary and sufficient conditions for mean square consensus are presented under different topology conditions: (1) if the communication topology graph has a spanning tree and every node in the graph has at least one parent node, then mean square consensus can be achieved if and only if Σ_{k=0}^∞ a[k] = ∞, Σ_{k=0}^∞ a²[k] < ∞, and all roots of the parameter polynomial are in the unit circle; (2) if the communication topology graph has a spanning tree and there exists one node without any parent node (the leader-follower case), then mean square consensus can be achieved if and only if Σ_{k=0}^∞ a[k] = ∞, lim_{k→∞} a[k] = 0, and all roots of the parameter polynomial are in the unit circle; (3) if the communication topology graph does not have a spanning tree, then mean square consensus can never be achieved. Finally, one simulation example on a multiple aircraft system is provided to validate the theoretical analysis.
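A small simulation in the spirit of case (1) above, using the simplest linear agents (single integrators), a ring topology, a lumped per-agent noise term, and the gain a[k] = 1/(k+1), which satisfies both summability conditions; this is an assumed toy setup, not the paper's general model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Undirected ring of 5 agents (contains a spanning tree); graph Laplacian
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
L = np.diag(A.sum(axis=1)) - A

x = rng.normal(0, 5, 5)              # initial states
sigma = 0.5                          # std of the communication noise
for k in range(20000):
    a_k = 1.0 / (k + 1)              # sum a[k] = inf, sum a[k]^2 < inf
    noisy_rel = L @ x + sigma * rng.normal(size=5)   # noisy relative-state measurement (lumped)
    x = x - a_k * noisy_rel          # single-integrator consensus update
print(np.round(x, 3), np.ptp(x))     # states cluster around a common (random) value
```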
NASA Astrophysics Data System (ADS)
Cieplak, Agnieszka; Slosar, Anze
2017-01-01
The Lyman-alpha forest has become a powerful cosmological probe of the underlying matter distribution at high redshift. It is a highly non-linear field with much information present beyond the two-point statistics of the power spectrum. The flux probability distribution function (PDF) in particular has been used as a successful probe of small-scale physics. In addition to the cosmological evolution however, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possible biased estimators. Here we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over the binned PDF as is commonly done. Since the n-th coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. In addition, we use hydrodynamic cosmological simulations to demonstrate that in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a finite small number of well-measured quantities.
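The moment-to-coefficient connection is easy to verify numerically: after rescaling the flux to [-1, 1], the n-th expansion coefficient is proportional to the expectation of the n-th Legendre polynomial, hence a fixed linear combination of the first n moments. The data below are a synthetic, beta-distributed stand-in for the transmitted flux, not simulation output.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)
flux = rng.beta(4, 2, 200_000)             # hypothetical transmitted flux values in [0, 1]
x = 2 * flux - 1                           # rescale to the Legendre interval [-1, 1]

# PDF expansion p(x) = sum_n c_n P_n(x), with c_n = (2n+1)/2 * E[P_n(x)];
# each E[P_n(x)] is a fixed linear combination of the first n moments of x.
nmax = 8
coeffs = np.array([(2 * n + 1) / 2 * np.mean(legendre.legval(x, np.eye(nmax + 1)[n]))
                   for n in range(nmax + 1)])

# Compare the truncated Legendre series against a histogram estimate of the PDF
hist, edges = np.histogram(x, bins=50, range=(-1, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.round(coeffs[:4], 3))
print(np.max(np.abs(legendre.legval(centers, coeffs) - hist)))
```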
High-order regularization in lattice-Boltzmann equations
NASA Astrophysics Data System (ADS)
Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.
2017-04-01
A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.
2007-08-01
Only a fragment of this record is recoverable. It describes an airborne magnetometry system: GPS antennas and a fluxgate magnetometer mounted in the forward assembly compensate for the magnetic signature of the aircraft; data are recorded digitally on the ORAGS™ console inside the helicopter in a binary format; the magnetometers are sampled at a 1200-Hz rate; and accurate positioning requires correcting for the time lags among the magnetometer, fluxgate, and GPS signals.
Extending the Distributed Lag Model framework to handle chemical mixtures.
Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris
2017-07-01
Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
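A minimal sketch of the two ingredients combined above, a distributed-lag design matrix and a tree ensemble fitted to it, is shown below using scikit-learn; this is not the authors' Lagged WQS or Tree-based DLM implementation, and the data, critical window and names are invented.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n, max_lag = 400, 6
exposure = pd.Series(rng.gamma(2.0, 1.0, n))          # one pollutant observed over time

# Distributed-lag design: one column per lag of the exposure
lagged = pd.concat({f"lag{l}": exposure.shift(l) for l in range(max_lag + 1)}, axis=1)

# Outcome depends non-linearly on lags 2-4 (a hypothetical "critical window")
y = (np.sqrt(exposure.shift(2).fillna(0))
     + 0.5 * exposure.shift(3).fillna(0)
     + 0.25 * exposure.shift(4).fillna(0)
     + rng.normal(0, 0.3, n))

data = lagged.dropna()
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(data, y[data.index])
# Feature importances play the role of a lag profile: which lags matter most
print(dict(zip(data.columns, rf.feature_importances_.round(3))))
```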
Estimation of genetic parameters related to eggshell strength using random regression models.
Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K
2015-01-01
This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (>0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRMs suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kadoch, Benjamin; Bos, Wouter
2017-11-01
The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed. Flow dependent and geometry dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence with and without confinement illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow disentangling geometry dependent and flow independent features. Finally, we also analyze trajectories of football players, which are, in general, not randomly spaced on a field.
The Shock and Vibration Digest, Volume 18, Number 3
1986-03-01
Only fragmentary, column-interleaved index text is recoverable from this record. It references the analysis of linear distributed-parameter systems by shifted Legendre polynomial functions, an optimal control problem for a linear distributed-parameter system with eigenvalues specified to avoid failure due to resonance, and an article in J. Sound Vib., 102(2), pp. 247-257.
Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar
2018-02-01
This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vignat, C.; Bercher, J.-F.
The family of Tsallis entropies was introduced by Tsallis in 1988. The Shannon entropy belongs to this family as the limit case q→1. The canonical distributions in R^n that maximize this entropy under a covariance constraint are easily derived as Student-t (q<1) and Student-r (q>1) multivariate distributions. A nice geometrical result about these Student-r distributions is that they are marginals of uniform distributions on a sphere of larger dimension p, with the relationship p = n + 2 + 2/(q−1). As q→1, we recover Poincaré's famous observation according to which a Gaussian vector can be viewed as the projection of a vector uniformly distributed on the infinite-dimensional sphere. A related property in the case q<1 is also available. Often associated with Renyi-Tsallis entropies is the notion of escort distributions. We provide here a geometric interpretation of these distributions. Another result concerns a universal system in physics, the harmonic oscillator: in the usual quantum context, the waveform of the n-th state of the harmonic oscillator is a Gaussian waveform multiplied by the degree-n Hermite polynomial. We show, starting from recent results by Carinena et al., that the quantum harmonic oscillator on spaces with constant curvature is described by maximal Tsallis entropy waveforms multiplied by the extended Hermite polynomials derived from this measure. This gives a neat interpretation of the non-extensive parameter q in terms of the curvature of the space the oscillator evolves on; as q→1, the curvature of the space goes to 0 and we recover the classical harmonic oscillator in R^3.
Numerical solution of transport equation for applications in environmental hydraulics and hydrology
NASA Astrophysics Data System (ADS)
Rashidul Islam, M.; Hanif Chaudhry, M.
1997-04-01
The advective term in the one-dimensional transport equation, when numerically discretized, produces artificial diffusion. To minimize such artificial diffusion, which vanishes only for a Courant number equal to unity, transport owing to advection has been modeled separately. The numerical solution of the advection equation for a Gaussian initial distribution is well established; however, large oscillations are observed when applied to an initial distribution with steep gradients, such as a trapezoidal distribution of a constituent or propagation of mass from a continuous input. In this study, the application of seven finite-difference schemes and one polynomial interpolation scheme is investigated to solve the transport equation for both Gaussian and non-Gaussian (trapezoidal) initial distributions. The results obtained from the numerical schemes are compared with the exact solutions. A constant advective velocity is assumed throughout the transport process. For a Gaussian distribution initial condition, all eight schemes give excellent results, except the Lax scheme which is diffusive. In application to the trapezoidal initial distribution, explicit finite-difference schemes prove to be superior to implicit finite-difference schemes because the latter produce large numerical oscillations near the steep gradients. The Warming-Kutler-Lomax (WKL) explicit scheme is found to be better among this group. The Hermite polynomial interpolation scheme yields the best result for a trapezoidal distribution among all eight schemes investigated. The second-order accurate schemes are sufficiently accurate for most practical problems, but the solution of unusual problems (concentration with steep gradient) requires the application of higher-order (e.g. third- and fourth-order) accurate schemes.
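A compact example in the same spirit: advecting a trapezoidal initial distribution with a first-order upwind scheme, which remains oscillation-free but exhibits the artificial diffusion discussed above whenever the Courant number is below one. The scheme, grid and parameters are illustrative, not those tested in the paper.

```python
import numpy as np

nx, dx, u, courant = 400, 1.0, 1.0, 0.8
dt = courant * dx / u
x = np.arange(nx) * dx

# Trapezoidal initial distribution (steep gradients at both ends)
c = np.interp(x, [50, 70, 130, 150], [0, 1, 1, 0], left=0.0, right=0.0)

nsteps = 200
for _ in range(nsteps):
    # First-order upwind update: no oscillations, but diffusive unless courant == 1
    c[1:] = c[1:] - courant * (c[1:] - c[:-1])
    c[0] = 0.0                                     # zero inflow at the left boundary

# Exact solution is the initial profile translated by u * t
exact = np.interp(x - u * nsteps * dt, [50, 70, 130, 150], [0, 1, 1, 0], left=0.0, right=0.0)
print(np.max(np.abs(c - exact)))                   # error reflects the artificial diffusion
```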
Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors
NASA Astrophysics Data System (ADS)
Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng
2018-01-01
Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, the current prestressing force monitoring methods are not applicable to a box girder, either because the sensor setup is constrained or because the shear lag effect is not properly considered. By combining the measurements with a previous analysis model of the shear lag effect in the box girder, this paper proposes an indirect monitoring method for on-site determination of the prestressing force in a concrete box girder utilizing distributed long-gauge fiber Bragg grating sensors. The performance of this method was initially verified using numerical simulation for three different distribution forms of prestressing tendons. Then, an experiment involving two concrete box girders was conducted to preliminarily study the feasibility of this method under different prestressing levels. The results of both the numerical simulation and the lab experiment validated the practicability of this method in a box girder.
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
Gamma-Ray Burst Intensity Distributions
NASA Technical Reports Server (NTRS)
Band, David L.; Norris, Jay P.; Bonnell, Jerry T.
2004-01-01
We use the lag-luminosity relation to calculate self-consistently the redshifts, apparent peak bolometric luminosities L_B1, and isotropic energies E_iso for a large sample of BATSE bursts. We consider two different forms of the lag-luminosity relation; for both forms the median redshift for our burst database is 1.6. We model the resulting sample of burst energies with power law and Gaussian distributions, both of which are reasonable models. The power law model has an index of a = 1.76 ± 0.05 (95% confidence), as opposed to the index of a = 2 predicted by the simple universal jet profile model; however, reasonable refinements to this model permit much greater flexibility in reconciling predicted and observed energy distributions.
Meta-Regression Approximations to Reduce Publication Selection Bias
ERIC Educational Resources Information Center
Stanley, T. D.; Doucouliagos, Hristos
2014-01-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…
NASA Astrophysics Data System (ADS)
Condemi, Vincenzo; Gestro, Massimo; Dozio, Elena; Tartaglino, Bruno; Corsi Romanelli, Massimiliano Marco; Solimene, Umberto; Meco, Roberto
2015-03-01
The incidence of nephrolithiasis is rising worldwide, especially in women and with increasing age. Incidence and prevalence of kidney stones are affected by genetic, nutritional, and environmental factors. The aim of this study is to investigate the link between various meteorological factors (independent variables) and the daily number of visits to the Emergency Department (ED) of the S. Croce and Carle Hospital of Cuneo for renal colic (RC) and urinary stones (UC) as the dependent variable over the years 2007-2010. Poisson generalized additive models (PGAMs) were used in different progressive ways. The results of the PGAMs (stage 1), adjusted for seasonal and calendar factors, confirmed a significant correlation (p < 0.03) with the thermal parameter. Evaluation of the dose-response effect [PGAMs combined with distributed lag non-linear models (DLNMs), stage 2], expressed in terms of relative risk (RR) and cumulative relative risk (RRC), indicated a significant effect up to 15 days of lag (RR > 1), with a first peak after 5 days (lag ranges 0-1, 0-3, and 0-5) and a second weak peak observed along the 5-15 day lag range. The estimated RR for females was significant, mainly in the second and fourth age groups considered (19-44 and >65 years): RR for total ED visits 1.27, confidence interval (CI) 1.11-1.46 (lag 0-5 days); RR 1.42, CI 1.01-2.01 (lag 0-10 days); and RR 1.35, CI 1.09-1.68 (lag 0-15 days). The research also indicated a moderate involvement of the thermal factor in the onset of RC caused by UC, exclusively in the female sex. Further studies will be necessary to confirm these results.
Sewe, Maquins Odhiambo; Ahlm, Clas; Rocklöv, Joacim
2016-01-01
Malaria is an important cause of morbidity and mortality in malaria endemic countries. The malaria mosquito vectors depend on environmental conditions, such as temperature and rainfall, for reproduction and survival. To investigate the potential for weather-driven early warning systems to prevent disease occurrence, the relationship of the disease to weather conditions needs to be carefully investigated. Where meteorological observations are scarce, satellite-derived products provide new opportunities to study disease patterns depending on remotely sensed variables. In this study, we explored the lagged association of the Normalized Difference Vegetation Index (NDVI), day Land Surface Temperature (LST) and precipitation with malaria mortality in three areas in Western Kenya. The lagged effect of each environmental variable on weekly malaria mortality was modeled using a Distributed Lag Non-Linear Modeling approach. For each variable we constructed a natural spline basis with 3 degrees of freedom for both the lag dimension and the variable. Lag periods up to 12 weeks were considered. The effect of day LST varied between the areas with longer lags. In all three areas, malaria mortality was associated with precipitation. The risk increased with increasing weekly total precipitation above 20 mm, peaking at 80 mm. The NDVI threshold for increased mortality risk was between 0.3 and 0.4 at shorter lags. This study identified lag patterns and associations between remotely sensed environmental factors and malaria mortality in three malaria endemic regions in Western Kenya. Our results show that rainfall has the most consistent predictive pattern for malaria transmission in the endemic study area. The results highlight a potential for the development of locally based early warning forecasts that could potentially reduce the disease burden by enabling timely control actions.
Ambient ozone concentration and emergency department visits for panic attacks.
Cho, Jaelim; Choi, Yoon Jung; Sohn, Jungwoo; Suh, Mina; Cho, Seong-Kyung; Ha, Kyoung Hwa; Kim, Changsoo; Shin, Dong Chun
2015-03-01
The effect of ambient air pollution on panic disorder in the general population has not yet been thoroughly elucidated, although the occurrence of panic disorder in workers exposed to organic solvents has been reported previously. We investigated the association of ambient air pollution with the risk of panic attack-related emergency department visits. Using health insurance claims, we collected data from emergency department visits for panic attacks in Seoul, Republic of Korea (2005-2009). Daily air pollutant concentrations were obtained using automatic monitoring system data. We conducted a time-series study using a generalized additive model with Poisson distribution, which included spline variables (date of visit, daily mean temperature, and relative humidity) and parametric variables (daily mean air pollutant concentration, national holiday, and day of the week). In addition to single lag models (lag1 to lag3), cumulative lag models (lag0-1 to lag0-3) were constructed using moving-average concentrations on the days leading up to the visit. The risk was expressed as relative risk (RR) per one standard deviation of each air pollutant and its 95% confidence interval (95% CI). A total of 2320 emergency department visits for panic attacks were observed during the study period. The adjusted RR of panic attack-related emergency department visits was 1.051 (95% CI, 1.014-1.090) for same-day exposure to ozone. In cumulative models, adjusted RRs were 1.068 (1.029-1.107) in lag0-2 and 1.074 (1.035-1.114) in lag0-3. The ambient ozone concentration was significantly associated with emergency department visits for panic attacks. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Sanz, J. M.
1983-01-01
The method of complex characteristics and hodograph transformation for the design of shockless airfoils was extended to design supercritical cascades with high solidities and large inlet angles. This capability was achieved by introducing a conformal mapping of the hodograph domain onto an ellipse and expanding the solution in terms of Tchebycheff polynomials. A computer code was developed based on this idea. A number of airfoils designed with the code are presented. Various supercritical and subcritical compressor, turbine and propeller sections are shown. The lag-entrainment method for the calculation of a turbulent boundary layer was incorporated into the inviscid design code. The results of this calculation are shown for the airfoils described. The elliptic conformal transformation developed to map the hodograph domain onto an ellipse can be used to generate a conformal grid in the physical domain of a cascade of airfoils with open trailing edges with a single transformation. A grid generated with this transformation is shown for the Korn airfoil.
Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos
2002-07-25
Only fragmentary text is recoverable from this record. It describes a generalized polynomial chaos expansion built from orthogonal polynomial functionals of the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938), solved via a Galerkin projection, with the uncertainties introduced through κ, f, or g, or some combination of them.
Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.
Mahajan, Virendra N
2012-06-20
In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
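The loss of separability can be checked numerically by Gram-Schmidt-orthonormalizing the first few 2D Legendre products over the unit disk; the ordering and the Monte Carlo inner product below are assumptions made for the sketch, not the paper's analytical construction.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(6)

# Monte Carlo samples uniformly distributed over the unit disk (the circular pupil)
r = np.sqrt(rng.uniform(0, 1, 500_000))
th = rng.uniform(0, 2 * np.pi, 500_000)
x, y = r * np.cos(th), r * np.sin(th)

def P(n, t):
    # Legendre polynomial P_n evaluated at t
    return legendre.legval(t, np.eye(n + 1)[n])

# First few 2D Legendre products L_l(x) L_m(y), ordered by total degree
terms = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
basis = np.column_stack([P(l, x) * P(m, y) for l, m in terms])

# Gram-Schmidt with the inner product <f, g> = average of f*g over the disk
ortho = []
for b in basis.T:
    v = b.copy()
    for q in ortho:
        v -= np.mean(v * q) * q
    ortho.append(v / np.sqrt(np.mean(v * v)))
ortho = np.asarray(ortho)

# Orthonormality check: the Gram matrix is numerically the identity
print(np.round(ortho @ ortho.T / ortho.shape[1], 2))

# Non-separability check: the raw P_2(y) product has a non-zero component along the
# orthonormalized x^2-type element, so the resulting y^2-type polynomial mixes x and y.
print(np.round(np.mean(basis[:, 5] * ortho[3]), 3))
```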
Understanding User Behavioral Patterns in Open Knowledge Communities
ERIC Educational Resources Information Center
Yang, Xianmin; Song, Shuqiang; Zhao, Xinshuo; Yu, Shengquan
2018-01-01
Open knowledge communities (OKCs) have become popular in the era of knowledge economy. This study aimed to explore how users collaboratively create and share knowledge in OKCs. In particular, this research identified the behavior distribution and behavioral patterns of users by conducting frequency distribution and lag sequential analyses. Some…
Population trends influence species ability to track climate change
Joel Ralston; William V. DeLuca; Richard E. Feldman; David I. King
2016-01-01
Shifts of distributions have been attributed to species tracking their fundamental climate niches through space. However, several studies have now demonstrated that niche tracking is imperfect, that species' climate niches may vary with population trends, and that geographic distributions may lag behind rapid climate change. These reports of imperfect niche...
Approximating exponential and logarithmic functions using polynomial interpolation
NASA Astrophysics Data System (ADS)
Gordon, Sheldon P.; Yang, Yajun
2017-04-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
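A minimal numerical comparison of the two approaches discussed in the article, with the interval, degree and node placement chosen arbitrarily for illustration:

```python
import math
import numpy as np

xs = np.linspace(0, 1, 1001)

# Degree-3 Taylor polynomial of e^x about x = 0
taylor = sum(xs ** k / math.factorial(k) for k in range(4))

# Degree-3 interpolating polynomial through four equally spaced nodes on [0, 1]
nodes = np.linspace(0, 1, 4)
interp = np.polyval(np.polyfit(nodes, np.exp(nodes), 3), xs)

print("max error, Taylor:       ", np.max(np.abs(np.exp(xs) - taylor)))
print("max error, interpolation:", np.max(np.abs(np.exp(xs) - interp)))
```

Spreading the interpolation nodes across the interval typically yields a much smaller maximum error than expanding about a single point, which is the behaviour the error analysis in the article examines.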
Investigation of Spectral Lag and Epeak as Joint Luminosity Indicators in GRBs
NASA Technical Reports Server (NTRS)
White, Nicholas E. (Technical Monitor); Norris, Jay P.
2003-01-01
Models for gamma-ray bursts which invoke jetted, colliding shells would appear to have at least two determinants for luminosity, e.g., observer viewing angle and Lorentz factor, or possibly shell mass. The latter two internal physical parameters may vary from pulse to pulse within a burst, and such variation might be reflected in evolution of observables such as spectral lag and peak in the spectral energy distribution. We analyze bright BATSE bursts using the 16-channel medium energy resolution (MER) data, with time resolutions of 16 and 64 ms, measuring spectral lags and peak energies for significant pulse structures within a burst, identified using a Bayesian block algorithm. We then explore correlations between the measured parameters and total flux for the individual pulse structures.
Tibayrenc, Pierre; Preziosi-Belloy, Laurence; Ghommidh, Charles
2011-06-01
Interest in bioethanol production has experienced a resurgence in the last few years. Poor temperature control in industrial fermentation tanks exposes the yeast cells used for this production to intermittent heat stress which impairs fermentation efficiency. Therefore, there is a need for yeast strains with improved tolerance, able to recover from such temperature variations. Accordingly, this paper reports the development of methods for the characterization of Saccharomyces cerevisiae growth recovery after a sublethal heat stress. Single-cell measurements were carried out in order to detect cell-to-cell variability. Alcoholic batch fermentations were performed on a defined medium in a 2 l instrumented bioreactor. A rapid temperature shift from 33 to 43 °C was applied when ethanol concentration reached 50 g l⁻¹. Samples were collected at different times after the temperature shift. Single cell growth capability, lag-time and initial growth rate were determined by monitoring the growth of a statistically significant number of cells after agar medium plating. The rapid temperature shift resulted in an immediate arrest of growth and triggered a progressive loss of cultivability from 100 to 0.0001% within 8 h. Heat-injured cells were able to recover their growth capability on agar medium after a lag phase. Lag-time was longer and more widely distributed as the time of heat exposure increased. Thus, lag-time distribution gives an insight into strain sensitivity to heat-stress, and could be helpful for the selection of yeast strains of technological interest.
Guynot, M E; Marín, S; Sanchis, V; Ramos, A J
2003-10-01
A sponge cake analog was used to study the influence of pH, water activity (aw), and carbon dioxide (CO2) levels on the growth of seven fungal species commonly causing bakery product spoilage (Eurotium amstelodami, Eurotium herbariorum, Eurotium repens, Eurotium rubrum, Aspergillus niger, Aspergillus flavus, and Penicillium corylophilum). A full factorial design was used. Water activity, CO2, and their interaction were the main factors significantly affecting fungal growth. Water activity at levels of 0.80 to 0.90 had a significant influence on fungal growth and determined the concentration of CO2 needed to prevent cake analog spoilage. At an aw level of 0.85, lag phases increased twofold when the level of CO2 in the headspace increased from 0 to 70%. In general, no fungal growth was observed for up to 28 days of incubation at 25 degrees C when samples were packaged with 100% CO2, regardless of the aw level. Partial least squares projection to latent structures regression was used to build a polynomial model to predict sponge cake shelf life on the basis of the lag phases of all seven species tested. The model developed explained quite well (R2 = 79%) the growth of almost all species, which responded similarly to changes in tested factors. The results of this study emphasize the importance of combining several hurdles, such as modified atmosphere packaging, aw, and pH, that have synergistic or additive effects on the inhibition of mold growth.
Dynamic Bidirectional Reflectance Distribution Functions: Measurement and Representation
2008-02-01
Only fragmentary text is recoverable from this record. It notes that other sets of orthogonal functions, such as Zernike polynomials, have also been used to characterize the BRDF and could be included in the harmonic fits, and it cites BRDF measurement work published in Proc. SPIE.
Characterizing the Lyα forest flux probability distribution function using Legendre polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cieplak, Agnieszka M.; Slosar, Anze
The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
NASA Astrophysics Data System (ADS)
Othmani, Cherif; Takali, Farid; Njeh, Anouar
2017-11-01
The propagation of guided Lamb waves in piezoelectric-semiconductor multilayered structures made of AlAs and GaAs is modeled in this paper. The Legendre polynomial method is used to calculate the dispersion curves, frequency spectrum and field distributions of guided Lamb wave propagation modes in AlAs, GaAs, AlAs/GaAs and AlAs/GaAs/AlAs-1/2/1 structures. Formulations are given for an open-circuit surface. The polynomial method is numerically stable with respect to the total number of layers and the frequency range. This analysis is meaningful for applications of piezoelectric-semiconductor multilayered structures made of AlAs and GaAs, such as novel acoustic devices.
Polynomial Monogamy Relations for Entanglement Negativity.
Allen, Grant W; Meyer, David A
2017-02-24
The notion of nonclassical correlations is a powerful contrivance for explaining phenomena exhibited in quantum systems. It is well known, however, that quantum systems are not free to explore arbitrary correlations: the church of the smaller Hilbert space only accepts monogamous congregants. We demonstrate how to characterize the limits of what is quantum mechanically possible with a computable measure, entanglement negativity. We show that negativity only saturates the standard linear monogamy inequality in trivial cases implied by its monotonicity under local operations and classical communication, and derive a necessary and sufficient inequality which, for the first time, is a nonlinear higher degree polynomial. For very large quantum systems, we prove that the negativity can be distributed at least linearly for the tightest constraint and conjecture that it is at most linear.
Equivalences of the multi-indexed orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odake, Satoru
2014-01-15
Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.
Effects of longitudinal asymmetry in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Raniwala, Rashmi; Raniwala, Sudhir; Loizides, Constantin
2018-02-01
In collisions of identical nuclei at a given impact parameter, the number of nucleons participating in the overlap region of each nucleus can be unequal due to nuclear density fluctuations. The asymmetry due to the unequal number of participating nucleons, referred to as longitudinal asymmetry, causes a shift in the center-of-mass rapidity of the participant zone. The information of the event asymmetry allows us to isolate and study the effect of longitudinal asymmetry on rapidity distribution of final state particles. In a Monte Carlo Glauber model the average rapidity shift is found to be almost linearly related to the asymmetry. Using toy models, as well as Monte Carlo data for Pb-Pb collisions at 2.76 TeV generated with hijing, two different versions of ampt and dpmjet models, we demonstrate that the effect of asymmetry on final state rapidity distribution can be quantitatively related to the average rapidity shift via a third-order polynomial with a dominantly linear term. The coefficients of the polynomial are proportional to the rapidity shift with the dependence being sensitive to the details of the rapidity distribution. Experimental estimates of the spectator asymmetry through the measurement of spectator nucleons in a zero-degree calorimeter may hence be used to further constrain the initial conditions in ultra-relativistic heavy-ion collisions.
Modeling exposure–lag–response associations with distributed lag non-linear models
Gasparrini, Antonio
2014-01-01
In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
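The cross-basis construction can be sketched with simple polynomial marginal bases standing in for the spline bases typically used; everything below (data, lag length, basis choices) is an illustrative assumption, and the reference implementation remains the R dlnm package.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, max_lag = 1500, 21
temp = 15 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n)
counts = rng.poisson(10, n)                    # hypothetical daily event counts

# Lagged exposure matrix: column l holds the exposure l days before each observation
lags = np.column_stack([np.r_[[np.nan] * l, temp[: n - l]] for l in range(max_lag + 1)])
valid = ~np.isnan(lags).any(axis=1)

# Marginal bases: a quadratic polynomial in the (centred) exposure and a quadratic
# polynomial in the scaled lag, playing the roles of the two functions of a cross-basis
def var_basis(x):
    return np.stack([x - 15.0, (x - 15.0) ** 2], axis=-1)

l_scaled = np.arange(max_lag + 1) / max_lag
lag_basis = np.stack([np.ones_like(l_scaled), l_scaled, l_scaled ** 2], axis=1)

# Cross-basis column (j, k): sum over lags of var_basis_j(lagged exposure) * lag_basis_k(lag)
vb = var_basis(lags[valid])                                      # n_valid x L x 2
cross = np.einsum("nlj,lk->njk", vb, lag_basis).reshape(valid.sum(), -1)

# The cross-basis then enters an ordinary regression model (here Poisson, as in time series work)
fit = sm.GLM(counts[valid], sm.add_constant(cross), family=sm.families.Poisson()).fit()
print(np.round(fit.params, 4))
```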
Geometric accuracy of LANDSAT-4 MSS image data
NASA Technical Reports Server (NTRS)
Welch, R.; Usery, E. L.
1983-01-01
Analyses of the LANDSAT-4 MSS image data of North Georgia provided by the EDC in CCT-p formats reveal that errors of approximately ±30 m in the raw data can be reduced to about ±55 m based on rectification procedures involving the use of 20 to 30 well-distributed GCPs and 2nd or 3rd degree polynomial equations. Higher order polynomials do not appear to improve the rectification accuracy. A subscene area of 256 x 256 pixels was rectified with a 1st degree polynomial to yield an RMSE_xy value of ±40 m, indicating that USGS 1:24,000 scale quadrangle-sized areas of LANDSAT-4 data can be fitted to a map base with relatively few control points and simple equations. The errors in the rectification process are caused by the spatial resolution of the MSS data, by errors in the maps and GCP digitizing process, and by displacements caused by terrain relief. Overall, due to the improved pointing and attitude control of the spacecraft, the geometric quality of the LANDSAT-4 MSS data appears much improved over that of LANDSATS -1, -2 and -3.
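The rectification step itself amounts to a least-squares fit of low-degree bivariate polynomials mapping image coordinates to map coordinates at the GCPs; the sketch below uses entirely synthetic GCPs with an assumed affine-plus-noise geometry and simply reports the fit RMSE.

```python
import numpy as np

rng = np.random.default_rng(8)

def poly2_terms(c, r):
    # Full 2nd-degree bivariate polynomial in image column/row
    return np.column_stack([np.ones_like(c), c, r, c * r, c**2, r**2])

# Synthetic ground control points: image coordinates and "true" map coordinates
n_gcp = 25
col, row = rng.uniform(0, 3000, n_gcp), rng.uniform(0, 2400, n_gcp)
east = 500_000 + 57.0 * col + 4.0 * row + rng.normal(0, 30, n_gcp)    # ~30 m GCP noise
north = 3_700_000 - 3.5 * col - 79.0 * row + rng.normal(0, 30, n_gcp)

A = poly2_terms(col, row)
coef_e, *_ = np.linalg.lstsq(A, east, rcond=None)
coef_n, *_ = np.linalg.lstsq(A, north, rcond=None)

resid_e, resid_n = east - A @ coef_e, north - A @ coef_n
rmse_xy = np.sqrt(np.mean(resid_e**2 + resid_n**2))
print(f"planimetric RMSE of the fit ~ {rmse_xy:.1f} m")
```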
Luo, Yanxia; Li, Haibin; Huang, Fangfang; Van Halm-Lutterodt, Nicholas; Xu, Qin; Wang, Anxin; Guo, Jin; Tao, Lixin; Li, Xia; Liu, Mengyang; Zheng, Deqiang; Chen, Sipeng; Zhang, Feng; Yang, Xinghua; Tan, Peng; Wang, Wei; Xie, Xueqin; Guo, Xiuhua
2018-01-01
The effects of ambient temperature on stroke deaths in China have been well documented. However, few studies have focused on the attributable burden of incident stroke of different types due to ambient temperature, especially in Beijing, China. We aimed to assess the influence of ambient temperature on hospital stroke admissions in Beijing, China. Data on daily temperature, air pollution, and relative humidity measurements and stroke admissions in Beijing were obtained between 2013 and 2014. A distributed lag non-linear model was employed to determine the association between daily ambient temperature and stroke admissions. Relative risk (RR) with 95% confidence interval (CI) and attributable fraction (AF) with 95% CI were calculated by stroke subtype, gender and age group. A total of 147,624 admitted stroke cases (including hemorrhagic and ischemic stroke) were documented. A non-linear acute effect of cold temperature on ischemic and hemorrhagic stroke hospital admissions was identified. Compared with the 25th percentile of temperature (1.2 °C), the cumulative RR of extreme cold temperature (first percentile of temperature, -9.6 °C) was 1.51 (95% CI: 1.08-2.10) over lag 0-14 days for ischemic stroke and 1.28 (95% CI: 1.03-1.59) for hemorrhagic stroke over lag 0-3 days. Overall, 1.57% (95% CI: 0.06%-2.88%) of ischemic strokes and 1.90% (95% CI: 0.40%-3.41%) of hemorrhagic strokes were attributed to extreme cold temperature over lag 0-7 days and lag 0-3 days, respectively. The impact of cold temperature on stroke admissions was more pronounced in males and in younger people than in females and the elderly. Exposure to extreme cold temperature is associated with increases in both ischemic and hemorrhagic stroke admissions in Beijing, China. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bao, Junzhe; Wang, Zhenkun; Yu, Chuanhua; Li, Xudong
2016-05-04
Global climate change is one of the most serious environmental issues faced by humanity, and the resultant change in frequency and intensity of heat waves and cold spells could increase mortality. The influence of temperature on human health could be immediate or delayed. Latitude, relative humidity, and air pollution may influence the temperature-mortality relationship. We studied the influence of temperature on mortality and its lag effect in four Chinese cities with a range of latitudes over 2008-2011, adjusting for relative humidity and air pollution. We recorded the city-specific distributions of temperature and mortality by month and adopted a Poisson regression model combined with a distributed lag nonlinear model to investigate the lag effect of temperature on mortality. We found that the coldest months in the study area are December through March and the hottest months are June through September. The ratios of deaths during cold months to hot months were 1.43, 1.54, 1.37 and 1.12 for the cities of Wuhan, Changsha, Guilin and Haikou, respectively. The effects of extremely high temperatures generally persisted for 3 days, whereas the risk of extremely low temperatures could persist for 21 days. Compared with the optimum temperature of each city, at a lag of 21 days, the relative risks (95 % confidence interval) of extreme cold temperatures were 4.78 (3.63, 6.29), 2.38 (1.35, 4.19), 2.62 (1.15, 5.95) and 2.62 (1.44, 4.79) for Wuhan, Changsha, Guilin and Haikou, respectively. The respective risks were 1.35 (1.18, 1.55), 1.19 (0.96, 1.48), 1.22 (0.82, 1.82) and 2.47 (1.61, 3.78) for extreme hot temperatures, at a lag of 3 days. Temperature-mortality relationships vary among cities at different latitudes. Local governments should establish regional prevention and protection measures to more effectively confront and adapt to local climate change. The effects of hot temperatures predominantly occur over the short term, whereas those of cold temperatures can persist for an extended number of days.
Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S
2018-02-01
Surface electromyography (sEMG) based muscle fatigue research is widely used in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), the B-distribution (BD) and the extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis function kernels, random forest and rotation forest, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit statistically significant differences between the muscle fatigue and nonfatigue conditions. The largest feature reduction (66%) is achieved by GA and BPSO for the EMBD and BD TFDs, respectively. The combination of EMBD features and a polynomial-kernel SVM is found to be the most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of EMBD features and a polynomial-kernel SVM could be used to detect dynamic muscle fatigue conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
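The classification step can be sketched with scikit-learn's polynomial-kernel SVM; the feature matrix below is a random placeholder for the selected time-frequency features (an illustrative sketch, not the authors' pipeline, which also includes GA/BPSO feature selection):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(104, 12))        # placeholder: 12 TFD features per segment
y = np.repeat([0, 1], 52)             # 0 = non-fatigue, 1 = fatigue segments

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated accuracy
```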
Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw
2011-04-15
Research Highlights: • Physical examples involving exceptional orthogonal polynomials. • Exceptional polynomials as deformations of classical orthogonal polynomials. • Exceptional polynomials from the Darboux-Crum transformation. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials, which start with constant terms, these new polynomials have lowest degree l = 1, 2, ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here enlarge the number of exactly solvable physical systems known so far.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The ranking method was proposed to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
NASA Technical Reports Server (NTRS)
Bhattacharya, K.; Ghil, M.
1979-01-01
A slightly modified version of the one-dimensional time-dependent energy-balance climate model of Ghil and Bhattacharya (1978) is presented. The albedo-temperature parameterization has been reformulated and the smoothing of the temperature distribution in the tropics has been eliminated. The model albedo depends on time-lagged temperature in order to account for finite growth and decay time of continental ice sheets. Two distinct regimes of oscillatory behavior which depend on the value of the albedo-temperature time lag are considered.
Coherent orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es
2013-08-15
We discuss a fundamental characteristic of orthogonal polynomials, namely the existence of a Lie algebra behind them, which can be added to their other relevant aspects. We thus include Lie algebra theory in the complete framework for orthogonal polynomials, in addition to differential equations, recurrence relations, Hilbert spaces and square-integrable functions. We start here from the square-integrable functions on an open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second-order Casimir C gives rise to the second-order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl–Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=−1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials, the Lie algebra is extended both to the whole space of L^2 functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L^2 and, in particular, generalized coherent polynomials are thus obtained. Highlights: • Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. • Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. • 2nd-order Casimir originates a 2nd-order differential equation that defines the corresponding OP family. • Generalized coherent polynomials are obtained from OP.
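For the Hermite case mentioned above, the algebraic structure can be made explicit with the standard ladder operators acting on the Hermite functions ψ_n(x) ∝ e^{-x²/2} H_n(x) (a textbook relation quoted here for orientation, not a result specific to the paper):

$$ a\,\psi_n=\sqrt{n}\,\psi_{n-1},\qquad a^{\dagger}\psi_n=\sqrt{n+1}\,\psi_{n+1},\qquad a=\tfrac{1}{\sqrt{2}}\Bigl(x+\tfrac{d}{dx}\Bigr),\quad a^{\dagger}=\tfrac{1}{\sqrt{2}}\Bigl(x-\tfrac{d}{dx}\Bigr),\quad [a,a^{\dagger}]=1, $$

which realizes the Weyl–Heisenberg algebra h(1) and follows from the differential recurrence relations H_n'(x) = 2n H_{n-1}(x) and H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x).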
NASA Astrophysics Data System (ADS)
Molz, F. J.; Kozubowski, T. J.; Miller, R. S.; Podgorski, K.
2005-12-01
The theory of non-stationary stochastic processes with stationary increments gives rise to stochastic fractals. When such fractals are used to represent measurements of (assumed stationary) physical properties, such as ln(k) increments in sediments or velocity increments "delta(v)" in turbulent flows, the resulting measurements exhibit scaling, either spatial, temporal or both. (In the present context, such scaling refers to systematic changes in the statistical properties of the increment distributions, such as variance, with the lag size over which the increments are determined.) Depending on the class of probability density functions (PDFs) that describe the increment distributions, the resulting stochastic fractals will display different properties. Until recently, the stationary increment process was represented using mainly Gaussian, Gamma or Levy PDFs. However, measurements in both sediments and fluid turbulence indicate that these PDFs are not commonly observed. Based on recent data and previous studies referenced and discussed in Meerschaert et al. (2004) and Molz et al. (2005), the measured increment PDFs display an approximate double exponential (Laplace) shape at smaller lags, and this shape evolves towards Gaussian at larger lags. A model for this behavior based on the Generalized Laplace PDF family called fractional Laplace motion, in analogy with its Gaussian counterpart - fractional Brownian motion, has been suggested (Meerschaert et al., 2004) and the necessary mathematics elaborated (Kozubowski et al., 2005). The resulting stochastic fractal is not a typical self-affine monofractal, but it does exhibit monofractal-like scaling in certain lag size ranges. To date, it has been shown that the Generalized Laplace family fits ln(k) increment distributions and reproduces the original 1941 theory of Kolmogorov when applied to Eulerian turbulent velocity increments. However, to make a physically self-consistent application to turbulence, one must adopt a Lagrangian viewpoint, and the details of this approach are still being developed. The potential analogy between turbulent delta(v) and sediment delta[ln(k)] is intriguing, and perhaps offers insight into the underlying chaotic processes that constitute turbulence and may result also in the pervasive heterogeneity observed in most natural sediments. Properties of the new Laplace fractal are presented, and potential applications to both sediments and fluid turbulence are discussed.
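A hedged numerical illustration of the lag-dependent shape change described above: symmetric generalized Laplace (variance-gamma) increments can be generated as a normal variable with gamma-distributed variance, and their kurtosis moves from the Laplace value toward the Gaussian value as the gamma shape parameter grows (the parameter values and function names are ours, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def gen_laplace_increments(n, shape):
    """Symmetric generalized Laplace (variance-gamma) draws: a standard normal
    scaled by the square root of a gamma-distributed variance."""
    w = rng.gamma(shape, 1.0, size=n)
    return rng.normal(0.0, 1.0, size=n) * np.sqrt(w)

def kurtosis(z):
    return np.mean(z**4) / np.var(z)**2

small_lag = gen_laplace_increments(200_000, shape=1.0)    # double-exponential-like
large_lag = gen_laplace_increments(200_000, shape=20.0)   # close to Gaussian
print(kurtosis(small_lag), kurtosis(large_lag))           # ~6 versus ~3
```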
Simple Proof of Jury Test for Complex Polynomials
NASA Astrophysics Data System (ADS)
Choo, Younseok; Kim, Dongmin
Recently, some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials: a simple proof of the Jury test for complex polynomials is provided, based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
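For orientation, the property the Jury test certifies (all zeros strictly inside the unit circle) can be checked numerically for a complex polynomial; the sketch below simply inspects the roots and is not the tabular Jury procedure itself:

```python
import numpy as np

def is_schur_stable(coeffs):
    """True if the (possibly complex) polynomial with coefficients in descending
    powers has all roots strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

# z^2 - 0.5 z + 0.06 has roots 0.2 and 0.3, hence Schur stable
print(is_schur_stable([1.0, -0.5, 0.06]))    # True
print(is_schur_stable([1.0, -1.5j, -1.1]))   # False (root moduli exceed 1)
```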
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, such as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
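The regression step for the expansion coefficients can be sketched as follows; this is a generic least-squares fit of a tensor Legendre basis in two parameters, our simplified stand-in for the PDD/ANOVA machinery, with an arbitrary placeholder model:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))               # sampled uncertain parameters
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 1] ** 2        # placeholder model output

def legendre_design(X, degree):
    """Total-degree tensor basis of Legendre polynomials evaluated at the samples."""
    cols = []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            ci = np.zeros(i + 1); ci[-1] = 1.0      # selects Legendre polynomial L_i
            cj = np.zeros(j + 1); cj[-1] = 1.0      # selects Legendre polynomial L_j
            cols.append(legendre.legval(X[:, 0], ci) * legendre.legval(X[:, 1], cj))
    return np.column_stack(cols)

A = legendre_design(X, degree=4)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # surrogate coefficients
```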
NASA Astrophysics Data System (ADS)
Doha, E. H.
2003-05-01
A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
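A simple special case of the connection formulae discussed above is the classical identity for the first derivative of a Laguerre polynomial, quoted here for orientation:

$$ \frac{d}{dx}L_n(x) \;=\; -\sum_{k=0}^{n-1} L_k(x), $$

i.e. the derivative of L_n is already a finite linear combination of lower-degree Laguerre polynomials; the paper's formulae generalize this to arbitrary derivative orders, moments, and Jacobi/Hermite connection coefficients.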
Time-lag of the earthquake energy release between three seismic regions
NASA Astrophysics Data System (ADS)
Tsapanos, Theodoros M.; Liritzis, Ioannis
1992-06-01
Three complete data sets of strong earthquakes (M ≥ 5.5), which occurred in the seismic regions of Chile, Mexico and Kamchatka during the period 1899–1985, have been used to test the existence of a time-lag in the seismic energy release between these regions. These data sets were cross-correlated in order to determine whether any pair of the sets is correlated. For this purpose, statistical tests such as the t-test, Fisher's transformation and the probability distribution have been applied to determine the significance of the obtained correlation coefficients. The results show that the time-lag between Chile and Kamchatka is -2 years, which means that Kamchatka precedes Chile by 2 years, with a correlation coefficient significant at the 99.80% level; there is a weak correlation between Kamchatka and Mexico, and no correlation between Mexico and Chile.
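A hedged sketch of the lag search underlying such a cross-correlation analysis (generic code on synthetic annual series; the names and sign convention are ours):

```python
import numpy as np

def best_lag(x, y, max_lag=10):
    """Return the lag (in years) maximising |correlation| between annual series
    x and y; a positive lag means y precedes x by that many years."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags, corrs = range(-max_lag, max_lag + 1), []
    for l in lags:
        if l >= 0:
            a, b = x[l:], y[:len(y) - l]
        else:
            a, b = x[:len(x) + l], y[-l:]
        corrs.append(np.corrcoef(a, b)[0, 1])
    i = int(np.argmax(np.abs(corrs)))
    return list(lags)[i], corrs[i]

rng = np.random.default_rng(4)
kamchatka = rng.gamma(2.0, 1.0, 87)                 # synthetic annual energy release
chile = np.r_[rng.gamma(2.0, 1.0, 2), kamchatka[:-2]] + rng.normal(0, 0.3, 87)
print(best_lag(chile, kamchatka))                   # expect a lag of about +2
```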
Supersonic Pitch Damping Predictions of Blunt Entry Vehicles from Static CFD Solutions
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark
2013-01-01
A technique for predicting the supersonic pitch damping of blunt axisymmetric bodies from static CFD data is presented. The contributions to the static pitching moment due to forebody and aftbody pressure distributions are broken out and considered separately. The one-dimensional moment equation is cast to model the separate contributions from forebody and aftbody pressures, with no traditional damping term included. The aftbody contribution to the pitching moment is lagged by a phase angle of the natural oscillation period. This lag represents the time for aftbody wake structures to equilibrate while the body is oscillating. The characteristic equation of this formulation indicates that the lagged backshell moment adds a damping moment equivalent in form to a constant pitch damping term. CFD calculations of the backshell's contribution to the static pitching moment over a range of angles of attack are used to predict pitch damping coefficients. These predictions are compared with ballistic range data for the Mars Exploration Rover (MER) capsule and forced oscillation data for the Mars Viking capsule. The lag model appears to capture dynamic stability variation due to backshell geometry as well as Mach number.
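A hedged sketch of the kind of lagged-moment formulation described (the symbols are ours): writing the total moment as a static forebody term plus an aftbody term evaluated at a delayed pitch angle,

$$ I\,\ddot{\theta}(t) \;=\; M_{\mathrm{fore}}\!\bigl(\theta(t)\bigr) \;+\; M_{\mathrm{aft}}\!\bigl(\theta(t-\Delta t)\bigr), \qquad M_{\mathrm{aft}}\!\bigl(\theta(t-\Delta t)\bigr)\;\approx\; M_{\mathrm{aft}}(\theta)\;-\;\Delta t\,\frac{\partial M_{\mathrm{aft}}}{\partial \theta}\,\dot{\theta}, $$

so after linearizing the delayed term the aftbody moment contributes a term proportional to the pitch rate, i.e. an effective constant pitch-damping term, matching the characteristic-equation argument in the abstract.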
Single molecular biology: coming of age in DNA replication.
Liu, Xiao-Jing; Lou, Hui-Qiang
2017-09-20
DNA replication is an essential process in living organisms. To achieve precise and reliable replication, DNA polymerases play a central role in DNA synthesis. Previous investigations have shown that the average rates of DNA synthesis on the leading and lagging strands in a replisome must be similar to avoid the formation of significant gaps in the nascent strands. The underlying mechanism has been assumed to be coordination between the leading- and lagging-strand polymerases. However, members of Kowalczykowski's lab recently applied single-molecule techniques in E. coli and showed the real-time behavior of a replisome. The leading- and lagging-strand polymerases function stochastically and independently. Furthermore, when a DNA polymerase is paused, the helicase slows down in a self-regulating fail-safe mechanism, akin to a 'dead-man's switch'. Based on these real-time single-molecule observations, the authors propose that leading- and lagging-strand polymerases synthesize DNA stochastically within a Gaussian distribution. Along with the development and application of single-molecule techniques, we will witness a new age of research into DNA replication and other biological processes.
NASA Astrophysics Data System (ADS)
Chen, Zhixiang; Fu, Bin
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P=NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
Mafusire, Cosmas; Krüger, Tjaart P J
2018-06-01
The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
Dang, Tran Ngoc; Seposo, Xerxes T.; Duc, Nguyen Huu Chau; Thang, Tran Binh; An, Do Dang; Hang, Lai Thi Minh; Long, Tran Thanh; Loan, Bui Thi Hong; Honda, Yasushi
2016-01-01
Background The relationship between temperature and mortality has been found to be U-, V-, or J-shaped in developed temperate countries; however, in developing tropical/subtropical cities, it remains unclear. Objectives Our goal was to investigate the relationship between temperature and mortality in Hue, a subtropical city in Viet Nam. Design We collected daily mortality data from the Vietnamese A6 mortality reporting system for 6,214 deceased persons between 2009 and 2013. A distributed lag non-linear model was used to examine the temperature effects on all-cause and cause-specific mortality by assuming negative binomial distribution for count data. We developed an objective-oriented model selection with four steps following the Akaike information criterion (AIC) rule (i.e. a smaller AIC value indicates a better model). Results High temperature-related mortality was more strongly associated with short lags, whereas low temperature-related mortality was more strongly associated with long lags. The low temperatures increased risk in all-category mortality compared to high temperatures. We observed elevated temperature-mortality risk in vulnerable groups: elderly people (high temperature effect, relative risk [RR]=1.42, 95% confidence interval [CI]=1.11–1.83; low temperature effect, RR=2.0, 95% CI=1.13–3.52), females (low temperature effect, RR=2.19, 95% CI=1.14–4.21), people with respiratory disease (high temperature effect, RR=2.45, 95% CI=0.91–6.63), and those with cardiovascular disease (high temperature effect, RR=1.6, 95% CI=1.15–2.22; low temperature effect, RR=1.99, 95% CI=0.92–4.28). Conclusions In Hue, the temperature significantly increased the risk of mortality, especially in vulnerable groups (i.e. elderly, female, people with respiratory and cardiovascular diseases). These findings may provide a foundation for developing adequate policies to address the effects of temperature on health in Hue City. PMID:26781954
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that arise when small numbers of cells are present, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transitioning from the stationary to the exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells pass through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that the lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time at which growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
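A hedged toy version of the serial assumption described above (exponential individual lag times followed by deterministic exponential growth; all parameter values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_totals(n_cells=5, mean_lag=2.0, growth_rate=1.0, t_end=6.0, reps=20_000):
    """Serial assumption: each initial cell draws an individual lag time, then grows
    exponentially for the remaining time; returns the total count per replicate."""
    lags = rng.exponential(mean_lag, size=(reps, n_cells))
    grow_time = np.clip(t_end - lags, 0.0, None)
    return np.exp(growth_rate * grow_time).sum(axis=1)

totals = simulate_totals()
relative_growth = totals / (5 * np.exp(1.0 * 6.0))   # relative to lag-free growth
# the spread of `relative_growth` across replicates is the inherent variability
# that the paper approximates with a Weibull distribution
```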
Topology of Large-Scale Structures of Galaxies in two Dimensions—Systematic Effects
NASA Astrophysics Data System (ADS)
Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan
2017-02-01
We study the two-dimensional topology of galactic distribution when projected onto two-dimensional spherical shells. Using the latest Horizon Run 4 simulation data, we construct the genus of the two-dimensional field and consider how this statistic is affected by late-time nonlinear effects—principally gravitational collapse and redshift space distortion (RSD). We also consider systematic and numerical artifacts, such as shot noise, galaxy bias, and finite pixel effects. We model the systematics using a Hermite polynomial expansion and perform a comprehensive analysis of known effects on the two-dimensional genus, with a view toward using the statistic for cosmological parameter estimation. We find that the finite pixel effect is dominated by an amplitude drop and can be made less than 1% by adopting pixels smaller than 1/3 of the angular smoothing length. Nonlinear gravitational evolution introduces time-dependent coefficients of the zeroth, first, and second Hermite polynomials, but the genus amplitude changes by less than 1% between z = 1 and z = 0 for smoothing scales R_G > 9 Mpc/h. Non-zero terms are measured up to third order in the Hermite polynomial expansion when studying RSD. Differences in the shapes of the genus curves in real and redshift space are small when we adopt thick redshift shells, but the amplitude change remains a significant ~O(10%) effect. The combined effects of galaxy biasing and shot noise produce systematic effects up to the second Hermite polynomial. It is shown that, when sampling, the use of galaxy mass cuts significantly reduces the effect of shot noise relative to random sampling.
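For orientation (our notation, hedged): for a Gaussian random field smoothed on scale R_G, the two-dimensional genus curve has the form

$$ g(\nu) \;\propto\; \nu\,e^{-\nu^{2}/2} \;=\; H_{1}(\nu)\,e^{-\nu^{2}/2}, $$

and the systematic effects discussed above can be absorbed into a low-order expansion of the type

$$ g(\nu) \;\approx\; A\,e^{-\nu^{2}/2}\Bigl[H_{1}(\nu) + a_{0}H_{0}(\nu) + a_{1}H_{1}(\nu) + a_{2}H_{2}(\nu)\Bigr], $$

where the H_n are probabilists' Hermite polynomials (H_0 = 1, H_1(ν) = ν) and the coefficients a_0, a_1, a_2 encode the zeroth-, first- and second-order contributions referred to in the abstract.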
Elastic strain field due to an inclusion of a polyhedral shape with a non-uniform lattice misfit
NASA Astrophysics Data System (ADS)
Nenashev, A. V.; Dvurechenskii, A. V.
2017-03-01
An analytical solution in a closed form is obtained for the three-dimensional elastic strain distribution in an unlimited medium containing an inclusion with a coordinate-dependent lattice mismatch (an eigenstrain). Quantum dots consisting of a solid solution with a spatially varying composition are examples of such inclusions. It is assumed that both the inclusion and the surrounding medium (the matrix) are elastically isotropic and have the same Young's modulus and Poisson ratio. The inclusion shape is supposed to be an arbitrary polyhedron, and the coordinate dependence of the lattice misfit, with respect to the matrix, is assumed to be a polynomial of any degree. It is shown that, both inside and outside the inclusion, the strain tensor is expressed as a sum of contributions of all faces, edges, and vertices of the inclusion. Each of these contributions, as a function of the observation point's coordinates, is a product of some polynomial and a simple analytical function, which is the solid angle subtended by the face from the observation point (for a contribution of a face), or the potential of the uniformly charged edge (for a contribution of an edge), or the distance from the vertex to the observation point (for a contribution of a vertex). The method of constructing the relevant polynomial functions is suggested. We also found out that similar expressions describe an electrostatic or gravitational potential, as well as its first and second derivatives, of a polyhedral body with a charge/mass density that depends on coordinates polynomially.
Condemi, Vincenzo; Gestro, Massimo; Dozio, Elena; Tartaglino, Bruno; Corsi Romanelli, Massimiliano Marco; Solimene, Umberto; Meco, Roberto
2015-03-01
The incidence of nephrolithiasis is rising worldwide, especially in women and with increasing age. The incidence and prevalence of kidney stones are affected by genetic, nutritional, and environmental factors. The aim of this study is to investigate the link between various meteorological factors (independent variables) and the daily number of visits to the Emergency Department (ED) of the S. Croce and Carle Hospital of Cuneo for renal colic (RC) and urinary stones (UC) as the dependent variable over the years 2007-2010. Poisson generalized regression models (PGAMs) were used in progressive stages. The results of the PGAMs (stage 1), adjusted for seasonal and calendar factors, confirmed a significant correlation (p < 0.03) with the thermal parameter. Evaluation of the dose-response effect [PGAMs combined with distributed lag non-linear models (DLNMs), stage 2], expressed in terms of relative risk (RR) and cumulative relative risk (RRC), indicated a significant effect up to a lag of 15 days (RR > 1), with a first peak after 5 days (lag ranges 0-1, 0-3, and 0-5) and a second, weaker peak over the 5-15 day lag range. The estimated RR for females was significant, mainly in the second and fourth age groups considered (19-44 and >65 years): RR for total ED visits 1.27, confidence interval (CI) 1.11-1.46 (lag 0-5 days); RR 1.42, CI 1.01-2.01 (lag 0-10 days); and RR 1.35, CI 1.09-1.68 (lag 0-15 days). The research also indicated a moderate involvement of the thermal factor in the onset of RC caused by UC, exclusively in females. Further studies will be necessary to confirm these results.
Transport and time lag of chlorofluorocarbon gases in the unsaturated zone, Rabis Creek, Denmark
Engesgaard, Peter; Højberg, Anker L.; Hinsby, Klaus; Jensen, Karsten H.; Laier, Troels; Larsen, Flemming; Busenberg, Eurybiades; Plummer, Niel
2004-01-01
Transport of chlorofluorocarbon (CFC) gases through the unsaturated zone to the water table is affected by gas diffusion, air–water exchange (solubility), sorption to the soil matrix, advective–dispersive transport in the water phase, and, in some cases, anaerobic degradation. In deep unsaturated zones, this may lead to a time lag between entry of gases at the land surface and recharge to groundwater. Data from a Danish field site were used to investigate how time lag is affected by variations in water content and to explore the use of simple analytical solutions to calculate time lag. Numerical simulations demonstrate that either degradation or sorption of CFC-11 takes place, whereas CFC-12 and CFC-113 are nonreactive. Water flow did not appreciably affect transport. An analytical solution for the period with a linear increase in atmospheric CFC concentrations (approximately early 1970s to early 1990s) was used to calculate CFC profiles and time lags. We compared the analytical results with numerical simulations. The time lags in the 15-m-deep unsaturated zone increase from 4.2 to between 5.2 and 6.1 yr and from 3.4 to 3.9 yr for CFC-11 and CFC-12, respectively, when simulations change from use of an exponential to a linear increase in atmospheric concentrations. The CFC concentrations at the water table before the early 1990s can be estimated by displacing the atmospheric input function by these fixed time lags. A sensitivity study demonstrates conditions under which a time lag in the unsaturated zone becomes important. The most critical parameter is the tortuosity coefficient. The analytical approach is valid for the low range of tortuosity coefficients (τ = 0.1–0.4) and unsaturated zones greater than approximately 20 m in thickness. In these cases the CFC distribution may still be from either the exponential or linear phase. In other cases, the use of numerical models, as described in our work and elsewhere, is an option.
Legendre modified moments for Euler's constant
NASA Astrophysics Data System (ADS)
Prévost, Marc
2008-10-01
Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials, see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials-Theory and Practice, NATO ASI Series, Series C: Mathematical and Physical Sciences, vol. 294, Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4]].
An out of phase coupling between the atmosphere and the ocean over the North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Ribera, Pedro; Ordoñez, Paulina; Gallego, David; Peña-Ortiz, Cristina
2017-04-01
An oscillation band, with a period ranging between 40 and 60 years, has been identified as the most intense signal over the North Atlantic Ocean using several oceanic and atmospheric reanalyses from 1856 to the present. This signal represents the Atlantic Multidecadal Oscillation (AMO), an oscillation between warmer- and colder-than-normal SST conditions. Simultaneously, those changes in SST are accompanied by changes in atmospheric conditions represented by surface pressure, temperature and circulation. In fact, the evolution of the surface pressure pattern along this oscillation shows a North Atlantic Oscillation (NAO)-like pattern, suggesting the existence of an out-of-phase coupling between atmospheric and oceanic conditions. Further analysis shows that the evolution of the oceanic SST distribution modifies atmospheric baroclinic conditions in the mid-to-high latitudes of the North Atlantic and leads the atmospheric variability by 6-7 years. If the AMO represents the oceanic conditions and the NAO represents the atmospheric variability, then it could be said that AMO of one sign leads NAO of the opposite sign with a lag of 6-7 years. On the other hand, the evolution of atmospheric conditions, represented by pressure distribution patterns, favors atmospheric circulation anomalies and induces a heat advection which tends to change the sign of the existing SST distribution and oceanic conditions with a lag of 16-17 years. In this case, NAO of one sign leads AMO of the same sign with a lag of 16-17 years.
Short-Term Mortality Rates during a Decade of Improved Air Quality in Erfurt, Germany
Breitner, Susanne; Stölzel, Matthias; Cyrys, Josef; Pitz, Mike; Wölke, Gabriele; Kreyling, Wolfgang; Küchenhoff, Helmut; Heinrich, Joachim; Wichmann, H.-Erich; Peters, Annette
2009-01-01
Background Numerous studies have shown associations between ambient air pollution and daily mortality. Objectives Our goal was to investigate the association of ambient air pollution and daily mortality in Erfurt, Germany, over a 10.5-year period after the German unification, when air quality improved. Methods We obtained daily mortality counts and data on mass concentrations of particulate matter (PM) < 10 μm in aerodynamic diameter (PM10), gaseous pollutants, and meteorology in Erfurt between October 1991 and March 2002. We obtained ultrafine particle number concentrations (UFP) and mass concentrations of PM < 2.5 μm in aerodynamic diameter (PM2.5) from September 1995 to March 2002. We analyzed the data using semiparametric Poisson regression models adjusting for trend, seasonality, influenza epidemics, day of the week, and meteorology. We evaluated cumulative associations between air pollution and mortality using polynomial distributed lag (PDL) models and multiday moving averages of air pollutants. We evaluated changes in the associations over time in time-varying coefficient models. Results Air pollution concentrations decreased over the study period. Cumulative exposure to UFP was associated with increased mortality. An interquartile range (IQR) increase in the 15-day cumulative mean UFP of 7,649 cm−3 was associated with a relative risk (RR) of 1.060 [95% confidence interval (CI), 1.008–1.114] for PDL models and an RR/IQR of 1.055 (95% CI, 1.011–1.101) for moving averages. RRs decreased from the mid-1990s to the late 1990s. Conclusion Results indicate an elevated mortality risk from short-term exposure to UFP. They further suggest that RRs for short-term associations of air pollution decreased as pollution control measures were implemented in Eastern Germany. PMID:19337521
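A hedged sketch of the polynomial distributed lag (Almon) construction used in such analyses: lag coefficients are constrained to a low-order polynomial in the lag index, so only a few parameters are estimated (toy data and names are ours; the study additionally adjusts for trend, season, influenza and meteorology):

```python
import numpy as np
import statsmodels.api as sm

def pdl_design(x, max_lag, degree=2):
    """Almon/PDL design matrix: lagged copies of x projected onto a polynomial
    basis in the lag index (degrees 0..degree)."""
    n = len(x)
    L = np.column_stack([np.r_[np.full(l, x[0]), x[:n - l]] for l in range(max_lag + 1)])
    P = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True)
    return L @ P, P

rng = np.random.default_rng(6)
ufp = rng.lognormal(9.0, 0.4, 600) / 1e4        # toy UFP counts, in 10^4 cm^-3
deaths = rng.poisson(12, 600)                   # toy daily mortality counts
Xp, P = pdl_design(ufp, max_lag=14)
res = sm.GLM(deaths, sm.add_constant(Xp), family=sm.families.Poisson()).fit()
lag_coefs = P @ res.params[1:]                  # implied coefficient for each lag 0..14
```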
Comparing drinking water treatment costs to source water protection costs using time series analysis
NASA Astrophysics Data System (ADS)
Heberling, Matthew T.; Nietch, Christopher T.; Thurston, Hale W.; Elovitz, Michael; Birkenhauer, Kelly H.; Panguluri, Srinivas; Ramakrishnan, Balaji; Heiser, Eric; Neyer, Tim
2015-11-01
We present a framework to compare water treatment costs to source water protection costs, an important knowledge gap for drinking water treatment plants (DWTPs). This trade-off helps to determine what incentives a DWTP has to invest in natural infrastructure or pollution reduction in the watershed rather than pay for treatment on site. To illustrate, we use daily observations from 2007 to 2011 for the Bob McEwen Water Treatment Plant, Clermont County, Ohio, to understand the relationship between treatment costs and water quality and operational variables (e.g., turbidity, total organic carbon [TOC], pool elevation, and production volume). Part of our contribution to understanding drinking water treatment costs is examining both long-run and short-run relationships using error correction models (ECMs). Treatment costs per 1000 gallons (per 3.79 m3) were based on chemical, pumping, and granular activated carbon costs. Results from the ECM suggest that a 1% decrease in turbidity decreases treatment costs by 0.02% immediately and an additional 0.1% over future days. Using mean values for the plant, a 1% decrease in turbidity leads to $1123/year decrease in treatment costs. To compare these costs with source water protection costs, we use a polynomial distributed lag model to link total phosphorus loads, a source water quality parameter affected by land use changes, to turbidity at the plant. We find the costs for source water protection to reduce loads much greater than the reduction in treatment costs during these years. Although we find no incentive to protect source water in our case study, this framework can help DWTPs quantify the trade-offs.
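A hedged sketch of a single-equation error correction model of the Engle-Granger type, in the spirit of the long-run/short-run decomposition described (synthetic data and illustrative names; the study's actual specification also includes TOC, pool elevation and production volume):

```python
import numpy as np
import statsmodels.api as sm

# 1) long-run regression of cost on log turbidity; 2) regress the change in cost
# on the change in log turbidity and the lagged long-run residual (the ECM step).
rng = np.random.default_rng(7)
turb = np.exp(np.cumsum(rng.normal(0, 0.02, 1500)))          # persistent turbidity series
cost = 1.0 + 0.5 * np.log(turb) + rng.normal(0, 0.05, 1500)  # cointegrated cost series

long_run = sm.OLS(cost, sm.add_constant(np.log(turb))).fit()
ect = long_run.resid                                 # error-correction term
d_cost, d_lturb = np.diff(cost), np.diff(np.log(turb))
X = sm.add_constant(np.column_stack([d_lturb, ect[:-1]]))
ecm = sm.OLS(d_cost, X).fit()                        # short-run effect + adjustment speed
```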
Per capita alcohol consumption and suicide mortality in a panel of US states from 1950 to 2002
Kerr, William C.; Subbaraman, Meenakshi; Ye, Yu
2011-01-01
Introduction and Aims The relationship between per capita alcohol consumption and suicide rates has been found to vary in significance and magnitude across countries. This study utilizes a panel of time-series measures from the US states to estimate the effects of changes in current and lagged alcohol sales on suicide mortality risk. Design and Methods Generalized least squares estimation utilized 53 years of data from 48 US states or state groups to estimate relationships between total and beverage-specific alcohol consumption measures and age-standardized suicide mortality rates in first-differenced semi-logged models. Results An additional liter of ethanol from total alcohol sales was estimated to increase suicide rates by 2.3% in models utilizing a distributed lag specification while no effect was found in models including only current alcohol consumption. A similar result is found for men, while for women both current and distributed lag measures were found to be significantly related to suicide rates with an effect of about 3.2% per liter from current and 5.8% per liter from the lagged measure. Beverage-specific models indicate that spirits is most closely linked with suicide risk for women while beer and wine are for men. Unemployment rates are consistently positively related to suicide rates. Discussion and Conclusions Results suggest that chronic effects, potentially related to alcohol abuse and dependence, are the main source of alcohol’s impact on suicide rates in the US for men and are responsible for about half of the effect for women. PMID:21896069
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
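Two of the modifications proposed above (a susceptible-depletion proxy built from sums of past cases, and the log of lagged counts as an autoregressive control) can be sketched in a Poisson model with an estimated dispersion parameter, a quasi-Poisson-style fit (toy data; variable names are ours):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
df = pd.DataFrame({"cases": rng.poisson(20, 300),
                   "rain": rng.gamma(2.0, 5.0, 300)})
df["log_lag_cases"] = np.log(df["cases"].shift(1) + 1)       # autoregressive control
df["immune_proxy"] = df["cases"].rolling(8).sum().shift(1)   # susceptible-depletion proxy
df = df.dropna()

fit = smf.glm("cases ~ rain + log_lag_cases + immune_proxy",
              data=df, family=sm.families.Poisson()).fit(scale="X2")  # Pearson dispersion
```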
Statistical properties of Fourier-based time-lag estimates
NASA Astrophysics Data System (ADS)
Epitropakis, A.; Papadakis, I. E.
2016-06-01
Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e. close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e. based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e. whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e. binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈ 1/2 the Nyquist frequency to minimise the effects of discrete binning of the observed time series; b) smoothing of the cross-periodogram should be avoided, as this may introduce significant bias to the time-lag estimates, which can be taken into account by assuming a model cross-spectrum (and not just a model time-lag spectrum); c) time-lags should be estimated by dividing observed time series into a number, say m, of shorter data segments and averaging the resulting cross-periodograms; d) if the data segments have a duration ≳ 20 ks, the time-lag bias is ≲15% of its intrinsic value for the model cross-spectra and power-spectra considered in this work. This bias should be estimated in practice (by considering possible intrinsic cross-spectra that may be applicable to the time-lag spectra at hand) to assess the reliability of any time-lag analysis; e) the effects of experimental noise can be minimised by only estimating time-lags in the frequency range where the sample coherence is larger than 1.2/(1 + 0.2m). In this range, the amplitude of noise variations caused by measurement errors is smaller than the amplitude of the signal's intrinsic variations. As long as m ≳ 20, time-lags estimated by averaging over individual data segments have analytical error estimates that are within 95% of the true scatter around their mean, and their distribution is similar, albeit not identical, to a Gaussian.
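A hedged sketch of the segment-averaged cross-periodogram estimator discussed above (our own minimal version; it omits the coherence-based noise criterion and error estimates treated in the paper):

```python
import numpy as np

def time_lag_spectrum(x, y, dt, n_seg):
    """Average the cross-periodogram over n_seg equal-length segments of two evenly
    sampled light curves and convert its phase to a time lag. With this convention
    (conj(X) * Y), a positive lag means y is delayed with respect to x."""
    seg = len(x) // n_seg
    cross = np.zeros(seg // 2 + 1, dtype=complex)
    for k in range(n_seg):
        xs = x[k * seg:(k + 1) * seg]
        ys = y[k * seg:(k + 1) * seg]
        cross += np.conj(np.fft.rfft(xs - xs.mean())) * np.fft.rfft(ys - ys.mean())
    freqs = np.fft.rfftfreq(seg, d=dt)[1:]
    return freqs, -np.angle(cross[1:]) / (2.0 * np.pi * freqs)

# usage on toy light curves where y is x delayed by 5 bins of width dt = 100 s
rng = np.random.default_rng(9)
x = np.convolve(rng.normal(size=4000), np.ones(20) / 20, mode="same")  # red-ish signal
y = np.roll(x, 5) + rng.normal(0, 0.05, size=4000)
f, lag = time_lag_spectrum(x, y, dt=100.0, n_seg=20)
# at low frequencies the recovered lag should be close to +500 s
```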
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pride, Kerry R., E-mail: hgp3@cdc.gov; Wyoming Department of Health, 6101 Yellowstone Road, Suite 510, Cheyenne, WY 82002; Peel, Jennifer L.
Objective: Short-term exposure to ground-level ozone has been linked to adverse respiratory and other health effects; previous studies typically have focused on summer ground-level ozone in urban areas. During 2008–2011, Sublette County, Wyoming (population: ~10,000 persons), experienced periods of elevated ground-level ozone concentrations during the winter. This study sought to evaluate the association of daily ground-level ozone concentrations and health clinic visits for respiratory disease in this rural county. Methods: Clinic visits for respiratory disease were ascertained from electronic billing records of the two clinics in Sublette County for January 1, 2008–December 31, 2011. A time-stratified case-crossover design, adjusted for temperature and humidity, was used to investigate associations between ground-level ozone concentrations measured at one station and clinic visits for a respiratory health concern by using an unconstrained distributed lag of 0–3 days and single-day lags of 0 day, 1 day, 2 days, and 3 days. Results: The data set included 12,742 case-days and 43,285 selected control-days. The mean ground-level ozone observed was 47±8 ppb. The unconstrained distributed lag of 0–3 days was consistent with a null association (adjusted odds ratio [aOR]: 1.001; 95% confidence interval [CI]: 0.990–1.012); results for lags 0, 2, and 3 days were consistent with the null. However, the results for lag 1 were indicative of a positive association; for every 10-ppb increase in the 8-h maximum average ground-level ozone, a 3.0% increase in respiratory clinic visits the following day was observed (aOR: 1.031; 95% CI: 0.994–1.069). Season modified the adverse respiratory effects: ground-level ozone was significantly associated with respiratory clinic visits during the winter months. The patterns of results from all sensitivity analyses were consistent with the a priori model. Conclusions: The results demonstrate an association of increasing ground-level ozone with an increase in clinic visits for adverse respiratory-related effects in the following day (lag day 1) in Sublette County; the magnitude was strongest during the winter months; this association during the winter months in a rural location warrants further investigation. - Highlights: • We assessed elevated ground-level ozone in frontier Sublette County, Wyoming. • Ground-level ozone concentrations were moderately to highly correlated between stations. • Adverse respiratory-related clinic visits occurred year round at lag 1. • Strongest association of clinic visits was in the coldest months at lag 1.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
Crooks, James Lewis; Cascio, Wayne E; Percy, Madelyn S; Reyes, Jeanette; Neas, Lucas M; Hilborn, Elizabeth D
2016-11-01
The impact of dust storms on human health has been studied in the context of Asian, Saharan, Arabian, and Australian storms, but there has been no recent population-level epidemiological research on the dust storms in North America. The relevance of dust storms to public health is likely to increase as extreme weather events are predicted to become more frequent with anticipated changes in climate through the 21st century. We examined the association between dust storms and county-level non-accidental mortality in the United States from 1993 through 2005. Dust storm incidence data, including date and approximate location, are taken from the U.S. National Weather Service storm database. County-level mortality data for the years 1993-2005 were acquired from the National Center for Health Statistics. Distributed lag conditional logistic regression models under a time-stratified case-crossover design were used to study the relationship between dust storms and daily mortality counts over the whole United States and in Arizona and California specifically. End points included total non-accidental mortality and three mortality subgroups (cardiovascular, respiratory, and other non-accidental). We estimated that for the United States as a whole, total non-accidental mortality increased by 7.4% (95% CI: 1.6, 13.5; p = 0.011) and 6.7% (95% CI: 1.1, 12.6; p = 0.018) at 2- and 3-day lags, respectively, and by an average of 2.7% (95% CI: 0.4, 5.1; p = 0.023) over lags 0-5 compared with referent days. Significant associations with non-accidental mortality were estimated for California (lag 2 and 0-5 day) and Arizona (lag 3), for cardiovascular mortality in the United States (lag 2) and Arizona (lag 3), and for other non-accidental mortality in California (lags 1-3 and 0-5). Dust storms are associated with increases in lagged non-accidental and cardiovascular mortality. Citation: Crooks JL, Cascio WE, Percy MS, Reyes J, Neas LM, Hilborn ED. 2016. The association between dust storms and daily non-accidental mortality in the United States, 1993-2005. Environ Health Perspect 124:1735-1743; http://dx.doi.org/10.1289/EHP216.
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
Direct calculation of modal parameters from matrix orthogonal polynomials
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Guillaume, Patrick
2011-10-01
The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly. The transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials. High order models can be used without any numerical problems. The proposed method will be compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data will be used.
Independence polynomial and matching polynomial of the Koch network
NASA Astrophysics Data System (ADS)
Liao, Yunhua; Xie, Xiaoliang
2015-11-01
The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vignat, C.; Lamberti, P. W.
2009-10-15
Recently, Carinena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. Finally, we show a maximum entropy property for the ground states of these oscillators.
Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach
NASA Astrophysics Data System (ADS)
Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer
2018-02-01
This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.
Lag threads organize the brain’s intrinsic activity
Mitra, Anish; Snyder, Abraham Z.; Blazey, Tyler; Raichle, Marcus E.
2015-01-01
It has been widely reported that intrinsic brain activity, in a variety of animals including humans, is spatiotemporally structured. Specifically, propagated slow activity has been repeatedly demonstrated in animals. In human resting-state fMRI, spontaneous activity has been understood predominantly in terms of zero-lag temporal synchrony within widely distributed functional systems (resting-state networks). Here, we use resting-state fMRI from 1,376 normal, young adults to demonstrate that multiple, highly reproducible, temporal sequences of propagated activity, which we term “lag threads,” are present in the brain. Moreover, this propagated activity is largely unidirectional within conventionally understood resting-state networks. Modeling experiments show that resting-state networks naturally emerge as a consequence of shared patterns of propagation. An implication of these results is that common physiologic mechanisms may underlie spontaneous activity as imaged with fMRI in humans and slowly propagated activity as studied in animals. PMID:25825720
Mechanism of asymmetric polymerase assembly at the eukaryotic replication fork
Georgescu, Roxana E; Langston, Lance; Yao, Nina Y; Yurieva, Olga; Zhang, Dan; Finkelstein, Jeff; Agarwal, Tani; O’Donnell, Mike E
2015-01-01
Eukaryotes use distinct polymerases for leading- and lagging-strand replication, but how they target their respective strands is uncertain. We reconstituted Saccharomyces cerevisiae replication forks and found that CMG helicase selects polymerase (Pol) ε to the exclusion of Pol δ on the leading strand. Even if Pol δ assembles on the leading strand, Pol ε rapidly replaces it. Pol δ–PCNA is distributive with CMG, in contrast to its high stability on primed ssDNA. Hence CMG will not stabilize Pol δ, instead leaving the leading strand accessible for Pol ε and stabilizing Pol ε. Comparison of Pol ε and Pol δ on a lagging-strand model DNA reveals the opposite. Pol δ dominates over excess Pol ε on PCNA-primed ssDNA. Thus, PCNA strongly favors Pol δ over Pol ε on the lagging strand, but CMG over-rides and flips this balance in favor of Pol ε on the leading strand. PMID:24997598
Zhang, Fengying; Li, Liping; Krafft, Thomas; Lv, Jinmei; Wang, Wuyi; Pei, Desheng
2011-06-01
The association between daily cardiovascular/respiratory mortality and air pollution in an urban district of Beijing was investigated over a 6-year period (January 2003 to December 2008). The purpose of this study was to evaluate the relative importance of the major air pollutants [particulate matter (PM), SO2, NO2] as predictors of daily cardiovascular/respiratory mortality. The time series studied comprises years with lower level interventions to control air pollution (2003-2006) and years with high level interventions in preparation for and during the Olympics/Paralympics (2007-2008). Concentrations of PM10, SO2, and NO2 were measured daily during the study period. A generalized additive model was used to evaluate daily numbers of cardiovascular/respiratory deaths in relation to each air pollutant, controlling for time trends and meteorological influences such as temperature and relative humidity. The results show that the daily cardiovascular/respiratory death rates were significantly associated with the concentrations of air pollutants, especially deaths related to cardiovascular disease. The current-day effects of PM10 and NO2 were higher than those of single-day lags (distributed lags) and moving average lags for respiratory disease mortality. The largest RR of SO2 for respiratory disease mortality was in Lag02. For cardiovascular disease mortality, the largest RR was in Lag01 for PM10, and on the current day (Lag0) for SO2 and NO2.
Hundessa, Samuel; Williams, Gail; Li, Shanshan; Guo, Jinpeng; Zhang, Wenyi; Guo, Yuming
2017-05-01
Meteorological factors play a crucial role in malaria transmission, but limited evidence is available from China. This study aimed to estimate the weekly associations between meteorological factors and Plasmodium vivax and Plasmodium falciparum malaria in China. The Distributed Lag Non-Linear Model was used to examine non-linearity and delayed effects of average temperature, rainfall, relative humidity, sunshine hours, wind speed and atmospheric pressure on malaria. Average temperature was associated with P. vivax and P. falciparum cases over long ranges of lags. The effect was more immediate on P. vivax (0-6 weeks) than on P. falciparum (1-9 weeks). Relative humidity was associated with P. vivax and P. falciparum over 8-10 weeks and 5-8 weeks lag, respectively. A significant effect of wind speed on P. vivax was observed at 0-2 weeks lag, but no association was found with P. falciparum. Rainfall had a decreasing effect on P. vivax, but no association was found with P. falciparum. Sunshine hours were negatively associated with P. falciparum, but the association was unclear for P. vivax. However, the effects of atmospheric pressure on both malaria types were not significant at any lag. Our study highlights a substantial effect of weekly climatic factors on P. vivax and P. falciparum malaria transmission in China, with different lags. This provides an evidence base for health authorities in developing a malaria early-warning system.
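The sketch below illustrates a plain polynomial distributed-lag (Almon) basis of the kind discussed elsewhere in this collection; it is a simplified stand-in for the cross-basis of a full distributed lag non-linear model, with hypothetical file and column names.

```python
# Minimal sketch of a polynomial distributed-lag (PDL) basis, assuming weekly
# malaria counts and weekly mean temperature in columns "cases" and "temp"
# (hypothetical names). This is a simplified stand-in for the cross-basis used
# by a full distributed lag non-linear model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def pdl_basis(x, max_lag, degree):
    """Collapse lagged copies of x onto a polynomial-in-lag basis."""
    lags = np.column_stack([x.shift(l) for l in range(max_lag + 1)])
    # Constrain lag coefficients to a polynomial of the given degree:
    # beta_l = sum_j theta_j * l**j  =>  X_pdl = L @ P with P[l, j] = l**j.
    P = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True)
    return pd.DataFrame(lags @ P, index=x.index,
                        columns=[f"pdl{j}" for j in range(degree + 1)])

df = pd.read_csv("weekly_malaria.csv", parse_dates=["week"]).set_index("week")
X = sm.add_constant(pdl_basis(df["temp"], max_lag=9, degree=3)).dropna()
y = df.loc[X.index, "cases"]
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Recover the implied coefficient for each individual lag 0..9.
theta = fit.params[[f"pdl{j}" for j in range(4)]].to_numpy()
lag_effects = np.vander(np.arange(10), 4, increasing=True) @ theta
print(lag_effects)
```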
NASA Astrophysics Data System (ADS)
Manore, C.; Conrad, J.; Del Valle, S.; Ziemann, A.; Fairchild, G.; Generous, E. N.
2017-12-01
Mosquito-borne diseases such as Zika, dengue, and chikungunya viruses have dynamics coupled to weather, ecology, human infrastructure, socio-economic demographics, and behavior. We use time-varying remote sensing and weather data, along with demographics and ecozones to predict risk through time for Zika, dengue, and chikungunya outbreaks in Brazil. We use distributed lag methods to quantify the lag between outbreaks and weather. Our statistical model indicates that the relationships between the variables are complex, but that quantifying risk is possible with the right data at appropriate spatio-temporal scales.
NASA Technical Reports Server (NTRS)
Koenig, B.
1977-01-01
Young lunar impact structures were investigated by using Lunar Orbiter, Apollo metric and panoramic photographs. Measurements on particularly homogeneous areas low in secondary craters made possible an extension of the primary crater distribution to small diameters. This distribution is now established for the range 20 m ≤ D ≤ 20 km, which indicates that the size and velocity distribution of the impacting bodies has been constant over the last 3 billion years. A numerical approximation in the form of a 7th degree polynomial was obtained for the distribution.
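As an illustration of this kind of polynomial approximation, the sketch below fits a 7th-degree polynomial to a synthetic cumulative size-frequency distribution in log-log space; the numbers are placeholders, not the lunar measurements.

```python
# Minimal sketch (synthetic data): fit a 7th-degree polynomial to a cumulative
# crater size-frequency distribution in log-log space.
import numpy as np

# Hypothetical measurements: crater diameter D (km) and cumulative number N(>D).
D = np.logspace(np.log10(0.02), np.log10(20.0), 60)      # 20 m to 20 km
N = 1.0e4 * D**-1.8 * (1.0 + 0.1 * np.random.default_rng(0).normal(size=D.size))

logD, logN = np.log10(D), np.log10(N)
poly = np.polynomial.Polynomial.fit(logD, logN, deg=7)   # 7th-degree approximation

print(poly.convert())            # coefficients in the plain power basis
print(10 ** poly(np.log10(1.0))) # predicted cumulative count at D = 1 km
```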
[The reconstruction of welding arc 3D electron density distribution based on Stark broadening].
Zhang, Wang; Hua, Xue-Ming; Pan, Cheng-Gang; Li, Fang; Wang, Min
2012-10-01
The three-dimensional electron density is very important for welding arc quality control. In the present paper, the side-on characteristic line profile was collected with a spectrometer, and the lateral experimental data were approximated by a polynomial fit. By applying an Abel inversion technique, the authors obtained the radial intensity distribution at each wavelength and thus constructed a profile for the radial positions. The Fourier transform was used to separate the Lorentzian line shape from the reconstructed spectrum and thereby obtain an accurate Stark width. The three-dimensional electron density distribution of the TIG welding arc plasma was then calculated.
Hadamard Factorization of Stable Polynomials
NASA Astrophysics Data System (ADS)
Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar
2011-11-01
The stable (Hurwitz) polynomials are important in the study of systems of differential equations and control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x]: p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 and q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0. The Hadamard product p × q is defined as (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0, where k = min(m, n). Known results (see [16]) show that if p, q ∈ R[x] are stable polynomials then p × q is also stable, i.e. the Hadamard product is closed on stable polynomials; however, the converse is not always true: for n > 4, not every stable polynomial of degree n has a factorization into two stable polynomials of the same degree (see [15]). In this work we give conditions for the existence of a Hadamard factorization of stable polynomials.
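A small sketch of the definition is given below: it forms the coefficient-wise Hadamard product of two polynomials and checks Hurwitz stability numerically through the roots; the example polynomials are arbitrary.

```python
# Minimal sketch: Hadamard (coefficient-wise) product of two polynomials and a
# numerical Hurwitz-stability check via the roots. Coefficients are ordered from
# highest degree to constant term, as in numpy.roots.
import numpy as np

def hadamard(p, q):
    """Coefficient-wise product a_k*b_k, ..., a_0*b_0 with k = min(deg p, deg q)."""
    k = min(len(p), len(q))
    return np.asarray(p[-k:]) * np.asarray(q[-k:])

def is_hurwitz(p, tol=1e-12):
    """True if every root has negative real part (numerical check)."""
    return bool(np.all(np.roots(p).real < -tol))

p = [1.0, 3.0, 3.0, 1.0]   # (x + 1)^3, stable
q = [1.0, 4.0, 6.0, 4.0]   # (x + 2)(x^2 + 2x + 2), stable
pq = hadamard(p, q)
print(pq, is_hurwitz(p), is_hurwitz(q), is_hurwitz(pq))  # product is stable too
```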
NASA Astrophysics Data System (ADS)
Doha, E. H.
2004-01-01
Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.
Celik, Talip; Mutlu, Ibrahim; Ozkan, Arif; Kisioglu, Yasin
2016-01-01
Background. In this study, the cut-out risk of the Dynamic Hip Screw (DHS) was investigated for nine different positions of the lag screw and two fracture types by using Finite Element Analysis (FEA). Methods. Two types of fractures (31-A1.1 and 31-A2.1 in the AO classification) were generated in a femur model obtained from Computerized Tomography images. The DHS model was placed into the fractured femur model in nine different positions. Tip-Apex Distances were measured using SolidWorks. In the FEA, the force applied to the femoral head was set to the maximum value observed during walking. Results. The highest volume percentage exceeding the yield strength of trabecular bone was obtained in the posterior-inferior region in both fracture types. The best placement region for the lag screw was found to be the middle in both fracture types. The Tip-Apex Distances and the cut-out risk gave compatible results except for the posterior-superior and superior regions of the 31-A2.1 fracture type. Conclusion. The position of the lag screw affects the risk of cut-out significantly. Also, the Tip-Apex Distance is a good predictor of the cut-out risk. Overall, the results suggest that the density distribution of the trabecular bone influences the cut-out risk more strongly than the position of the lag screw.
2013-04-01
Wireless Cybersecurity. Biao Chen, Syracuse University. Final report AFRL-OSR-VA-TR-2013-0206 (reporting period 01-04-2009 to 30-11-2012; Distribution A). Abstract fragment: "... completely change the entire landscape. For example, under the quantum computing regime, factoring prime numbers requires only polynomial time (i.e., Shor's ...)"
NASA Astrophysics Data System (ADS)
Cieplak, Agnieszka; Slosar, Anze
2018-01-01
The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which can lead to biased estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values, as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, the coefficients can be measured in the presence of noise and there is a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise produces a higher amount of recoverable information.
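The sketch below shows the idea on synthetic samples: because c_n = (2n+1)/2 · E[P_n(x)], each Legendre coefficient of the PDF is a simple sample average; the beta-distributed "flux" is a placeholder, not survey data.

```python
# Minimal sketch: estimate Legendre-expansion coefficients of a flux PDF directly
# from samples. The flux F in [0, 1] is mapped to x in [-1, 1]; the n-th
# coefficient c_n = (2n + 1)/2 * E[P_n(x)] is a linear combination of the first
# n moments, so it can be estimated as a sample mean.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
flux = rng.beta(4.0, 2.0, size=100_000)        # stand-in for Lyman-alpha flux samples
x = 2.0 * flux - 1.0                           # map [0, 1] -> [-1, 1]

nmax = 8
coeffs = np.empty(nmax + 1)
for n in range(nmax + 1):
    e_n = np.zeros(n + 1); e_n[n] = 1.0        # selects P_n in legval
    coeffs[n] = (2 * n + 1) / 2.0 * legendre.legval(x, e_n).mean()

# Reconstruct the PDF on a grid from the measured coefficients.
grid = np.linspace(-1.0, 1.0, 201)
pdf_legendre = legendre.legval(grid, coeffs)
print(coeffs[:4], pdf_legendre.min())
```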
Stable Numerical Approach for Fractional Delay Differential Equations
NASA Astrophysics Data System (ADS)
Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.
2017-12-01
In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials such as (1) the Legendre polynomial, (2) the Chebyshev polynomial of the second kind, (3) the Chebyshev polynomial of the third kind and (4) the Chebyshev polynomial of the fourth kind, respectively. The maximum absolute error and root mean square error are calculated for the illustrated examples and presented in tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.
Percolation critical polynomial as a graph invariant
Scullard, Christian R.
2012-10-18
Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10⁻⁷.
On Certain Wronskians of Multiple Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Zhang, Lun; Filipuk, Galina
2014-11-01
We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.
Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay
Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. In conclusion, the surrogates are a way to obtain power and load estimation under site specific characteristics without sharing the proprietary aeroelastic design.
Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates
Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay; ...
2017-07-17
Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. In conclusion, the surrogates are a way to obtain power and load estimation under site specific characteristics without sharing the proprietary aeroelastic design.
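The sketch below illustrates the general idea of propagating input uncertainty through a cheap polynomial surrogate; the "simulator", input ranges and distributions are hypothetical stand-ins for the aeroelastic code and site conditions, not the DTU 10 MW setup.

```python
# Minimal sketch of uncertainty propagation through a polynomial surrogate,
# assuming hypothetical inputs (mean wind speed, turbulence intensity, shear
# exponent) and one output (e.g., an equivalent fatigue load). A cheap analytic
# function stands in for the aeroelastic simulator.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)

def simulator(u, ti, alpha):                       # stand-in for the aeroelastic code
    return 1.0e3 * u**1.8 * (1.0 + 5.0 * ti) * (1.0 + 0.3 * alpha) + rng.normal(0, 50)

def poly_features(X, degree=2):
    """Full polynomial basis (including cross terms) up to the given degree."""
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

# Training design and surrogate fit by least squares.
X_train = np.column_stack([rng.uniform(4, 25, 200),      # wind speed
                           rng.uniform(0.05, 0.25, 200), # turbulence intensity
                           rng.uniform(0.0, 0.3, 200)])  # shear exponent
y_train = np.array([simulator(*row) for row in X_train])
coef, *_ = np.linalg.lstsq(poly_features(X_train), y_train, rcond=None)

# Propagate site-specific input uncertainty through the cheap surrogate.
X_mc = np.column_stack([rng.weibull(2.0, 100_000) * 10.0,
                        rng.uniform(0.05, 0.25, 100_000),
                        rng.uniform(0.0, 0.3, 100_000)])
y_mc = poly_features(X_mc) @ coef
print(y_mc.mean(), np.percentile(y_mc, [5, 95]))
```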
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it has only been applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance based sensitivity analysis. The optimal number of training points is selected by using distribution adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach has been illustrated with two academic and two industrial problems. To illustrate the superior performance of the proposed approach, the results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required for obtaining results of similar accuracy.
New scaling model for variables and increments with heavy-tailed distributions
NASA Astrophysics Data System (ADS)
Riva, Monica; Neuman, Shlomo P.; Guadagnini, Alberto
2015-06-01
Many hydrological (as well as diverse earth, environmental, ecological, biological, physical, social, financial and other) variables, Y, exhibit frequency distributions that are difficult to reconcile with those of their spatial or temporal increments, ΔY. Whereas distributions of Y (or its logarithm) are at times slightly asymmetric with relatively mild peaks and tails, those of ΔY tend to be symmetric with peaks that grow sharper, and tails that become heavier, as the separation distance (lag) between pairs of Y values decreases. No statistical model known to us captures these behaviors of Y and ΔY in a unified and consistent manner. We propose a new, generalized sub-Gaussian model that does so. We derive analytical expressions for probability distribution functions (pdfs) of Y and ΔY as well as corresponding lead statistical moments. In our model the peak and tails of the ΔY pdf scale with lag in line with observed behavior. The model allows one to estimate, accurately and efficiently, all relevant parameters by analyzing jointly sample moments of Y and ΔY. We illustrate key features of our new model and method of inference on synthetically generated samples and neutron porosity data from a deep borehole.
Improvement of Reynolds-Stress and Triple-Product Lag Models
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Lillard, Randolph P.
2017-01-01
The Reynolds-stress and triple product Lag models were created with a normal stress distribution defined by a 4:3:2 ratio of streamwise, spanwise and wall-normal stresses, and a ratio of r_w = 0.3k in the log-layer region of high Reynolds number flat plate flow, which implies R11+ = 4/((9/2)·0.3) ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. A first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high Reynolds number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.
Lee, Yong Ju; Jung, Byeong Su; Kim, Kee-Tae; Paik, Hyun-Dong
2015-09-01
A predictive model was developed to describe the growth of Staphylococcus aureus in raw pork by using the Integrated Pathogen Modeling Program 2013 and a polynomial model as a secondary predictive model. S. aureus requires approximately 180 h to reach 5-6 log CFU/g at 10 °C. At 15 °C and 25 °C, approximately 48 and 20 h, respectively, are required to reach levels that can cause food poisoning. Predictions from the Gompertz model were the most accurate in this study. For the lag time (LT) model, the bias factor (Bf) and accuracy factor (Af) values were both 1.014, showing that the predictions were within a reliable range. For the specific growth rate (SGR) model, Bf and Af were 1.188 and 1.190, respectively. Additionally, both the Bf and Af values of the LT and SGR models were close to 1, indicating that the IPMP Gompertz model is more adequate for predicting the growth of S. aureus on raw pork than other models.
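A minimal sketch of fitting the modified Gompertz model (Zwietering form) and reading off the lag time and specific growth rate is shown below; the counts are illustrative values, not the paper's data, and scipy is assumed to be available.

```python
# Minimal sketch: fit the modified Gompertz growth model to hypothetical
# S. aureus counts with scipy.optimize.curve_fit and read off the lag time (LT)
# and specific growth rate (SGR). Data values below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, y0, A, mu, lam):
    """log10 counts: modified Gompertz (Zwietering form)."""
    return y0 + A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.array([0, 4, 8, 12, 16, 20, 24, 32, 40, 48])            # hours, e.g. at 15 degC
logN = np.array([3.0, 3.0, 3.1, 3.4, 4.0, 4.6, 5.2, 6.0, 6.4, 6.5])

popt, _ = curve_fit(gompertz, t, logN, p0=[3.0, 3.5, 0.2, 8.0], maxfev=10_000)
y0, A, mu, lam = popt
print(f"lag time = {lam:.1f} h, specific growth rate = {mu:.2f} log CFU/g per h")
```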
NASA Astrophysics Data System (ADS)
Ganguli, R.
2002-11-01
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub, which are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces constructed using the central composite design from the theory of design of experiments adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis and optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry.
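The sketch below builds a two-variable central composite design and fits a second-order polynomial response surface by least squares; the "hub load" function is a hypothetical stand-in for the aeroelastic analysis.

```python
# Minimal sketch: a central composite design in two coded design variables and a
# second-order (quadratic) response surface fitted by least squares. The hub-load
# evaluations are a hypothetical stand-in for the aeroelastic analysis.
import numpy as np

alpha = np.sqrt(2.0)
factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0.0, 0.0)]
design = np.array(factorial + axial + center * 3)          # coded stiffness variables

def hub_load(x1, x2):                                       # stand-in aeroelastic response
    return 5.0 + 0.8 * x1 - 1.2 * x2 + 0.5 * x1 * x2 + 0.9 * x1**2 + 0.4 * x2**2

y = np.array([hub_load(*pt) for pt in design])

# Second-order model: 1, x1, x2, x1*x2, x1^2, x2^2.
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # fitted quadratic surface; minimize it to pick an improved design
```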
NASA Astrophysics Data System (ADS)
Hounga, C.; Hounkonnou, M. N.; Ronveaux, A.
2006-10-01
In this paper, we give Laguerre-Freud equations for the recurrence coefficients of discrete semi-classical orthogonal polynomials of class two, when the polynomials in the Pearson equation are of the same degree. The case of generalized Charlier polynomials is also presented.
The Gibbs Phenomenon for Series of Orthogonal Polynomials
ERIC Educational Resources Information Center
Fay, T. H.; Kloppers, P. Hendrik
2006-01-01
This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…
Determinants with orthogonal polynomial entries
NASA Astrophysics Data System (ADS)
Ismail, Mourad E. H.
2005-06-01
We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which start with p_n in the top left-hand corner. As examples we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.
From sequences to polynomials and back, via operator orderings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu
2013-12-15
Bender and Dunne [“Polynomials and operator orderings,” J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^{n−k}, where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (1/2)(qp + pq). Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.
Using lagged dependence to identify (de)coupled surface and subsurface soil moisture values
NASA Astrophysics Data System (ADS)
Carranza, Coleen D. U.; van der Ploeg, Martine J.; Torfs, Paul J. J. F.
2018-04-01
Recent advances in radar remote sensing popularized the mapping of surface soil moisture at different spatial scales. Surface soil moisture measurements are used in combination with hydrological models to determine subsurface soil moisture values. However, variability of soil moisture across the soil column is important for estimating depth-integrated values, as decoupling between surface and subsurface can occur. In this study, we employ new methods to investigate the occurrence of (de)coupling between surface and subsurface soil moisture. Using time series datasets, lagged dependence was incorporated in assessing (de)coupling with the idea that surface soil moisture conditions will be reflected at the subsurface after a certain delay. The main approach involves the application of a distributed-lag nonlinear model (DLNM) to simultaneously represent both the functional relation and the lag structure in the time series. The results of an exploratory analysis using residuals from a fitted loess function serve as a posteriori information to determine (de)coupled values. Both methods allow for a range of (de)coupled soil moisture values to be quantified. Results provide new insights into the decoupled range as its occurrence among the sites investigated is not limited to dry conditions.
Urbanization and Income Inequality in Post-Reform China: A Causal Analysis Based on Time Series Data
Chen, Guo; Glasmeier, Amy K.; Zhang, Min; Shao, Yang
2016-01-01
This paper investigates the potential causal relationship(s) between China’s urbanization and income inequality since the start of the economic reform. Based on the economic theory of urbanization and income distribution, we analyze the annual time series of China’s urbanization rate and Gini index from 1978 to 2014. The results show that urbanization has an immediate alleviating effect on income inequality, as indicated by the negative relationship between the two time series in the same year (lag = 0). However, urbanization also seems to have a lagged aggravating effect on income inequality, as indicated by the positive relationship between urbanization and the Gini index series at lag 1. Although the link between urbanization and income inequality is not surprising, the lagged aggravating effect of urbanization on the Gini index challenges the popular belief that urbanization in post-reform China generally helps reduce income inequality. At deeper levels, our results suggest an urgent need to focus on the social dimension of urbanization as China transitions to the next stage of modernization. Comprehensive social reforms must be prioritized to avoid a long-term economic dichotomy and permanent social segregation. PMID:27433966
Chen, Guo; Glasmeier, Amy K; Zhang, Min; Shao, Yang
2016-01-01
This paper investigates the potential causal relationship(s) between China's urbanization and income inequality since the start of the economic reform. Based on the economic theory of urbanization and income distribution, we analyze the annual time series of China's urbanization rate and Gini index from 1978 to 2014. The results show that urbanization has an immediate alleviating effect on income inequality, as indicated by the negative relationship between the two time series in the same year (lag = 0). However, urbanization also seems to have a lagged aggravating effect on income inequality, as indicated by the positive relationship between urbanization and the Gini index series at lag 1. Although the link between urbanization and income inequality is not surprising, the lagged aggravating effect of urbanization on the Gini index challenges the popular belief that urbanization in post-reform China generally helps reduce income inequality. At deeper levels, our results suggest an urgent need to focus on the social dimension of urbanization as China transitions to the next stage of modernization. Comprehensive social reforms must be prioritized to avoid a long-term economic dichotomy and permanent social segregation.
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
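A minimal sketch of the block-wise Chebyshev fit/reconstruct cycle is given below; block length and polynomial degree are arbitrary choices, not the flight parameters, and the stream is assumed to divide evenly into blocks.

```python
# Minimal sketch of block-wise Chebyshev compression/decompression for a 1-D
# data stream. Block length and degree are arbitrary here, and the stream length
# is assumed to be a multiple of the block length.
import numpy as np
from numpy.polynomial import chebyshev

def compress(stream, block_len=64, degree=8):
    """Fit a degree-`degree` Chebyshev series to each block; keep only coefficients."""
    blocks = [stream[i:i + block_len] for i in range(0, len(stream), block_len)]
    coeffs = []
    for block in blocks:
        x = np.linspace(-1.0, 1.0, len(block))          # map block samples to [-1, 1]
        coeffs.append(chebyshev.chebfit(x, block, degree))
    return coeffs

def decompress(coeffs, block_len=64):
    out = []
    for c in coeffs:
        x = np.linspace(-1.0, 1.0, block_len)
        out.append(chebyshev.chebval(x, c))
    return np.concatenate(out)

t = np.linspace(0.0, 10.0, 640)
signal = np.sin(2.0 * t) + 0.05 * np.random.default_rng(3).normal(size=t.size)
coeffs = compress(signal)
rebuilt = decompress(coeffs)
print("compression ratio:", signal.size / sum(len(c) for c in coeffs))
print("max abs error:", np.max(np.abs(rebuilt - signal)))
```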
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is the demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution. To study its properties, the density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling
NASA Astrophysics Data System (ADS)
Dobronets, B. S.; Popova, O. A.
2018-05-01
Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To compute the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the proposed approach to data aggregation can be interpreted as the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using piecewise polynomial models. A suitable example of such an approach is the spline. The authors show that their approach to data aggregation allows reducing the level of data uncertainty and significantly increasing the efficiency of numerical calculations. To demonstrate the degree of correspondence of the proposed methods to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
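The sketch below illustrates one way to realize this idea: raw observations are aggregated into an empirical density and represented by a piecewise-polynomial (smoothing spline) model; bin count and smoothing level are arbitrary, and scipy is assumed to be available.

```python
# Minimal sketch: aggregate raw observations into a frequency distribution and
# represent the resulting density with a piecewise-polynomial (spline) model.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
raw = rng.lognormal(mean=1.0, sigma=0.4, size=5_000)      # hypothetical raw data

# Aggregation step: empirical density on bin centres.
counts, edges = np.histogram(raw, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# Piecewise cubic polynomial (smoothing spline) as the aggregate density model.
density = UnivariateSpline(centres, counts, k=3, s=0.01)

grid = np.linspace(raw.min(), raw.max(), 400)
print("approx. integral of fitted density:", np.trapz(density(grid), grid))
```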
Investigation of modification design of the fan stage in axial compressor
NASA Astrophysics Data System (ADS)
Zhou, Xun; Yan, Peigang; Han, Wanjin
2010-04-01
The S2 flow path design method for transonic compressors is used to design a one-stage fan to replace the originally designed blade cascade, which has two-stage transonic fan rotors. In the modification design, the camber line is parameterized by a quartic polynomial curve and the thickness distribution of the blade profile is controlled by the double-thrice polynomial. Therefore, the inlet flow has been pre-compressed and the location and intensity of the shock wave in the supersonic area have been controlled so that the new blade profiles have better aerodynamic performance. The computational results show that the new single-stage fan rotor increases the efficiency by two percent at the design condition and the total pressure ratio is slightly higher than that of the original design. At the same time, it also meets the mass flow rate and the geometrical size requirements of the modification design.
Bayesian median regression for temporal gene expression data
NASA Astrophysics Data System (ADS)
Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.
2007-09-01
Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T2-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.
Alkhaldy, Ibrahim
2017-04-01
The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of cases of dengue fever. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are most probably the best predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-week lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed the best performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the range of the meteorological parameters at certain break-points.
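A minimal sketch of a break-point Poisson GLM with AIC-based selection of the break-point is shown below; column names and the exact model specification are assumptions, not the study's code.

```python
# Minimal sketch: Poisson GLM with a single break-point in 1-week-lagged mean
# relative humidity, with the break-point chosen by AIC over a grid of
# candidates. Column names (cases, rh_mean, tmax) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("jeddah_weekly.csv")
df["rh_lag1"] = df["rh_mean"].shift(1)
df = df.dropna()

def breakpoint_aic(bp):
    rh = df["rh_lag1"]
    X = pd.DataFrame({
        "below": np.minimum(rh, bp),             # slope below the break-point
        "above": np.maximum(rh - bp, 0.0),       # extra contribution above it
        "tmax": df["tmax"],
    })
    fit = sm.GLM(df["cases"], sm.add_constant(X),
                 family=sm.families.Poisson()).fit()
    return fit.aic

candidates = np.linspace(df["rh_lag1"].quantile(0.1),
                         df["rh_lag1"].quantile(0.9), 30)
best_bp = min(candidates, key=breakpoint_aic)
print("selected break-point:", round(best_bp, 1), "AIC:", round(breakpoint_aic(best_bp), 1))
```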
Brely, Lucas; Bosia, Federico; Pugno, Nicola M
2018-06-20
Contact unit size reduction is a widely studied mechanism as a means to improve adhesion in natural fibrillar systems, such as those observed in beetles or geckos. However, these animals also display complex structural features in the way the contact is subdivided in a hierarchical manner. Here, we study the influence of hierarchical fibrillar architectures on the load distribution over the contact elements of the adhesive system, and the corresponding delamination behaviour. We present an analytical model to derive the load distribution in a fibrillar system loaded in shear, including hierarchical splitting of contacts, i.e. a "hierarchical shear-lag" model that generalizes the well-known shear-lag model used in mechanics. The influence on the detachment process is investigated introducing a numerical procedure that allows the derivation of the maximum delamination force as a function of the considered geometry, including statistical variability of local adhesive energy. Our study suggests that contact splitting generates improved adhesion only in the ideal case of extremely compliant contacts. In real cases, to produce efficient adhesive performance, contact splitting needs to be coupled with hierarchical architectures to counterbalance high load concentrations resulting from contact unit size reduction, generating multiple delamination fronts and helping to avoid detrimental non-uniform load distributions. We show that these results can be summarized in a generalized adhesion scaling scheme for hierarchical structures, proving the beneficial effect of multiple hierarchical levels. The model can thus be used to predict the adhesive performance of hierarchical adhesive structures, as well as the mechanical behaviour of composite materials with hierarchical reinforcements.
Extending Romanovski polynomials in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quesne, C.
2013-12-15
Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.
Polynomial solutions of the Monge-Ampère equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aminov, Yu A
2014-11-30
The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy − z_xy² = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
Solving the interval type-2 fuzzy polynomial equation using the ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-07-01
Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Several methods have been developed to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations. Here, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
NASA Astrophysics Data System (ADS)
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-07-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
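The sketch below implements a Chebyshev polynomial smoother for a symmetric positive definite system using the classical three-term semi-iteration, given assumed bounds on the targeted eigenvalue interval; it is a generic illustration, not the smoother of any particular multigrid package.

```python
# Minimal sketch of a Chebyshev polynomial smoother for a symmetric positive
# definite system A x = b, given bounds [lmin, lmax] on the eigenvalues targeted
# by the smoother (in multigrid practice, the upper part of the spectrum).
# One standard formulation of the Chebyshev semi-iteration recurrence is used.
import numpy as np

def chebyshev_smooth(A, b, x, lmin, lmax, iters=5):
    theta = 0.5 * (lmax + lmin)          # centre of the target interval
    delta = 0.5 * (lmax - lmin)          # half-width of the target interval
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# 1-D Poisson test matrix; smooth 5 steps targeting the upper part of the spectrum.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)
x0 = np.random.default_rng(5).normal(size=n)       # rough (oscillatory) initial error
lmax = 4.0                                          # upper bound for the 1-D Poisson stencil
x = chebyshev_smooth(A, b, x0.copy(), lmin=lmax / 4, lmax=lmax, iters=5)
print(np.linalg.norm(x), "<", np.linalg.norm(x0))   # high-frequency error is damped
```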
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
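The sketch below is not the original FORTRAN; it illustrates the G.C.D. idea in Python by dividing out gcd(p, p′) to remove repeated zeros and then running Newton's method on the square-free part.

```python
# Minimal sketch (not the original FORTRAN): strip repeated zeros by dividing a
# polynomial by gcd(p, p'), then apply Newton's method to the square-free part.
# Coefficients are ordered highest degree first, as in numpy.polyval.
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Euclidean algorithm on polynomial coefficient arrays."""
    a = np.trim_zeros(np.asarray(a, float), "f")
    b = np.trim_zeros(np.asarray(b, float), "f")
    while b.size > 0 and np.max(np.abs(b)) > tol:
        _, r = np.polydiv(a, b)
        a, b = b, np.trim_zeros(r, "f")
    return a / a[0]                      # make the gcd monic

def newton(p, x0, iters=50, tol=1e-12):
    dp = np.polyder(p)
    x = x0
    for _ in range(iters):
        step = np.polyval(p, x) / np.polyval(dp, x)
        x -= step
        if abs(step) < tol:
            break
    return x

p = np.poly1d([1.0, -1.0])**3 * np.poly1d([1.0, -4.0])   # (x-1)^3 (x-4): multiple zero at 1
g = poly_gcd(p.coeffs, np.polyder(p.coeffs))             # gcd = (x-1)^2
squarefree, _ = np.polydiv(p.coeffs, g)                  # (x-1)(x-4), simple zeros only
print(newton(squarefree, 0.5), newton(squarefree, 5.0))  # converges cleanly to 1 and 4
```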
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
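In the same spirit, a short numerical comparison of an interpolating polynomial with the Taylor polynomial of equal degree for e^x on [-1, 1]; the degree and node placement below are arbitrary choices for illustration.

```python
import numpy as np
from math import factorial

deg = 4
nodes = np.linspace(-1.0, 1.0, deg + 1)            # deg+1 points pin down a degree-deg interpolant
interp = np.polyfit(nodes, np.exp(nodes), deg)     # with exactly deg+1 points, polyfit interpolates

taylor = [1.0 / factorial(k) for k in range(deg, -1, -1)]   # degree-4 Taylor coefficients of e^x

xs = np.linspace(-1.0, 1.0, 2001)
err_interp = np.max(np.abs(np.exp(xs) - np.polyval(interp, xs)))
err_taylor = np.max(np.abs(np.exp(xs) - np.polyval(taylor, xs)))
print(f"max error on [-1, 1]: interpolant {err_interp:.2e}, Taylor {err_taylor:.2e}")
```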
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
A note on the zeros of Freud-Sobolev orthogonal polynomials
NASA Astrophysics Data System (ADS)
Moreno-Balcazar, Juan J.
2007-10-01
We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^{-x^4} on the real line are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^{-x^4}. Some numerical examples are shown.
Optimal Chebyshev polynomials on ellipses in the complex plane
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland
1989-01-01
The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.
NASA Astrophysics Data System (ADS)
Karakus, Dogan
2013-12-01
In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
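A compact sketch of the generalized-inverse step for a two-dimensional polynomial trend surface, with the degree chosen by comparing residuals across candidate degrees; the variable names, degree range, and synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np

def design_matrix(x, y, deg):
    """Columns x^i * y^j for all i + j <= deg (a bivariate polynomial basis)."""
    return np.column_stack([x**i * y**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

def fit_surface(x, y, z, deg):
    """Least-squares polynomial surface via the Moore-Penrose generalized inverse."""
    A = design_matrix(x, y, deg)
    coeffs = np.linalg.pinv(A) @ z
    return coeffs, A @ coeffs

# Synthetic "drill-hole" data: a smooth surface plus noise, at 83 scattered points.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 83), rng.uniform(0, 1, 83)
z = 3 + 2 * x - y + 0.5 * x * y + 0.05 * rng.standard_normal(83)

for deg in range(1, 6):
    _, zhat = fit_surface(x, y, z, deg)
    print(deg, round(float(np.sqrt(np.mean((z - zhat) ** 2))), 4))   # RMS residual per degree
```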
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
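As a hedged reference point for what the program computes (not the polynomial-transform decomposition itself), a two-dimensional cyclic convolution can also be obtained with 2D FFTs, which is convenient for cross-checking an FPT implementation on small matrices.

```python
import numpy as np

def cyclic_conv2d(a, b):
    """Two-dimensional cyclic (circular) convolution via the 2D FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

rng = np.random.default_rng(1)
a, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
c = cyclic_conv2d(a, b)

# Direct check of one output element against the defining double sum.
i, j = 3, 5
direct = sum(a[k, l] * b[(i - k) % 8, (j - l) % 8] for k in range(8) for l in range(8))
print(np.isclose(c[i, j], direct))
```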
An exact collisionless equilibrium for the Force-Free Harris Sheet with low plasma beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allanson, O., E-mail: oliver.allanson@st-andrews.ac.uk; Neukirch, T., E-mail: tn3@st-andrews.ac.uk; Wilson, F., E-mail: fw237@st-andrews.ac.uk
We present a first discussion and analysis of the physical properties of a new exact collisionless equilibrium for a one-dimensional nonlinear force-free magnetic field, namely, the force-free Harris sheet. The solution allows any value of the plasma beta, and crucially below unity, which previous nonlinear force-free collisionless equilibria could not. The distribution function involves infinite series of Hermite polynomials in the canonical momenta, of which the important mathematical properties of convergence and non-negativity have recently been proven. Plots of the distribution function are presented for the plasma beta modestly below unity, and we compare the shape of the distribution function in two of the velocity directions to a Maxwellian distribution.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least square fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
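A schematic numpy version of the tolerance-driven loop described above (standard polyfit rather than AKLSQF's orthogonal factorial polynomials and Stirling-number conversion): the degree is increased until the least-squares error meets the user's tolerance or a cap is hit.

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_deg=20):
    """Raise the polynomial degree until the RMS least-squares error drops below tol."""
    for deg in range(1, max_deg + 1):
        coeffs = np.polyfit(x, y, deg)
        err = float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))
        print(f"degree {deg}: rms error {err:.3e}")
        if err <= tol:
            break
    return coeffs, err

x = np.linspace(0.0, 1.0, 50)              # uniformly spaced data, as AKLSQF assumes
y = np.sin(2 * np.pi * x)
coeffs, err = fit_to_tolerance(x, y, tol=1e-4)
```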
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
Evaluation of a Game-Based Simulation During Distributed Exercises
2010-09-01
the management team guiding development of the software. The questionnaires have not been used enough to collect data sufficient for factor... capable of internationally distributed exercises without excessive time lags or technical problems, given that commercial games seem to manage while... established by RDECOM-STTC military liaison and managers. Engineering constraints combined to limit the number of participants and the possible roles that
Van Meter, Kimberly J.; Basu, Nandita B.
2015-01-01
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures. PMID:25985290
Phung, Dung; Talukder, Mohammad Radwanur Rahman; Rutherford, Shannon; Chu, Cordia
2016-10-01
To develop a prediction score scheme useful for prevention practitioners and authorities to implement dengue preparedness and controls in the Mekong Delta region (MDR). We applied a spatial scan statistic to identify high-risk dengue clusters in the MDR and used generalised linear-distributed lag models to examine climate-dengue associations using dengue case records and meteorological data from 2003 to 2013. The significant predictors were collapsed into categorical scales, and the β-coefficients of predictors were converted to prediction scores. The score scheme was validated for predicting dengue outbreaks using ROC analysis. The north-eastern MDR was identified as the high-risk cluster. A 1 °C increase in temperature at lag 1-4 and 5-8 weeks increased the dengue risk 11% (95% CI, 9-13) and 7% (95% CI, 6-8), respectively. A 1% rise in humidity increased dengue risk 0.9% (95% CI, 0.2-1.4) at lag 1-4 and 0.8% (95% CI, 0.2-1.4) at lag 5-8 weeks. Similarly, a 1-mm increase in rainfall increased dengue risk 0.1% (95% CI, 0.05-0.16) at lag 1-4 and 0.11% (95% CI, 0.07-0.16) at lag 5-8 weeks. The predicted scores performed with high accuracy in diagnosing the dengue outbreaks (96.3%). This study demonstrates the potential usefulness of a dengue prediction score scheme derived from complex statistical models for high-risk dengue clusters. We recommend a further study to examine the possibility of incorporating such a score scheme into the dengue early warning system in similar climate settings. © 2016 John Wiley & Sons Ltd.
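To make the score-building step concrete, the sketch below turns the lag-window relative risks quoted above into log-RR coefficients and then into integer prediction scores by scaling against the smallest coefficient; the lag-window summariser and the scaling rule are illustrative assumptions, not the study's exact scheme.

```python
import numpy as np

def lag_window_mean(series, lo, hi):
    """Mean of a weekly series over the window of lo..hi weeks before each time point."""
    out = np.full(len(series), np.nan)
    for t in range(hi, len(series)):
        out[t] = series[t - hi:t - lo + 1].mean()   # covers lags lo..hi
    return out

weekly_temp = 27 + 2 * np.sin(np.arange(104) * 2 * np.pi / 52)   # synthetic weekly temperature
temp_lag1_4_series = lag_window_mean(weekly_temp, 1, 4)          # covariate entering the model

# Log relative risks implied by the quoted estimates (RR per unit increase).
coefs = {
    "temp_lag1_4":  np.log(1.11),   # +11% per 1 deg C, lag 1-4 weeks
    "temp_lag5_8":  np.log(1.07),   # +7% per 1 deg C, lag 5-8 weeks
    "humid_lag1_4": np.log(1.009),  # +0.9% per 1% humidity, lag 1-4 weeks
    "rain_lag1_4":  np.log(1.001),  # +0.1% per 1 mm rainfall, lag 1-4 weeks
}

# Integer scores: rescale each beta by the smallest one and round.
base = min(abs(v) for v in coefs.values())
print({k: int(round(v / base)) for k, v in coefs.items()})
```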
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the lag time distributed among cells is skewed with a long time tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
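The two quantitative statements about the lag time can be written compactly as follows (a hedged transcription of the scalings stated above, with unspecified proportionality constants and with the exponential tail holding only for long starvation times):

```latex
T_{\mathrm{lag}} \;\propto\; \frac{\sqrt{T_{\mathrm{starv}}}}{\mu_{\max}},
\qquad
P\!\left(T_{\mathrm{lag}} > t\right) \;\sim\; e^{-t/\tau} \quad (t \text{ large, long starvation}).
```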
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos, Douglas V. Nance, Air Force Research Laboratory, reporting period 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
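A self-contained one-dimensional example of the kind of representation the report is about: a Hermite polynomial chaos expansion of Y = exp(X) with a standard normal germ, using probabilists' Hermite polynomials; the target function and truncation order are arbitrary choices.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Expand Y = f(X), X ~ N(0, 1), as sum_n c_n He_n(X) with c_n = E[f(X) He_n(X)] / n!.
f = np.exp
order, nquad = 8, 40
nodes, weights = He.hermegauss(nquad)        # Gauss quadrature for the weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)       # normalise to the standard normal density

coeffs = np.array([
    np.sum(weights * f(nodes) * He.hermeval(nodes, [0.0] * n + [1.0])) / factorial(n)
    for n in range(order + 1)
])

# PCE mean and variance versus the exact lognormal moments of exp(X).
pce_mean = coeffs[0]
pce_var = np.sum(coeffs[1:] ** 2 * np.array([factorial(n) for n in range(1, order + 1)]))
print(pce_mean, np.exp(0.5))                 # exact mean e^{1/2}
print(pce_var, np.exp(2) - np.exp(1))        # exact variance e^2 - e
```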
Degenerate r-Stirling Numbers and r-Bell Polynomials
NASA Astrophysics Data System (ADS)
Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.
2018-01-01
The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. In particular, we express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.
From Chebyshev to Bernstein: A Tour of Polynomials Small and Large
ERIC Educational Resources Information Center
Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin
2006-01-01
Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.
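The classical Chebyshev extremal property that the article builds on is easy to check numerically: under the usual normalization, the monic Chebyshev polynomial T_n(x)/2^{n-1} attains the smallest possible sup-norm, 2^{1-n}, among monic polynomials of degree n on [-1, 1]. A quick verification for n = 6:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 6
monic_cheb = C.Chebyshev.basis(n).convert(kind=np.polynomial.Polynomial)
monic_cheb = monic_cheb / monic_cheb.coef[-1]            # rescale so the leading coefficient is 1

xs = np.linspace(-1.0, 1.0, 20001)
print(np.max(np.abs(monic_cheb(xs))), 2.0 ** (1 - n))    # both equal 2^{1-n} = 0.03125
```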
Crooks, James Lewis; Cascio, Wayne E.; Percy, Madelyn S.; Reyes, Jeanette; Neas, Lucas M.; Hilborn, Elizabeth D.
2016-01-01
Background: The impact of dust storms on human health has been studied in the context of Asian, Saharan, Arabian, and Australian storms, but there has been no recent population-level epidemiological research on the dust storms in North America. The relevance of dust storms to public health is likely to increase as extreme weather events are predicted to become more frequent with anticipated changes in climate through the 21st century. Objectives: We examined the association between dust storms and county-level non-accidental mortality in the United States from 1993 through 2005. Methods: Dust storm incidence data, including date and approximate location, are taken from the U.S. National Weather Service storm database. County-level mortality data for the years 1993–2005 were acquired from the National Center for Health Statistics. Distributed lag conditional logistic regression models under a time-stratified case-crossover design were used to study the relationship between dust storms and daily mortality counts over the whole United States and in Arizona and California specifically. End points included total non-accidental mortality and three mortality subgroups (cardiovascular, respiratory, and other non-accidental). Results: We estimated that for the United States as a whole, total non-accidental mortality increased by 7.4% (95% CI: 1.6, 13.5; p = 0.011) and 6.7% (95% CI: 1.1, 12.6; p = 0.018) at 2- and 3-day lags, respectively, and by an average of 2.7% (95% CI: 0.4, 5.1; p = 0.023) over lags 0–5 compared with referent days. Significant associations with non-accidental mortality were estimated for California (lag 2 and 0–5 day) and Arizona (lag 3), for cardiovascular mortality in the United States (lag 2) and Arizona (lag 3), and for other non-accidental mortality in California (lags 1–3 and 0–5). Conclusions: Dust storms are associated with increases in lagged non-accidental and cardiovascular mortality. Citation: Crooks JL, Cascio WE, Percy MS, Reyes J, Neas LM, Hilborn ED. 2016. The association between dust storms and daily non-accidental mortality in the United States, 1993–2005. Environ Health Perspect 124:1735–1743; http://dx.doi.org/10.1289/EHP216 PMID:27128449
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Sendino, J. E.; del Olmo, M. A.
2010-12-23
We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterpart of the Jacobi, Laguerre and Hermite polynomials in the classical case.
INVASIVE SPECIES: PREDICTING GEOGRAPHIC DISTRIBUTIONS USING ECOLOGICAL NICHE MODELING
Present approaches to species invasions are reactive in nature. This scenario results in management that perpetually lags behind the most recent invasion and makes control much more difficult. In contrast, spatially explicit ecological niche modeling provides an effective solut...
Mean air temperature as a risk factor for stroke mortality in São Paulo, Brazil
NASA Astrophysics Data System (ADS)
Ikefuti, Priscilla V.; Barrozo, Ligia V.; Braga, Alfésio L. F.
2018-05-01
In Brazil, chronic diseases account for the largest percentage of all deaths among men and women. Among the cardiovascular diseases, stroke is the leading cause of death, accounting for 10% of all deaths. We evaluated associations between stroke and mean air temperature using recorded mortality data and meteorological station data from 2002 to 2011. A time series analysis was applied to 55,633 mortality cases. Ischemic and hemorrhagic strokes (IS and HS, respectively) were analysed separately to test for differences in impact between the two subtypes. Poisson regression with a distributed lag non-linear model was used and adjusted for seasonality, pollutants, humidity, and days of the week. HS mortality was associated with low mean temperatures for men (relative risk (RR) = 2.43; 95% CI, 1.12-5.28) and for women (RR = 1.39; 95% CI, 1.03-1.86). The RR of IS mortality was not significant using a 21-day lag window. Analyzing the lag response separately, we observed that the effect of temperature on stroke mortality is acute (higher risk at lags 0-5). However, for IS, higher mean temperatures were significant at lags of more than 15 days. Our findings show that mean air temperature is associated with stroke mortality in the city of São Paulo for men and women, and that IS and HS may have different triggers. Further studies are needed to evaluate physiologic differences between these two subtypes of stroke.
Mutlu, Ibrahim; Ozkan, Arif; Kisioglu, Yasin
2016-01-01
Background. In this study, the cut-out risk of the Dynamic Hip Screw (DHS) was investigated for nine different positions of the lag screw in two fracture types by using Finite Element Analysis (FEA). Methods. Two types of fractures (31-A1.1 and A2.1 in the AO classification) were generated in the femur model obtained from Computerized Tomography images. The DHS model was placed into the fractured femur model in nine different positions. Tip-Apex Distances were measured using SolidWorks. In the FEA, the force applied to the femoral head was determined according to the maximum value observed during walking. Results. The highest volume percentage exceeding the yield strength of trabecular bone was obtained in the posterior-inferior region in both fracture types. The best placement region for the lag screw was found to be the middle region for both fracture types. The Tip-Apex Distance results and the cut-out risk are compatible, except for the posterior-superior and superior regions of the 31-A2.1 fracture type. Conclusion. The position of the lag screw affects the risk of cut-out significantly. Also, the Tip-Apex Distance is a good predictor of cut-out risk. Overall, the results suggest that the density distribution of the trabecular bone is a more important factor for cut-out risk than the position of the lag screw. PMID:27995133
Liao, Duanping; Shaffer, Michele L.; He, Fan; Rodriguez-Colon, Sol; Wu, Rongling; Whitsel, Eric A.; Bixler, Edward O.; Cascio, Wayne E.
2011-01-01
The acute effects and the time course of fine particulate pollution (PM2.5) on atrial fibrillation/flutter (AF) predictors, including P-wave duration, PR interval duration, and P-wave complexity, were investigated in a community-dwelling sample of 106 nonsmokers. Individual-level 24-h beat-to-beat electrocardiogram (ECG) data were visually examined. After identifying and removing artifacts and arrhythmic beats, the 30-min averages of the AF predictors were calculated. A personal PM2.5 monitor was used to measure individual-level, real-time PM2.5 exposures during the same 24-h period, and corresponding 30-min average PM2.5 concentration were calculated. Under a linear mixed-effects modeling framework, distributed lag models were used to estimate regression coefficients (βs) associating PM2.5 with AF predictors. Most of the adverse effects on AF predictors occurred within 1.5–2 h after PM2.5 exposure. The multivariable adjusted βs per 10-µg/m3 rise in PM2.5 at lag 1 and lag 2 were significantly associated with P-wave complexity. PM2.5 exposure was also significantly associated with prolonged PR duration at lag 3 and lag 4. Higher PM2.5 was found to be associated with increases in P-wave complexity and PR duration. Maximal effects were observed within 2 h. These findings suggest that PM2.5 adversely affects AF predictors; thus, PM2.5 may be indicative of greater susceptibility to AF. PMID:21480044
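For readers less familiar with distributed lag regression, here is a generic polynomial (Almon) distributed lag sketch: the lag coefficients over L lags are constrained to a low-order polynomial in the lag index, so the regression involves only a handful of constructed covariates. This is a plain least-squares illustration on simulated data, not the linear mixed-effects specification of the study.

```python
import numpy as np

def almon_basis(n_lags, degree):
    """Rows l = 0..L, columns l^0..l^K; lag weights are beta_l = sum_k a_k l^k."""
    return np.vander(np.arange(n_lags + 1), degree + 1, increasing=True)

rng = np.random.default_rng(2)
T, L, K = 500, 6, 2
x = rng.standard_normal(T)
true_beta = 0.5 - 0.1 * np.arange(L + 1)                       # decaying true lag weights
y = sum(true_beta[l] * np.roll(x, l) for l in range(L + 1)) + 0.1 * rng.standard_normal(T)
y = y[L:]                                                      # drop points affected by wrap-around

lag_mat = np.column_stack([x[L - l:T - l] for l in range(L + 1)])   # x_t, x_{t-1}, ..., x_{t-L}
Z = lag_mat @ almon_basis(L, K)                                     # constructed Almon regressors
a_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = almon_basis(L, K) @ a_hat                                # recovered lag distribution
print(np.round(beta_hat, 3), true_beta)
```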
Ye, Xin; Pendyala, Ram M.; Zou, Yajie
2017-01-01
A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Argau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152
A Spectral Analysis of Discrete-Time Quantum Walks Related to the Birth and Death Chains
NASA Astrophysics Data System (ADS)
Ho, Choon-Lin; Ide, Yusuke; Konno, Norio; Segawa, Etsuo; Takumi, Kentaro
2018-04-01
In this paper, we consider a spectral analysis of discrete-time quantum walks on the path. For isospectral coin cases, we show that the time-averaged distribution and stationary distributions of the quantum walks are described by the pair of eigenvalues of the coins as well as the eigenvalues and eigenvectors of the corresponding random walks, which are usually referred to as birth and death chains. As an example of the results, we derive the time-averaged distribution of the so-called Szegedy walk, which is related to the Ehrenfest model. It is represented by Krawtchouk polynomials, which are the eigenvectors of the model, and includes the arcsine law.
Design and Use of a Learning Object for Finding Complex Polynomial Roots
ERIC Educational Resources Information Center
Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime
2013-01-01
Complex numbers are essential in many fields of engineering, but students often fail to have a natural insight of them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomial has a root and, furthermore, is useful for finding the approximate roots of a complex polynomial. Moreover, we…
Extending a Property of Cubic Polynomials to Higher-Degree Polynomials
ERIC Educational Resources Information Center
Miller, David A.; Moseley, James
2012-01-01
In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…
Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields
NASA Astrophysics Data System (ADS)
Milstead, Jonathan
The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. We primarily make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
On polynomial preconditioning for indefinite Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1989-01-01
The minimal residual method combined with polynomial preconditioning is studied for solving large linear systems Ax = b with indefinite Hermitian coefficient matrices A. The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
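As a much simpler, positive definite illustration of why one wants a polynomial preconditioner to cluster eigenvalues (and explicitly not the indefinite construction studied above), the sketch below applies a truncated Neumann-series polynomial s(A) to an SPD matrix and prints the spectrum of s(A)A, which is compressed toward 1.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(0.1, 2.0, 50)) @ Q.T          # SPD with eigenvalues in [0.1, 2]

# Neumann-series polynomial preconditioner s(A) = omega * sum_{k<m} (I - omega*A)^k,
# for which s(A) A = I - (I - omega*A)^m.
omega, m = 1.0 / 2.0, 8
M = np.eye(50) - omega * A
s_of_A = omega * sum(np.linalg.matrix_power(M, k) for k in range(m))

eigs = np.linalg.eigvalsh(0.5 * (s_of_A @ A + A @ s_of_A))  # symmetrise for eigvalsh
print(eigs.min(), eigs.max())                               # roughly [0.34, 1.0] versus [0.1, 2.0] for A
```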
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genest, Vincent X.; Vinet, Luc; Zhedanov, Alexei
The algebra H of the dual -1 Hahn polynomials is derived and shown to arise in the Clebsch-Gordan problem of sl_{-1}(2). The dual -1 Hahn polynomials are the bispectral polynomials of a discrete argument obtained from the q → -1 limit of the dual q-Hahn polynomials. The Hopf algebra sl_{-1}(2) has four generators, including an involution; it is also a q → -1 limit of the quantum algebra sl_q(2) and, furthermore, the dynamical algebra of the parabose oscillator. The algebra H, a two-parameter generalization of u(2) with an involution as additional generator, is first derived from the recurrence relation of the -1 Hahn polynomials. It is then shown that H can be realized in terms of the generators of two added sl_{-1}(2) algebras, so that the Clebsch-Gordan coefficients of sl_{-1}(2) are dual -1 Hahn polynomials. An irreducible representation of H involving five-diagonal matrices and connected to the difference equation of the dual -1 Hahn polynomials is constructed.
Interbasis expansions in the Zernike system
NASA Astrophysics Data System (ADS)
Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander
2017-10-01
The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by 3F2(···|1) hypergeometric polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by 4F3(···|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.
Shear-lag effect and its effect on the design of high-rise buildings
NASA Astrophysics Data System (ADS)
Thanh Dat, Bui; Traykov, Alexander; Traykova, Marina
2018-03-01
For super high-rise buildings, the analysis and selection of suitable structural solutions are very important. The structure has not only to carry the gravity loads (self-weight, live load, etc.), but also to resist lateral loads (wind and earthquake loads). As buildings become taller, the demand on different structural systems dramatically increases. The article considers the division of the structural systems of tall buildings into two main categories: interior structures, for which the major part of the lateral load resisting system is located within the interior of the building, and exterior structures, for which the major part of the lateral load resisting system is located at the building perimeter. The basic types of each of the main structural categories are described. In particular, framed tube structures, which belong to the second main category of exterior structures, seem to be very efficient; this type of structural system allows tall buildings to resist lateral loads. However, tube systems are affected by the shear lag effect: a nonlinear distribution of stresses across the sides of the section, which is commonly found in box girders under lateral loads. Based on a numerical example, some general conclusions about the influence of the shear lag effect on frequencies, periods, and the distribution and variation of the magnitude of the internal forces in the structure are presented.
Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2009-12-01
We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, with γ > 0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has a polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.
Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.
Haglund, J; Haiman, M; Loehr, N
2005-02-22
Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schutzenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.
Multi-indexed (q-)Racah polynomials
NASA Astrophysics Data System (ADS)
Odake, Satoru; Sasaki, Ryu
2012-09-01
As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in the Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
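A toy version of the offline/online surrogate workflow under simplifying assumptions: a single uniform parameter, a plain least-squares polynomial surrogate (rather than Christoffel-weighted sampling), a scalar synthetic forward map, and random-walk Metropolis for the posterior. Everything named below is illustrative.

```python
import numpy as np

def forward(theta):
    """Stand-in for an expensive forward model G(theta)."""
    return np.exp(2.0 * theta)

rng = np.random.default_rng(4)

# Offline phase: polynomial surrogate over the prior support [-1, 1] from a few forward solves.
train = rng.uniform(-1.0, 1.0, 30)
surrogate = np.polynomial.Polynomial.fit(train, forward(train), deg=8)

# Online phase: random-walk Metropolis, with the surrogate inside the Gaussian likelihood.
y_obs, sigma = forward(0.3) + 0.05, 0.1
def log_post(theta):
    if abs(theta) > 1.0:                       # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2

chain, theta, lp = [], 0.0, log_post(0.0)
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.mean(chain[1000:]), np.std(chain[1000:]))   # posterior roughly centred near 0.3
```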
Conformal Galilei algebras, symmetric polynomials and singular vectors
NASA Astrophysics Data System (ADS)
Křižka, Libor; Somberg, Petr
2018-01-01
We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d, ℂ) with d = 1 for any integer value ℓ ∈ ℕ. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.
Identities associated with Milne-Thomson type polynomials and special numbers.
Simsek, Yilmaz; Cakic, Nenad
2018-01-01
The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p-adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.
Performance of fuzzy approach in Malaysia short-term electricity load forecasting
NASA Astrophysics Data System (ADS)
Mansor, Rosnalini; Zulkifli, Malina; Yusof, Muhammad Mat; Ismail, Mohd Isfahani; Ismail, Suzilah; Yin, Yip Chee
2014-12-01
Many activities, such as those in the economic, education and manufacturing sectors, would be paralysed by a limited supply of electricity, while a surplus contributes to high operating costs. Therefore electricity load forecasting is important in order to avoid shortage or excess. Previous findings showed that festive celebrations have an effect on short-term electricity load forecasting. Being a multicultural country, Malaysia has many major festive celebrations such as Eidul Fitri, Chinese New Year and Deepavali, but they are moving holidays due to their non-fixed dates on the Gregorian calendar. This study emphasises the performance of a fuzzy approach in forecasting electricity load when considering the presence of moving holidays. An Autoregressive Distributed Lag model was estimated using simulated data by including a model simplification concept (manual or automatic), day types (weekdays or weekend), public holidays and lags of electricity load. The results indicated that day types, public holidays and several lags of electricity load were significant in the model. Overall, model simplification improves fuzzy performance due to fewer variables and rules.
Balaev, Mikhail
2014-07-01
The author examines how time delayed effects of economic development, education, and gender equality influence political democracy. Literature review shows inadequate understanding of lagged effects, which raises methodological and theoretical issues with the current quantitative studies of democracy. Using country-years as a unit of analysis, the author estimates a series of OLS PCSE models for each predictor with a systematic analysis of the distributions of the lagged effects. The second set of multiple OLS PCSE regressions are estimated including all three independent variables. The results show that economic development, education, and gender have three unique trajectories of the time-delayed effects: Economic development has long-term effects, education produces continuous effects regardless of the timing, and gender equality has the most prominent immediate and short term effects. The results call for the reassessment of model specifications and theoretical setups in the quantitative studies of democracy. Copyright © 2014 Elsevier Inc. All rights reserved.
Oscillatory shear rheology measurements and Newtonian modeling of insoluble monolayers
NASA Astrophysics Data System (ADS)
Rasheed, Fayaz; Raghunandan, Aditya; Hirsa, Amir H.; Lopez, Juan M.
2017-04-01
Circular systems are advantageous for interfacial studies since they do not suffer from end effects, but their hydrodynamics is more complicated because their flows are not unidirectional. Here, we analyze the shear rheology of a harmonically driven knife-edge viscometer through experiments and computations based on the Navier-Stokes equations with a Newtonian interface. The measured distribution of phase lag in the surface velocity relative to the knife-edge speed is found to have a good signal-to-noise ratio and provides robust comparisons to the computations. For monomolecular films of stearic acid, the surface shear viscosity deduced from the model was found to be the same whether the film is driven steadily or oscillated, over an order of magnitude range in driving frequencies and amplitudes. Results show that increasing either the amplitude or the forcing frequency steepens the phase lag next to the knife edge. In all cases, the phase lag is linearly proportional to the radial distance from the knife edge and scales with surface shear viscosity to the power −1/2.
Can the Air Pollution Index be used to communicate the health risks of air pollution?
Li, Li; Lin, Guo-Zhen; Liu, Hua-Zhang; Guo, Yuming; Ou, Chun-Quan; Chen, Ping-Yan
2015-10-01
The validity of using the Air Pollution Index (API) to assess health impacts of air pollution and potential modification by individual characteristics on air pollution effects remain uncertain. We applied distributed lag non-linear models (DLNMs) to assess associations of daily API, specific pollution indices for PM10, SO2, NO2 and the weighted combined API (APIw) with mortality during 2003-2011 in Guangzhou, China. An increase of 10 in API was associated with a 0.88% (95% confidence interval (CI): 0.50, 1.27%) increase of non-accidental mortality at lag 0-2 days. Harvesting effects appeared after 2 days' exposure. The effect estimate of API over lag 0-15 days was statistically significant and similar with those of pollutant-specific indices and APIw. Stronger associations between API and mortality were observed in the elderly, females and residents with low educational attainment. In conclusion, the API can be used to communicate health risks of air pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Chieh-Han; Yu, Hwa-Lung
2014-05-01
Dengue fever has been recognized as the most important widespread vector-borne infectious disease in recent decades. Over 40% of the world's population is at risk of dengue, and about 50-100 million people are infected worldwide annually. Previous studies have found that dengue fever is highly correlated with climate covariates. Thus, the potential effects of global climate change on dengue fever, in particular on the transmission of the disease, are a crucial epidemic concern. The present study investigated the nonlinearity of the time-delayed impact of climate on spatio-temporal variations of dengue fever in southern Taiwan during 1998 to 2011. A distributed lag nonlinear model (DLNM) is used to assess the nonlinear lagged effects of meteorology. The statistically significant meteorological factors are considered, including weekly minimum temperature and maximum 24-hour rainfall. The relative risk and the distribution of dengue fever are then predicted under various climate change scenarios. The results show that the relative risk is similar for different scenarios. In addition, the impact of rainfall on the incidence risk is higher than that of temperature. Moreover, the incidence risk is associated with the spatial population distribution. The results can serve as a practical reference for environmental regulators for epidemic prevention under climate change scenarios.
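A stripped-down cross-basis construction in the DLNM spirit, using plain polynomials in both the exposure and the lag dimension and a Poisson regression on simulated weekly data; the basis functions, lag length and data here are illustrative stand-ins for the richer spline bases used in such studies.

```python
import numpy as np
import statsmodels.api as sm

def cross_basis(x, max_lag, deg_var, deg_lag):
    """DLNM-style cross-basis: lagged exposure powers weighted by powers of the lag index."""
    T = len(x)
    lags = np.column_stack([x[max_lag - l:T - l] for l in range(max_lag + 1)])   # x_{t-l}
    lag_idx = np.arange(max_lag + 1)
    cols = [(lags ** p) @ (lag_idx ** q)
            for p in range(1, deg_var + 1)        # exposure basis: x, x^2, ...
            for q in range(deg_lag + 1)]          # lag basis: 1, l, l^2, ...
    return np.column_stack(cols)

rng = np.random.default_rng(5)
T, L = 600, 14
temp = 5 * np.sin(np.arange(T) * 2 * np.pi / 52) + rng.standard_normal(T)   # centred temperature
counts = rng.poisson(np.exp(1.5 + 0.02 * temp))[L:]                         # simulated weekly cases

X = sm.add_constant(cross_basis(temp, L, deg_var=2, deg_lag=2))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params.round(4))
```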
Derivation of GPS TEC and receiver bias for Langkawi station in Malaysia
NASA Astrophysics Data System (ADS)
Teh, W. L.; Chen, W. S.; Abdullah, M.
2017-05-01
This paper presents a polynomial-type TEC model to derive the total electron content (TEC) and receiver bias for the Langkawi (LGKW) station in Malaysia at geographic latitude 6.32° and longitude 99.85°. The model uses a polynomial function of the coordinates of the ionospheric piercing point to describe the TEC distribution in space. In the model, six polynomial coefficients and a receiver bias are unknown and can be solved for by the least squares method. A reasonable agreement is achieved for the derivation of TEC and receiver bias for the IENG station in Italy, as compared with those derived by the IGS analysis center, CODE. We process one year of LGKW data in 2010 and show the monthly receiver bias and the seasonal TEC variation. The monthly receiver bias varies between -48 and -24 TECu (10^16 electrons/m^2), with a mean value of -37 TECu. Large variations occur in the monthly receiver biases due to the low data coverage at high satellite elevation angles (60° < α ≤ 90°). A post-processing TEC approach is implemented, which resolves the wavy pattern of the monthly TEC baseline resulting from the large variation of the receiver bias. The seasonal TEC variation at LGKW exhibits a semi-annual variation, where the peak occurs during equinoctial months and the trough during summer and winter months.
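A schematic least-squares setup of the kind described: slant TEC observations are related to a vertical TEC surface through a mapping function, the surface is a low-order polynomial in the pierce-point coordinates, and the receiver bias enters as one additional unknown column. The mapping function, six-coefficient basis and synthetic data below are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
dlat, dlon = rng.uniform(-5, 5, n), rng.uniform(-8, 8, n)   # pierce-point offsets (degrees)
elev = np.deg2rad(rng.uniform(30, 90, n))
mf = 1.0 / np.sin(elev)                                     # simple slant-to-vertical mapping

true_bias = -37.0                                           # receiver bias (TECu)
vtec = 25 + 1.2 * dlat - 0.8 * dlon + 0.05 * dlat * dlon    # "true" vertical TEC surface
stec = mf * vtec + true_bias + 0.5 * rng.standard_normal(n) # observed slant TEC + bias + noise

# Model: STEC_i = mf_i * (c0 + c1*dlat + c2*dlon + c3*dlat^2 + c4*dlon^2 + c5*dlat*dlon) + bias
poly = np.column_stack([np.ones(n), dlat, dlon, dlat**2, dlon**2, dlat * dlon])
A = np.column_stack([mf[:, None] * poly, np.ones(n)])       # last column carries the receiver bias
sol, *_ = np.linalg.lstsq(A, stec, rcond=None)
print("estimated receiver bias (TECu):", round(float(sol[-1]), 2))
```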
Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro
2018-05-01
The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.
Approximating smooth functions using algebraic-trigonometric polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharapudinov, Idris I
2011-01-14
The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^{m} (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3
Parameter reduction in nonlinear state-space identification of hysteresis
NASA Astrophysics Data System (ADS)
Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan
2018-05-01
Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model up to about 50%, while maintaining a comparable output error level.
Elegant Ince-Gaussian beams in a quadratic-index medium
NASA Astrophysics Data System (ADS)
Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi
2011-09-01
Elegant Ince-Gaussian beams, which are exact solutions of the paraxial wave equation in a quadratic-index medium, are derived in elliptical coordinates. These beams are an alternative form of the standard Ince-Gaussian beams and display better symmetry between the Ince polynomials and the Gaussian function in mathematics. The transverse intensity distribution and the phase of the elegant Ince-Gaussian beams are discussed.
Covariant extension of the GPD overlap representation at low Fock states
Chouika, N.; Mezrag, C.; Moutarde, H.; ...
2017-12-26
Here, we present a novel approach to compute generalized parton distributions within the lightfront wave function overlap framework. We show how to systematically extend generalized parton distributions computed within the DGLAP region to the ERBL one, fulfilling at the same time both the polynomiality and positivity conditions. We exemplify our method using pion lightfront wave functions inspired by recent results of non-perturbative continuum techniques and algebraic nucleon lightfront wave functions. We also test the robustness of our algorithm on reggeized phenomenological parameterizations. This approach paves the way to a better understanding of the nucleon structure from non-perturbative techniques and to a unification of generalized parton distributions and transverse momentum dependent parton distribution functions phenomenology through lightfront wave functions.
Asumadu-Sarkodie, Samuel; Owusu, Phebe Asantewaa
2016-06-01
In this paper, the relationship between carbon dioxide and agriculture in Ghana was investigated by comparing a Vector Error Correction Model (VECM) and an Autoregressive Distributed Lag (ARDL) Model. Ten study variables spanning from 1961 to 2012 were employed from the Food and Agriculture Organization. Results from the study show that carbon dioxide emissions affect the percentage annual change of agricultural area, coarse grain production, cocoa bean production, fruit production, vegetable production, and the total livestock per hectare of agricultural area. The vector error correction model and the autoregressive distributed lag model show evidence of a causal relationship between carbon dioxide emissions and agriculture; however, the relationship weakens over time and may eventually die out. All the endogenous variables except total primary vegetable production lead to carbon dioxide emissions, which may be due to poor agricultural practices employed to meet the growing food demand in Ghana. The autoregressive distributed lag bounds test shows evidence of a long-run equilibrium relationship between the percentage annual change of agricultural area, cocoa bean production, total livestock per hectare of agricultural area, total pulses production, total primary vegetable production, and carbon dioxide emissions. It is important to end hunger and ensure that people have access to safe and nutritious food, especially the poor, orphans, pregnant women, and children under 5 years, in order to reduce maternal and infant mortality. Nevertheless, it is also important that the Government of Ghana institutes agricultural policies that focus on promoting sustainable agriculture using environmentally friendly agricultural practices. The study recommends an integration of climate change measures into Ghana's national strategies, policies and planning in order to strengthen the country's effort towards achieving a sustainable environment.
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
Huang, Jiao; Chen, Shi; Wu, Yang; Tong, Yeqing; Wang, Lei; Zhu, Min; Hu, Shuhua; Guan, Xuhua; Wei, Sheng
2018-01-31
Hand, foot and mouth disease (HFMD) is a substantial burden throughout Asia, but the reported effects of temperature patterns on HFMD risk are inconsistent. To quantify the effect of temperature on HFMD incidence, Wuhan was chosen as the study site because of its high temperature variability and high HFMD incidence. Daily series of HFMD counts and meteorological variables during 2010-2015 were obtained. Distributed lag non-linear models were applied to characterize the temperature-HFMD relationship and to assess its variability across different ages, genders, and types of child care. In total, 80,219 patients aged 0-5 years experienced HFMD in Wuhan during 2010-2015. The cumulative relative risk of HFMD increased linearly with temperature over 7 days (lag0-7), while it presented as an approximately inverted V-shape over 14 days (lag0-14). The cumulative relative risk at lag0-14 peaked at 26.4 °C with a value of 2.78 (95% CI: 2.08-3.72) compared with the 5th percentile temperature (1.7 °C). Subgroup analyses revealed that children attending daycare were more vulnerable to temperature variation than those cared for at home. This study suggests that public health actions should take into consideration local weather conditions and demographic characteristics.
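To make the idea of polynomially constrained lag effects concrete, here is a minimal Almon-style polynomial distributed-lag sketch on synthetic daily data; it is a simplification for illustration only, does not reproduce the cross-basis of a distributed lag non-linear model, and every series and parameter value is invented.

```python
# Simplified polynomial (Almon) distributed-lag sketch with synthetic daily data;
# the actual study used a distributed lag non-linear model (cross-basis), which
# this does not reproduce.
import numpy as np

rng = np.random.default_rng(1)
ndays, maxlag, degree = 400, 14, 3
temp = 15 + 10 * np.sin(np.arange(ndays) * 2 * np.pi / 365) + rng.normal(0, 2, ndays)
cases = rng.poisson(20, ndays).astype(float)   # hypothetical HFMD counts

# Lag matrix: column k holds temperature k days earlier
L = np.column_stack([np.roll(temp, k) for k in range(maxlag + 1)])[maxlag:]
y = np.log(cases[maxlag:] + 1)                 # crude log-link stand-in

# Almon constraint: lag weights are a degree-3 polynomial in the lag index
A = np.vander(np.arange(maxlag + 1), degree + 1, increasing=True)
Z = L @ A                                       # reduced design
X = np.column_stack([np.ones(len(y)), Z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

lag_weights = A @ beta[1:]                      # back-transform to per-lag effects
print("estimated lag weights (lag 0..14):", lag_weights.round(4))
```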
Xu, Y Zh; Métris, A; Stasinopoulos, D M; Forsythe, S J; Sutherland, J P
2015-02-01
The effect of heat stress and subsequent recovery temperature on the individual cellular lag of Cronobacter turicensis was analysed using optical density measurements. Low numbers of cells were obtained through serial dilution and the time to reach an optical density of 0.035 was determined. Assuming the lag of a single cell follows a shifted Gamma distribution with a fixed shape parameter, the effect of recovery temperature on the individual lag of untreated and sublethally heat-treated cells of Cr. turicensis was modelled. It was found that the shift parameter (Tshift) increased asymptotically as the temperature decreased, while the logarithm of the scale parameter (θ) decreased linearly with recovery temperature. To test the validity of the model in food, growth of low numbers of untreated and heat-treated Cr. turicensis in artificially contaminated infant first milk was measured experimentally and compared with predictions obtained by Monte Carlo simulations. Although the model for untreated cells slightly underestimated the actual growth in first milk at low temperatures, the model for heat-treated cells was in agreement with the data derived from the challenge tests and provides a basis for reliable quantitative microbiological risk assessments for Cronobacter spp. in infant milk. Copyright © 2014 Elsevier Ltd. All rights reserved.
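A hedged sketch of the shifted-Gamma lag assumption and a toy Monte Carlo growth draw is given below; the shape, scale, shift and growth-rate values are made-up placeholders, not the fitted parameters of the study.

```python
# Sketch of the individual-lag distribution and a toy Monte Carlo growth draw,
# assuming a shifted Gamma lag with fixed shape; parameter values are made up.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
shape = 4.0          # fixed shape parameter (assumed)
theta = 1.5          # scale parameter, hours (assumed; log(theta) ~ linear in temperature)
t_shift = 2.0        # shift parameter Tshift, hours (assumed; grows as temperature drops)

# Individual cellular lag times for 1000 cells: Tshift + Gamma(shape, scale=theta)
lags = gamma.rvs(shape, loc=t_shift, scale=theta, size=1000, random_state=rng)
print("mean lag (h):", lags.mean().round(2), " 95th pct:", np.percentile(lags, 95).round(2))

# Toy growth curve from a low inoculum: each cell grows exponentially after its own lag
mu = 0.4                                  # specific growth rate (1/h), assumed
t = np.linspace(0, 24, 200)
n_cells = np.array([np.sum(np.exp(mu * np.clip(ti - lags, 0, None))) for ti in t])
print("log10 population at 24 h:", np.log10(n_cells[-1]).round(2))
```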
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial terms together with their parameters in order to improve the ability of the polynomial derivative-term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Learning polynomial feedforward neural networks by genetic programming and backpropagation.
Nikolaev, N Y; Iba, H
2003-01-01
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjustment of the best discovered network weights by an especially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time series.
Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1992-01-01
Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.
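For readers who want to try QMR-type iterations numerically, the snippet below shows a hedged usage of SciPy's qmr and tfqmr solvers on a small non-symmetric test system (tfqmr assumes SciPy 1.8 or later); it is unrelated to the authors' implementation.

```python
# Hedged usage sketch: solving a small non-Hermitian sparse system with SciPy's
# qmr and tfqmr iterations; not the authors' code.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr, tfqmr

n = 200
rng = np.random.default_rng(3)
# Non-symmetric, diagonally dominant tridiagonal test matrix
A = sp.diags([-1.0 * np.ones(n - 1), 3.0 * np.ones(n), -2.0 * np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csr")
b = rng.normal(size=n)

x_qmr, info1 = qmr(A, b)
x_tf, info2 = tfqmr(A, b)
print("qmr converged:", info1 == 0, " tfqmr converged:", info2 == 0)
print("residual norms:", np.linalg.norm(b - A @ x_qmr), np.linalg.norm(b - A @ x_tf))
```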
NASA Astrophysics Data System (ADS)
Mironov, A.; Mkrtchyan, R.; Morozov, A.
2016-02-01
We present universal knot polynomials for 2- and 3-strand torus knots in the adjoint representation, obtained by universalization of the appropriate Rosso-Jones formula. According to universality, these polynomials coincide with the adjoint-colored HOMFLY and Kauffman polynomials on the SL and SO/Sp lines of Vogel's plane, respectively, and give their exceptional-group counterparts on the exceptional line. We demonstrate that the [m,n]=[n,m] topological invariance, when applicable, takes place on the entire Vogel's plane. We also suggest the universal form of the invariant of the figure-eight knot in the adjoint representation, and suggest the existence of such a universalization for any knot in the adjoint and its descendant representations. Properties of universal polynomials and applications of these results are discussed.
Zernike Basis to Cartesian Transformations
NASA Astrophysics Data System (ADS)
Mathar, R. J.
2009-12-01
The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
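As a small illustration of writing the radial polynomials as powers of the radial distance, the sketch below evaluates R_n^m(r) from the standard closed-form sum; it is not the paper's tabulation.

```python
# Sketch: the radial Zernike polynomial R_n^m(r) written out as powers of r,
# following the standard closed-form sum (illustration only).
from math import factorial
import numpy as np

def zernike_radial(n, m, r):
    """R_n^m(r) for integers n >= |m| >= 0 with n - |m| even."""
    m = abs(m)
    r = np.asarray(r, dtype=float)
    if (n - m) % 2:
        return np.zeros_like(r)
    out = np.zeros_like(r)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k)))
        out += c * r ** (n - 2 * k)
    return out

r = np.linspace(0, 1, 5)
print(zernike_radial(4, 0, r))    # R_4^0 = 6 r^4 - 6 r^2 + 1
print(zernike_radial(4, 0, 1.0))  # equals 1 at the rim, since R_n^m(1) = 1
```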
Chaos, Fractals, and Polynomials.
ERIC Educational Resources Information Center
Tylee, J. Louis; Tylee, Thomas B.
1996-01-01
Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
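A minimal sketch of the Newton-Raphson step that underlies such root-finding (and the basin-of-attraction pictures behind Newton fractals) is shown below for p(z) = z^3 - 1; the starting points are arbitrary.

```python
# Illustrative Newton-Raphson iteration on a polynomial, the root-finding step
# behind Newton-basin ("Newton fractal") pictures; a sketch, not the article's code.
import numpy as np

coeffs = [1, 0, 0, -1]                       # p(z) = z^3 - 1, roots: cube roots of unity
p = np.polynomial.Polynomial(coeffs[::-1])   # Polynomial wants ascending powers
dp = p.deriv()

def newton(z0, tol=1e-12, maxit=100):
    z = complex(z0)
    for _ in range(maxit):
        step = p(z) / dp(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Each starting point converges to one of the three roots; colouring the complex
# plane by which root is reached produces the familiar fractal basin boundaries.
for z0 in (1.5, -1 + 0.5j, -1 - 0.5j, 0.3 + 0.1j):
    print(z0, "->", np.round(newton(z0), 6))
```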
Universal Racah matrices and adjoint knot polynomials: Arborescent knots
NASA Astrophysics Data System (ADS)
Mironov, A.; Morozov, A.
2016-04-01
By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest to extend the universality from the dimensions to the Racah matrices and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel. Technically we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix, and for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory, however, universal polynomials in higher representations can probably be better in this respect.
Imaging characteristics of Zernike and annular polynomial aberrations.
Mahajan, Virendra N; Díaz, José Antonio
2013-04-01
The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
Bed conduction impact on fiber optic distributed temperature sensing water temperature measurements
NASA Astrophysics Data System (ADS)
O'Donnell Meininger, T.; Selker, J. S.
2015-02-01
Error in distributed temperature sensing (DTS) water temperature measurements may be introduced by contact of the fiber optic cable sensor with bed materials (e.g., seafloor, lakebed, streambed). Heat conduction from the bed materials can affect cable temperature and the resulting DTS measurements. In the Middle Fork John Day River, apparent water temperature measurements were influenced by cable sensor contact with aquatic vegetation and fine sediment bed materials. Affected cable segments measured a diurnal temperature range reduced by 10% and lagged by 20-40 min relative to that of ambient stream temperature. The diurnal temperature range deeper within the vegetation-sediment bed material was reduced 70% and lagged 240 min relative to ambient stream temperature. These site-specific results illustrate the potential magnitude of bed-conduction impacts with buried DTS measurements. Researchers who deploy DTS for water temperature monitoring should understand how the environment into which the cable is placed affects the range and phase of the temperature measurements.
Applications of polynomial optimization in financial risk investment
NASA Astrophysics Data System (ADS)
Zeng, Meilan; Fu, Hongwei
2017-09-01
Recently, polynomial optimization has found many important applications in optimization, financial economics, tensor eigenvalue problems, and other areas. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve the specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
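For orientation only, the sketch below solves a toy mean-variance problem with a proportional transaction-cost penalty directly with a general-purpose optimizer; it does not implement Lasserre's SDP hierarchy, and all returns, covariances and cost values are invented.

```python
# A minimal mean-variance example solved directly with scipy (not via Lasserre's
# SDP relaxations used in the paper); numbers are made up for illustration.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])                    # expected returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.09]])               # covariance matrix (assumed)
lam = 3.0                                            # risk-aversion weight
cost = 0.01                                          # proportional transaction cost
w_prev = np.array([1/3, 1/3, 1/3])                   # current holdings

def objective(w):
    # risk - return + linear transaction-cost penalty
    return lam * w @ Sigma @ w - mu @ w + cost * np.abs(w - w_prev).sum()

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
bounds = [(0.0, 1.0)] * 3                            # long-only
res = minimize(objective, w_prev, bounds=bounds, constraints=cons, method="SLSQP")
print("optimal weights:", res.x.round(3), " objective:", round(res.fun, 5))
```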
A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media
2010-08-01
applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC...represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic...of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that the global polynomial interpolation cannot resolve lo
A Set of Orthogonal Polynomials That Generalize the Racah Coefficients or 6 - j Symbols.
1978-03-01
Generalized Hypergeometric Functions, Cambridge Univ. Press, Cambridge, 1966. [11] D. Stanton, Some basic hypergeometric polynomials arising from... Some basic hypergeometric analogues of the classical orthogonal polynomials and applications, to appear. [3] C. de Boor and G. H. Golub, The...Report #1833 A SET OF ORTHOGONAL POLYNOMIALS THAT GENERALIZE THE RACAH COEFFICIENTS OR 6-j SYMBOLS, Richard Askey and James Wilson
Tutte polynomial in functional magnetic resonance imaging
NASA Astrophysics Data System (ADS)
García-Castillón, Marlly V.
2015-09-01
Methods of graph theory are applied to the processing of functional magnetic resonance images. Specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial for a given graph is #P-hard even for planar graphs. For the practical application the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural networks is computed and some numerical invariants for these networks are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize the networks obtained from functional magnetic resonance imaging.
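A small self-contained sketch of computing a Tutte polynomial is given below, using the Whitney rank expansion in Python/SymPy rather than the Maple packages named in the abstract; the toy "frontal/posterior" edge list is hypothetical, and the subset enumeration is exponential, so it only suits tiny graphs.

```python
# Tutte polynomial of a small connectivity graph via the Whitney rank expansion
# T(G;x,y) = sum over A subset of E of (x-1)^(r(E)-r(A)) * (y-1)^(|A|-r(A)),
# with rank r(A) = n - c(A); exponential-time, so only for tiny networks.
from itertools import combinations
import sympy as sp

def components(nodes, edges):
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in nodes})

def tutte(nodes, edges):
    x, y = sp.symbols("x y")
    n = len(nodes)
    r_E = n - components(nodes, edges)
    T = sp.Integer(0)
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            r_A = n - components(nodes, list(A))
            T += (x - 1) ** (r_E - r_A) * (y - 1) ** (k - r_A)
    return sp.expand(T)

# Toy 4-node "frontal/posterior" network (hypothetical edges)
nodes = ["F1", "F2", "P1", "P2"]
edges = [("F1", "F2"), ("P1", "P2"), ("F1", "P1"), ("F2", "P2"), ("F1", "P2")]
print(tutte(nodes, edges))
```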
NASA Astrophysics Data System (ADS)
Doha, E. H.
2002-02-01
An analytical formula expressing the ultraspherical coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that expresses explicitly the integrals of ultraspherical polynomials of any degree, integrated an arbitrary number of times, in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.
A Protocol for Aging Anurans Using Skeletochronology
McCreary, Brome; Pearl, Christopher A.; Adams, Michael J.
2008-01-01
Age distribution information can be an important part of understanding the biology of any population. Age estimates collected from the annual growth rings found in tooth and bone cross sections, often referred to as Lines of Arrested Growth (LAGs), have been used in the study of various animals. In this manual, we describe in detail all necessary steps required to obtain estimates of age from anuran bone cross sections via skeletochronological assessment. We include comprehensive descriptions of how to fix and decalcify toe specimens (phalanges), process a phalange prior to embedding, embed the phalange in paraffin, section the phalange using a microtome, stain and mount the cross sections of the phalange and read the LAGs to obtain age estimates.
Forecasting vegetation greenness with satellite and climate data
Ji, Lei; Peters, Albert J.
2004-01-01
A new and unique vegetation greenness forecast (VGF) model was designed to predict future vegetation conditions up to three months ahead through the use of current and historical climate data and satellite imagery. The VGF model is implemented through a seasonality-adjusted autoregressive distributed-lag function, based on our finding that the normalized difference vegetation index is highly correlated with lagged precipitation and temperature. Accurate forecasts were obtained from the VGF model in Nebraska grassland and cropland. The regression R² values range from 0.97 to 0.80 for 2- to 12-week forecasts, with higher R² associated with shorter prediction horizons. An important application would be to produce real-time forecasts of greenness images.
Effective record length for the T-year event
Tasker, Gary D.
1983-01-01
The effect of serial dependence on the reliability of an estimate of the T-yr. event is of importance in hydrology because design decisions are based upon the estimate. In this paper the reliability of estimates of the T-yr. event from two common distributions is given as a function of number of observations and lag-one serial correlation coefficient for T = 2, 10, 20, 50, and 100 yr. A lag-one autoregressive model is assumed with either a normal or Pearson Type-III disturbance term. Results indicate that, if observations are serially correlated, the effective record length should be used to estimate the discharge associated with the expected exceedance probability. © 1983.
NASA Technical Reports Server (NTRS)
Moore, C. S.; Collins, J. H. Jr
1932-01-01
The clearance distribution in a precombustion chamber cylinder head was varied so that for a constant compression ratio of 13.5 the spherical auxiliary chambers contained 20, 35, 50, and 70 per cent of the total clearance volume. Each chamber was connected to the cylinder by a single circular passage, flared at both ends, and of a cross-sectional area proportional to the chamber volume, thereby giving the same calculated air-flow velocity through each passage. Results of engine-performance tests are presented with variations of power, fuel consumption, explosion pressure, rate of pressure rise, ignition lag, heat loss to the cooling water, and motoring characteristics. For good performance the minimum auxiliary chamber volume, with the cylinder head design used, was 35 per cent of the total clearance volume; for larger volumes the performance improves but slightly. With the auxiliary chamber that contained 35 percent of the clearance volume there were obtained the lowest explosion pressures, medium rates of pressure rise, and slightly less than the maximum power. For all clearance distributions an increase in engine speed decreased the ignition lag in seconds and increased the rate of pressure rise.
NASA Astrophysics Data System (ADS)
Recchioni, Maria Cristina
2001-12-01
This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients to a particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations which is associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of the roots of these nonlinear equations was studied in (Pasquini, 1994). In this paper, following the lines in (Pasquini, 1994), more favourable results than the ones in (Pasquini, 1994) are proven in the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices even if these matrices are real and symmetric.
Information theoretical approach to discovering solar wind drivers of the outer radiation belt
Wing, Simon; Johnson, Jay R.; Camporeale, Enrico; ...
2016-07-29
The solar wind-magnetosphere system is nonlinear. The solar wind drivers of geosynchronous electrons with energy range of 1.8–3.5 MeV are investigated using mutual information, conditional mutual information (CMI), and transfer entropy (TE). These information theoretical tools can establish linear and nonlinear relationships as well as information transfer. The information transfer from solar wind velocity (Vsw) to geosynchronous MeV electron flux (Je) peaks with a lag time of 2 days. As previously reported, Je is anticorrelated with solar wind density (nsw) with a lag of 1 day. However, this lag time and anticorrelation can be attributed at least partly to the Je(t + 2 days) correlation with Vsw(t) and nsw(t + 1 day) anticorrelation with Vsw(t). Analyses of solar wind driving of the magnetosphere need to consider the large lag times, up to 3 days, in the (Vsw, nsw) anticorrelation. Using CMI to remove the effects of Vsw, the response of Je to nsw is 30% smaller and has a lag time < 24 h, suggesting that the MeV electron loss mechanism due to nsw or solar wind dynamic pressure has to start operating in < 24 h. nsw transfers about 36% as much information as Vsw (the primary driver) to Je. Nonstationarity in the system dynamics is investigated using windowed TE. Here, when the data are ordered according to transfer entropy value, it is possible to understand details of the triangle distribution that has been identified between Je(t + 2 days) versus Vsw(t).
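As a simplified stand-in for the information-theoretic lag scan described above, the sketch below computes a histogram-based lagged mutual information on synthetic series; the series, lag structure and bin count are invented, and no conditioning or transfer entropy is performed.

```python
# Sketch of a lagged dependence scan with a histogram mutual-information estimate;
# a simplified stand-in for mutual information / transfer entropy analysis,
# on synthetic daily series.
import numpy as np

def mutual_info(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(4)
ndays = 2000
vsw = 400 + 50 * rng.standard_normal(ndays)            # hypothetical solar wind speed
je = np.roll(vsw, 2) + 20 * rng.standard_normal(ndays) # flux responding ~2 days later

for lag in range(6):
    mi = mutual_info(vsw[:ndays - lag], je[lag:])
    print(f"lag {lag} d: MI = {mi:.3f} nats")           # expect a peak near lag 2
```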
Haze is an important medium for the spread of rotavirus.
Ye, Qing; Fu, Jun-Feng; Mao, Jian-Hua; Shen, Hong-Qiang; Chen, Xue-Jun; Shao, Wen-Xia; Shang, Shi-Qiang; Wu, Yi-Feng
2016-09-01
This study investigated whether the rotavirus infection rate in children is associated with temperature and air pollutants in Hangzhou, China. This study applied a distributed lag non-linear model (DLNM) to assess the effects of daily meteorological data and air pollutants on the rotavirus positive rate among outpatient children. There was a negative correlation between temperature and the rotavirus infection rate. The impact of temperature on the detection rate of rotavirus presented an evident lag effect; temperature change had its greatest impact on the detection rate of rotavirus at a lag of approximately one day, and the maximum relative risk (RR) was approximately 1.3. In 2015, the maximum cumulative RR due to the cumulative effect caused by the temperature drop was 2.5. Particulate matter (PM) 2.5 and PM10 were the primary air pollutants in Hangzhou. The highest RR of rotavirus infection occurred at a lag of 1-1.5 days after the increase in the concentration of these pollutants, and the RR increased gradually with the increase in concentration. Based on the average concentrations of PM2.5 of 53.9 μg/m³ and PM10 of 80.6 μg/m³ in Hangzhou in 2015, the cumulative RR caused by the cumulative effect was 2.5 and 2.2, respectively. The current study suggests that temperature is an important factor impacting the rotavirus infection rate of children in Hangzhou. Air pollutants significantly increased the risk of rotavirus infection, and dose, lag and cumulative effects were observed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Quality control mechanisms exclude incorrect polymerases from the eukaryotic replication fork
Schauer, Grant D.; O’Donnell, Michael E.
2017-01-01
The eukaryotic genome is primarily replicated by two DNA polymerases, Pol ε and Pol δ, that function on the leading and lagging strands, respectively. Previous studies have established recruitment mechanisms whereby Cdc45-Mcm2-7-GINS (CMG) helicase binds Pol ε and tethers it to the leading strand, and PCNA (proliferating cell nuclear antigen) binds tightly to Pol δ and recruits it to the lagging strand. The current report identifies quality control mechanisms that exclude the improper polymerase from a particular strand. We find that the replication factor C (RFC) clamp loader specifically inhibits Pol ε on the lagging strand, and CMG protects Pol ε against RFC inhibition on the leading strand. Previous studies show that Pol δ is slow and distributive with CMG on the leading strand. However, Saccharomyces cerevisiae Pol δ–PCNA is a rapid and processive enzyme, suggesting that CMG may bind and alter Pol δ activity or position it on the lagging strand. Measurements of polymerase binding to CMG demonstrate Pol ε binds CMG with a Kd value of 12 nM, but Pol δ binding CMG is undetectable. Pol δ, like bacterial replicases, undergoes collision release upon completing replication, and we propose Pol δ–PCNA collides with the slower CMG, and in the absence of a stabilizing Pol δ–CMG interaction, the collision release process is triggered, ejecting Pol δ on the leading strand. Hence, by eviction of incorrect polymerases at the fork, the clamp machinery directs quality control on the lagging strand and CMG enforces quality control on the leading strand. PMID:28069954
Epps, Clinton W; Keyghobadi, Nusha
2015-12-01
Landscape genetics seeks to determine the effect of landscape features on gene flow and genetic structure. Often, such analyses are intended to inform conservation and management. However, depending on the many factors that influence the time to reach equilibrium, genetic structure may more strongly represent past rather than contemporary landscapes. This well-known lag between current demographic processes and population genetic structure often makes it challenging to interpret how contemporary landscapes and anthropogenic activity shape gene flow. Here, we review the theoretical framework for factors that influence time lags, summarize approaches to address this temporal disconnect in landscape genetic studies, and evaluate ways to make inferences about landscape change and its effects on species using genetic data alone or in combination with other data. Those approaches include comparing correlation of genetic structure with historical versus contemporary landscapes, using molecular markers with different rates of evolution, contrasting metrics of genetic structure and gene flow that reflect population genetic processes operating at different temporal scales, comparing historical and contemporary samples, combining genetic data with contemporary estimates of species distribution or movement, and controlling for phylogeographic history. We recommend using simulated data sets to explore time lags in genetic structure, and argue that time lags should be explicitly considered both when designing and interpreting landscape genetic studies. We conclude that the time lag problem can be exploited to strengthen inferences about recent landscape changes and to establish conservation baselines, particularly when genetic data are combined with other data. © 2015 John Wiley & Sons Ltd.
On a Family of Multivariate Modified Humbert Polynomials
Aktaş, Rabia; Erkuş-Duman, Esra
2013-01-01
This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lue Xing; Sun Kun; Wang Pan
In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, Boiti-Leon-Manna-Pempinelli model, and (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.
An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.
1989-01-01
An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
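The sketch below shows the sequential ingredient such methods build on: counting eigenvalues below a shift with the Sturm-type recurrence for the tridiagonal characteristic polynomial, then bisecting to isolate each eigenvalue. It is not the parallel O(log^2 N) construction itself.

```python
# Serial sketch: spectrum slicing for a symmetric tridiagonal matrix via the
# Sturm-type recurrence, then bisection to isolate one eigenvalue per interval.
import numpy as np

def count_below(d, e, x):
    """Number of eigenvalues of tridiag(d, e) strictly less than x."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300          # avoid division by zero at exact eigenvalues
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    bound = max(abs(e), default=0.0)
    lo = min(d) - 2 * bound - 1.0       # crude Gershgorin-style bounds
    hi = max(d) + 2 * bound + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k + 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
e = np.array([-1.0, -1.0, -1.0, -1.0])
approx = [kth_eigenvalue(d, e, k) for k in range(5)]
print(np.round(approx, 8))
print(np.round(np.linalg.eigvalsh(np.diag(d) + np.diag(e, 1) + np.diag(e, -1)), 8))
```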
NASA Astrophysics Data System (ADS)
Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.
2017-02-01
Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDF) taking into account both elastic collisions and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with experimental data for the ion mobilities and the ion transverse diffusion coefficients in argon and helium. The recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate simulation results for IVDF. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
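To illustrate the idea of fitting with polynomials that are orthonormal over the sample grid, the sketch below orthonormalizes a Vandermonde basis by QR instead of using the explicit discrete Tchebycheff recursion; the data are synthetic.

```python
# Sketch of the idea: build polynomials orthonormal over the uniformly spaced
# sample points (here via QR of a Vandermonde matrix rather than the explicit
# discrete Tchebycheff recursion) and fit by simple inner products.
import numpy as np

rng = np.random.default_rng(5)
n, degree = 101, 6
t = np.linspace(-1, 1, n)                       # uniformly spaced abscissae
y = np.sin(3 * t) + 0.05 * rng.standard_normal(n)

V = np.vander(t, degree + 1, increasing=True)   # 1, t, t^2, ..., t^degree
Q, R = np.linalg.qr(V)                          # Q columns: orthonormal over the grid

coeffs = Q.T @ y                                # least-squares coefficients, no normal equations
fit = Q @ coeffs
print("RMS residual:", np.sqrt(np.mean((y - fit) ** 2)).round(4))

# Raising the degree only appends a coefficient; the earlier ones are unchanged,
# which is the practical appeal of an orthonormal (Tchebycheff-like) basis.
```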
Bi, Yan; Yu, Weiwei; Hu, Wenbiao; Lin, Hualiang; Guo, Yuming; Zhou, Xiao-Nong; Tong, Shilu
2013-12-17
Malaria remains a public health problem in the remote and poor area of Yunnan Province, China. Yunnan faces an increasing risk of imported malaria infections from Mekong river neighboring countries. This study aimed to identify the high-risk area of malaria transmission in Yunnan Province, and to estimate the effects of climatic variability on the transmission of Plasmodium vivax and Plasmodium falciparum in the identified area. We identified spatial clusters of malaria cases using spatial cluster analysis at a county level in Yunnan Province, 2005-2010, and estimated the weekly effects of climatic factors on P. vivax and P. falciparum based on a dataset of daily malaria cases and climatic variables. A distributed lag nonlinear model was used to estimate the impact of temperature, relative humidity and rainfall up to 10-week lags on both types of malaria parasite after adjusting for seasonal and long-term effects. The primary cluster area was identified along the China-Myanmar border in western Yunnan. A 1°C increase in minimum temperature was associated with an increased relative risk (RR) at lags of 4 to 9 weeks, with the highest effect at a lag of 7 weeks for P. vivax (RR = 1.03; 95% CI, 1.01, 1.05) and 6 weeks for P. falciparum (RR = 1.07; 95% CI, 1.04, 1.11); a 10-mm increment in rainfall was associated with increased RRs at lags of 2-4 weeks and 9-10 weeks, with the highest effect at 3 weeks for both P. vivax (RR = 1.03; 95% CI, 1.01, 1.04) and P. falciparum (RR = 1.04; 95% CI, 1.01, 1.06); and the RRs with a 10% rise in relative humidity were significant from lag 3 to 8 weeks, with the highest RR of 1.24 (95% CI, 1.10, 1.41) for P. vivax at a 5-week lag. Our findings suggest that the China-Myanmar border is a high-risk area for malaria transmission. Climatic factors appeared to be among the major determinants of malaria transmission in this area. The estimated lag effects for the association between temperature and malaria are consistent with the life cycles of both the mosquito vector and the malaria parasite. These findings will be useful for malaria surveillance-response systems in the Mekong river region.
Combined sewer systems collect rainwater runoff, sewage, and industrial wastewater for transit to treatment facilities. With heavy precipitation, volumes can exceed capacity of treatment facilities, and wastewater discharges directly to receiving waters. These combined sewer over...
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
NASA Astrophysics Data System (ADS)
Sekhar, S. Chandra; Sreenivas, TV
2004-12-01
We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
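A toy zero-crossing IF estimate on a synthetic linear chirp is sketched below; it uses a fixed analysis rule and does not implement the adaptive, intersection-of-confidence-intervals window selection proposed in the paper.

```python
# Toy zero-crossing instantaneous-frequency estimate on a synthetic linear chirp;
# the adaptive window selection of the paper is not reproduced here.
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
f_inst = 400 + 300 * t                      # true linear-chirp IF (Hz)
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.cos(phase)

# Zero-crossing instants by linear interpolation between sign changes
s = np.signbit(x)
idx = np.where(s[:-1] != s[1:])[0]
tz = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

# Local IF: half a period elapses between consecutive zero crossings
if_est = 0.5 / np.diff(tz)
t_mid = 0.5 * (tz[1:] + tz[:-1])
err = if_est - (400 + 300 * t_mid)
print("mean |IF error| (Hz):", np.mean(np.abs(err)).round(2))
```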
Moisture content measurements of moss (Sphagnum spp.) using commercial sensors
Yoshikawa, K.; Overduin, P.P.; Harden, J.W.
2004-01-01
Sphagnum (spp.) is widely distributed in permafrost regions around the arctic and subarctic. The moisture content of the moss layer affects the thermal insulative capacity and preservation of permafrost. It also controls the growth and collapse history of palsas and other peat mounds, and is relevant, in general terms, to permafrost thaw (thermokarst). In this study, we test and calibrate seven different soil moisture sensors for measuring the moisture content of Sphagnum moss under laboratory conditions. The soil volume to which each probe is sensitive is one of the important parameters influencing moisture measurement, particularly in a heterogeneous medium such as moss. Each sensor has a unique response to changing moisture content levels, solution salinity, moss bulk density and to the orientation (structure) of the Sphagnum relative to the sensor. All of the probes examined here require unique polynomial calibration equations to obtain moisture content from probe output. We provide polynomial equations for dead and live Sphagnum moss (R² > 0.99). Copyright © 2004 John Wiley & Sons, Ltd.
Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Kara, Imdat; Derya, Tusan
2011-09-01
The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well-known Traveling Salesman Problem (TSP). DCVRP arises in distribution and logistics problems. It would be beneficial to construct new formulations, which is the main motivation and contribution of this paper. We focus on two-index integer programming formulations for DCVRP. One node-based and one arc (flow)-based formulation for DCVRP are presented. Both formulations have O(n²) binary variables and O(n²) constraints, i.e., the number of decision variables and constraints grows as a polynomial function of the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (this refers to Water's formulation in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations by using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of linear programming relaxation.
Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seshadhri, Comandur; Saxena, Nitin
Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x₁, x₂, ..., xₙ) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.
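For contrast with the deterministic algorithm described above, here is the classical randomized blackbox test (Schwartz-Zippel) that the derandomization line of work seeks to replace; the prime and the example blackboxes are arbitrary.

```python
# Classical randomized blackbox identity test (Schwartz-Zippel): a nonzero
# polynomial of total degree d vanishes at a uniformly random point of F_p^n
# with probability at most d/p. This is the baseline, not the paper's algorithm.
import random

P = 2_147_483_647          # a large prime, so we work over F_p
random.seed(0)

def is_probably_zero(blackbox, nvars, degree, trials=20):
    assert degree < P       # failure probability <= (degree / P) ** trials
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(nvars)]
        if blackbox(point) % P != 0:
            return False                      # certainly nonzero
    return True                               # zero with high probability

# Example blackboxes: one identically zero expression evaluated without
# symbolic expansion, and one that is not identically zero.
bb_zero = lambda v: ((v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)) % P
bb_nonzero = lambda v: (v[0] * v[1] + 1) % P

print(is_probably_zero(bb_zero, nvars=2, degree=2))     # True
print(is_probably_zero(bb_nonzero, nvars=2, degree=2))  # False (w.h.p.)
```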
Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror
NASA Astrophysics Data System (ADS)
Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu
2017-02-01
For the 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The correction of optical aberrations by this system is simulated, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's ability to correct wave aberrations described by Zernike polynomials 3-20 is analyzed under different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the correction ability for Zernike polynomial aberrations 3-9 is higher than that for aberrations 10-20, and that this ranking of correction ability across Zernike polynomial aberrations 3-20 does not change as the misalignment error changes. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for Zernike polynomial aberrations 3-20 gradually decreases; as the translation error increases, the correction ability for aberrations 3-9 gradually decreases, while the correction ability for aberrations 10-20 fluctuates up and down.
Stability analysis of fuzzy parametric uncertain systems.
Bhiwani, R J; Patre, B M
2011-10-01
In this paper, the determination of the stability margin and the gain and phase margins of fuzzy parametric uncertain systems (FPUS) is dealt with. The stability analysis of uncertain linear systems with coefficients described by fuzzy functions is studied. A complexity-reduced technique for determining the stability margin for FPUS is proposed. The method suggested depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order less than 5, it is not always necessary to determine and check all four Kharitonov polynomials. It is shown that, for determining the stability margin of FPUS of order five, four, and three, we require only 3, 2, and 1 Kharitonov polynomials, respectively. Only for sixth- and higher-order polynomials is a complete set of Kharitonov polynomials needed to determine the stability margin. Thus for lower-order systems, the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margins of FPUS can be determined analytically without using graphical techniques. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
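The snippet below sketches the crisp-interval building block referred to here: forming the four Kharitonov polynomials from lower and upper coefficient bounds and checking Hurwitz stability via roots; the interval values are invented and the fuzzy extension is not implemented.

```python
# Sketch: the four Kharitonov polynomials of an interval polynomial and a
# Hurwitz check via roots; crisp interval bounds only (assumed values).
import numpy as np

# Interval polynomial coefficients of s^0, s^1, ..., s^3 (lower / upper bounds)
lo = np.array([4.0, 6.0, 5.0, 1.0])
hi = np.array([6.0, 8.0, 7.0, 2.0])

def kharitonov(lo, hi):
    """Return the 4 Kharitonov polynomials (ascending powers) from the standard
    low-low-high-high / low-high-high-low / high-low-low-high / high-high-low-low patterns."""
    patterns = [("l", "l", "h", "h"), ("l", "h", "h", "l"),
                ("h", "l", "l", "h"), ("h", "h", "l", "l")]
    return [np.array([lo[i] if pat[i % 4] == "l" else hi[i] for i in range(len(lo))])
            for pat in patterns]

def is_hurwitz(coeffs_ascending):
    roots = np.roots(coeffs_ascending[::-1])   # np.roots expects descending powers
    return bool(np.all(roots.real < 0))

for k, p in enumerate(kharitonov(lo, hi), 1):
    print(f"K{k}: {p}  Hurwitz: {is_hurwitz(p)}")
```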
NASA Astrophysics Data System (ADS)
Doha, E. H.; Ahmed, H. M.
2004-08-01
A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalues distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show large deviation bound for the latter.
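A quick numerical companion to this picture is sketched below: it draws a Wishart-type induced random state, takes the partial transpose by an index swap, and inspects the eigenvalue spectrum; the dimensions are chosen arbitrarily small.

```python
# Numerical sketch: eigenvalue spectrum of the partial transpose of an induced
# random mixed state on C^d (x) C^d (Wishart-type construction); illustrative only.
import numpy as np

rng = np.random.default_rng(6)
d, k = 16, 256                         # subsystem dimension and environment size (assumed)

# Random induced state: rho = G G† / Tr(G G†), with G a (d*d) x k Ginibre matrix
G = rng.standard_normal((d * d, k)) + 1j * rng.standard_normal((d * d, k))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Partial transpose on the second factor: reshape to (d,d,d,d) and swap the
# ket/bra indices of subsystem B
rho4 = rho.reshape(d, d, d, d)
rho_pt = rho4.transpose(0, 3, 2, 1).reshape(d * d, d * d)

ev = np.linalg.eigvalsh(rho_pt)
print("min / max eigenvalue of rho^{T_B}:", ev.min(), ev.max())
print("fraction of negative eigenvalues:", np.mean(ev < 0).round(3))
```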
On the coefficients of differentiated expansions of ultraspherical polynomials
NASA Technical Reports Server (NTRS)
Karageorghis, Andreas; Phillips, Timothy N.
1989-01-01
A formula is proved expressing the coefficients of an expansion in ultraspherical polynomials, differentiated an arbitrary number of times, in terms of the coefficients of the original expansion. The particular examples of Chebyshev and Legendre polynomials are considered.
On Polynomial Solutions of Linear Differential Equations with Polynomial Coefficients
ERIC Educational Resources Information Center
Si, Do Tan
1977-01-01
Demonstrates a method for solving linear differential equations with polynomial coefficients based on the fact that the operators z and D + d/dz are known to be Hermitian conjugates with respect to the Bargman and Louck-Galbraith scalar products. (MLH)
NASA Astrophysics Data System (ADS)
Burtyka, Filipp
2018-01-01
The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of a monic arbitrary unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.
On the Analytical and Numerical Properties of the Truncated Laplace Transform I
2014-09-05
contains generalizations and conclusions. Preliminaries: the Legendre polynomials. In this subsection we summarize some of the properties of the standard Legendre polynomials, and restate these properties for shifted and normalized forms of the Legendre polynomials. We define the shifted Legendre polynomial of degree k = 0, 1, ..., denoted by P*_k, by the formula P*_k(x) = P_k(2x - 1), where P_k is the Legendre
2015-08-31
following functions were used: where are the Legendre polynomials of degree . It is assumed that the coefficient standing with has the form...enforce relaxation rates of high order moments, higher order polynomial basis functions are used. The use of high order polynomials results in strong...enforced while only polynomials up to second degree were used in the representation of the collision frequency. It can be seen that the new model
Effects of Air Drag and Lunar Third-Body Perturbations on Motion Near a Reference KAM Torus
2011-03-01
body; m: 1) mass of satellite; 2) order of associated Legendre polynomial; n: 1) mean motion; 2) degree of associated Legendre polynomial; n3: mean motion ... physical momentum; pi: ith physical momentum; Pmn: associated Legendre polynomial of order m and degree n; q̇: physical coordinate derivatives vector, [q̇1 ... are constants specifying the shape of the gravitational field; and Pmn are associated Legendre polynomials. When m = n = 0, the geopotential function
Luigi Gatteschi's work on asymptotics of special functions and their zeros
NASA Astrophysics Data System (ADS)
Gautschi, Walter; Giordano, Carla
2008-12-01
A good portion of Gatteschi's research publications (about 65%) is devoted to asymptotics of special functions and their zeros. Most prominently among the special functions studied figure classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and Hermite polynomials by implication. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's form. This work is reviewed here, and organized along methodological lines.
Polynomial compensation, inversion, and approximation of discrete time linear systems
NASA Technical Reports Server (NTRS)
Baram, Yoram
1987-01-01
The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
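A minimal single-input, single-output numerical sketch of this least-squares idea is given below: the FIR (polynomial) compensator coefficients are obtained by least squares on a convolution (Toeplitz) matrix so that the compensated response approximates a delayed impulse; the impulse response and orders are invented.

```python
# Minimal SISO sketch: least-squares FIR compensator h so that the convolution
# g * h best matches a desired (delayed-impulse) response.
import numpy as np
from scipy.linalg import toeplitz

g = np.array([1.0, 0.6, 0.25, 0.1, 0.04])   # impulse response of the given system (assumed)
L = 8                                        # compensator (polynomial) length
N = len(g) + L - 1

# Convolution matrix C so that C @ h equals the full convolution g * h
col = np.concatenate([g, np.zeros(N - len(g))])
row = np.zeros(L)
row[0] = g[0]
C = toeplitz(col, row)

# Desired overall response: an approximate inverse with a one-sample delay
d = np.zeros(N)
d[1] = 1.0

h, *_ = np.linalg.lstsq(C, d, rcond=None)
print("compensator coefficients:", h.round(4))
print("achieved response:", (C @ h).round(3))
```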
Chien, Lung-Chang; Guo, Yuming; Li, Xiao; Yu, Hwa-Lung
2018-01-01
The distributed lag non-linear model (DLNM) has been frequently used in time series environmental health research. However, its functionality for assessing spatial heterogeneity is still restricted, especially in analyzing spatiotemporal data. This study proposed a solution to take a spatial function into account in the DLNM, and compared the influence with and without considering spatial heterogeneity in a case study. This research applied the DLNM to investigate non-linear lag effects up to 7 days in a case study about the spatiotemporal impact of fine particulate matter (PM2.5) on preschool children's acute respiratory infection in 41 districts of northern Taiwan during 2005 to 2007. We applied two spatiotemporal methods to impute missing air pollutant data, and included Markov random fields to analyze district boundary data in the DLNM. When analyzing the original data without a spatial function, the overall PM2.5 effect accumulated from all lag-specific effects had slight variation at smaller PM2.5 measurements, but eventually decreased to a relative risk significantly <1 as PM2.5 increased. When analyzing the spatiotemporally imputed data without a spatial function, the overall PM2.5 effect did not decrease but increased monotonically as PM2.5 increased above 20 μg/m³. After adding a spatial function to the DLNM, the spatiotemporally imputed data produced results similar to the overall effect from the original data. Moreover, the spatial function showed a clear and uneven pattern in Taipei, revealing that preschool children living in 31 districts of Taipei were vulnerable to acute respiratory infection. Our findings suggest the necessity of including a spatial function in the DLNM to make a spatiotemporal analysis available and to conduct more reliable and explainable research. This study also revealed the analytical impact when spatial heterogeneity is ignored.
NASA Astrophysics Data System (ADS)
Marin, F.; Rojas Lobos, P. A.; Hameury, J. M.; Goosmann, R. W.
2018-05-01
Context. From stars to active galactic nuclei, many astrophysical systems are surrounded by an equatorial distribution of dusty material that is, in a number of cases, spatially unresolved even with cutting edge facilities. Aims: In this paper, we investigate if and how one can determine the unresolved and heterogeneous morphology of dust distribution around a central bright source using time-resolved polarimetric observations. Methods: We used polarized radiative transfer simulations to study a sample of circumnuclear dusty morphologies. We explored a grid of geometrically variable models that are uniform, fragmented, and density stratified in the near-infrared, optical, and ultraviolet bands, and we present their distinctive time-dependent polarimetric signatures. Results: As expected, varying the structure of the obscuring equatorial disk has a deep impact on the inclination-dependent flux, polarization degree and angle, and time lags we observe. We find that stratified media are distinguishable by time-resolved polarimetric observations, and that the expected polarization is much higher in the infrared band than in the ultraviolet. However, because of the physical scales imposed by dust sublimation, the average time lags of months to years between the total and polarized fluxes are substantial; these time lags lengthen the observational campaigns necessary to discriminate between more sophisticated, and therefore also more degenerate, models. In the ultraviolet band, time lags are slightly shorter than in the infrared or optical bands, and, coupled with lower diluting starlight fluxes, time-resolved polarimetry in the UV appears more promising for future campaigns. Conclusions: Equatorial dusty disks differ in terms of inclination-dependent photometric, polarimetric, and timing observables, but only the coupling of these different markers can lead to inclination-independent constraints on the unresolved structures. Even though it is complex and time consuming, polarized reverberation mapping in the ultraviolet-blue band is probably the best technique to rely on in this field.
NASA Astrophysics Data System (ADS)
Ohern, J.
2016-02-01
Marine mammals are generally located in areas of enhanced surface primary productivity, though they may forage much deeper within the water column and higher on the food chain. Numerous studies over the past several decades have utilized ocean color data from remote sensing instruments (CZCS, MODIS, and others) to assess both the quantity and time scales over which surface primary productivity relates to marine mammal distribution. In areas of sustained upwelling, primary productivity may essentially feed into the secondary levels of productivity (the zooplankton and nektonic species on which marine mammals forage). However, in many open ocean habitats a simple trophic cascade does not explain relatively short time lags between enhanced surface productivity and marine mammal presence. Other dynamic features that entrain prey or attract marine mammals may be responsible for the correlations between marine mammals and ocean color. In order to investigate these features, two MODIS (Moderate Resolution Imaging Spectroradiometer) data products, the concentration and the standard deviation of surface chlorophyll, were used in conjunction with marine mammal sightings collected within Ecuadorian waters. Time lags between enhanced surface chlorophyll and marine mammal presence were on the order of 2-4 weeks; however, correlations were much stronger when the standard deviation of spatially binned images was used rather than the chlorophyll concentrations. Time lags also varied between Balaenopterid and Odontocete cetaceans. Overall, the standard deviation of surface chlorophyll proved a useful tool for assessing potential relationships between marine mammal sightings and surface chlorophyll.
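As a simple illustration of the kind of lagged comparison described above, the sketch below computes Pearson correlations between a weekly chlorophyll-variability series and weekly sighting counts at a range of lead times. The data, lag window, and function names are hypothetical; the study's actual spatial binning and species stratification are not reproduced.

```python
import numpy as np

def lagged_correlations(chl_std, sightings, max_lag_weeks=8):
    """Pearson correlation between a weekly chlorophyll-variability series and
    weekly sighting counts, with chlorophyll leading sightings by `lag` weeks.
    Both inputs are hypothetical 1-D arrays at weekly resolution."""
    out = {}
    for lag in range(max_lag_weeks + 1):
        if lag == 0:
            x, y = chl_std, sightings
        else:
            x, y = chl_std[:-lag], sightings[lag:]   # chlorophyll leads by `lag` weeks
        out[lag] = np.corrcoef(x, y)[0, 1]
    return out

# toy example: sightings track chlorophyll variability with a ~3-week delay
rng = np.random.default_rng(0)
chl = rng.gamma(2.0, 1.0, 104)                       # two years of weekly data
sight = np.roll(chl, 3) + rng.normal(0, 0.5, 104)    # wrap-around is harmless in a toy
print(lagged_correlations(chl, sight))               # correlation peaks near lag 3
```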
An analogy of the charge distribution on Julia sets with the Brownian motion
NASA Astrophysics Data System (ADS)
Lopes, Artur O.
1989-09-01
A way to compute the entropy of an invariant measure of a hyperbolic rational map from the information given by a Ruelle-Perron-Frobenius operator of a generic Hölder-continuous function will be shown. This result was motivated by an analogy of the Brownian motion with the dynamical system given by a rational map and the maximal measure. When the rational map is a polynomial, the maximal measure is the charge distribution on the Julia set. The main theorem of this paper can be seen as a large deviation result. It is a kind of Donsker-Varadhan formula for dynamical systems.
Joint min-max distribution and Edwards-Anderson's order parameter of the circular 1/f-noise model
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre
2016-05-01
We calculate the joint min-max distribution and the Edwards-Anderson's order parameter for the circular model of 1/f-noise. Both quantities, as well as generalisations, are obtained exactly by combining the freezing-duality conjecture and Jack-polynomial techniques. Numerical checks come with significantly improved control of finite-size effects in the glassy phase, and the results convincingly validate the freezing-duality conjecture. Application to diffusive dynamics is discussed. We also provide a formula for the pre-factor ratio of the joint/marginal Carpentier-Le Doussal tail for minimum/maximum which applies to any logarithmic random energy model.
Polynomial fuzzy observer designs: a sum-of-squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O
2012-10-01
This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately while still guaranteeing the stability of the overall control system and the convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that guarantee the stability of the overall control system and the convergence of the state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approaches over the existing LMI approaches to T-S fuzzy observer designs.
One member, two leaders: extending leader-member exchange theory to a dual leadership context.
Vidyarthi, Prajya R; Erdogan, Berrin; Anand, Smriti; Liden, Robert C; Chaudhry, Anjali
2014-05-01
In this study, we develop and test a model that extends leader-member exchange (LMX) theory to a dual leadership context. Drawing upon relative deprivation theory, we assert that when employees work for 2 leaders, each relationship exists within the context of the other relationship. Thus, the level of alignment or misalignment between the 2 relationships has implications for employees' job satisfaction and voluntary turnover. Employing polynomial regression on time-lagged data gathered from 159 information technology consultants nested in 26 client projects, we found that employee outcomes are affected by the quality of the relationship with both agency and client leaders, such that the degree of alignment between the 2 LMXs explained variance in outcomes beyond that explained by both LMXs. Results also revealed that a lack of alignment in the 2 LMXs led to asymmetric effects on outcomes, such that the relationship with agency leader mattered more than the relationship with one's client leader. Finally, frequency of communication with the agency leader determined the degree to which agency LMX affected job satisfaction in the low client LMX condition. (c) 2014 APA, all rights reserved.
Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries.
Richardson, Megan; Lambers, James V
2016-01-01
This paper introduces two families of orthogonal polynomials on the interval (-1,1), with weight function [Formula: see text]. The first family satisfies the boundary condition [Formula: see text], and the second one satisfies the boundary conditions [Formula: see text]. These boundary conditions arise naturally from PDEs defined on a disk with Dirichlet boundary conditions and the requirement of regularity in Cartesian coordinates. The families of orthogonal polynomials are obtained by orthogonalizing short linear combinations of Legendre polynomials that satisfy the same boundary conditions. Then, the three-term recurrence relations are derived. Finally, it is shown that from these recurrence relations, one can efficiently compute the corresponding recurrences for generalized Jacobi polynomials that satisfy the same boundary conditions.
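Since the families above are characterized by three-term recurrence relations, a small sketch of how such a recurrence is evaluated numerically may help; the snippet below evaluates monic orthogonal polynomials from generic recurrence coefficients and uses the monic Legendre coefficients as a check. The boundary-condition-adapted families of the paper would simply supply different alpha and beta values.

```python
import numpy as np

def eval_by_recurrence(x, alpha, beta):
    """Evaluate monic orthogonal polynomials p_0..p_N at the points x from the
    three-term recurrence p_{k+1}(x) = (x - alpha[k]) p_k(x) - beta[k] p_{k-1}(x).
    Returns an array of shape (N+1, len(x)); beta[0] is unused."""
    x = np.asarray(x, dtype=float)
    N = len(alpha)
    P = np.zeros((N + 1, x.size))
    P[0] = 1.0
    if N > 0:
        P[1] = x - alpha[0]
    for k in range(1, N):
        P[k + 1] = (x - alpha[k]) * P[k] - beta[k] * P[k - 1]
    return P

# check with monic Legendre polynomials: alpha_k = 0, beta_k = k^2 / (4k^2 - 1)
N = 5
alpha = np.zeros(N)
beta = np.array([0.0] + [k**2 / (4.0 * k**2 - 1.0) for k in range(1, N)])
x = np.linspace(-1, 1, 7)
print(eval_by_recurrence(x, alpha, beta)[3])   # monic P_3(x) = x^3 - (3/5) x
```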
Gaussian quadrature for multiple orthogonal polynomials
NASA Astrophysics Data System (ADS)
Coussement, Jonathan; van Assche, Walter
2005-06-01
We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.
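For reference, the single-measure (r = 1) special case of the eigenvalue connection described above is the classical Golub-Welsch algorithm: the quadrature nodes are the eigenvalues of the symmetric tridiagonal (Jacobi) matrix of recurrence coefficients and the weights come from the first components of its eigenvectors. The sketch below implements only that classical case, with Legendre coefficients as a check; the banded Hessenberg matrix Ln and the multiple Gaussian quadrature of the paper are not implemented.

```python
import numpy as np

def gauss_from_jacobi(alpha, beta, mu0):
    """Golub-Welsch: Gauss quadrature nodes/weights for the single-measure case
    from recurrence coefficients alpha (diagonal) and beta (squared off-diagonal),
    where mu0 is the total mass of the measure."""
    n = len(alpha)
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:n]), 1) + np.diag(np.sqrt(beta[1:n]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2        # first component of each eigenvector
    return nodes, weights

# Gauss-Legendre on [-1, 1]: alpha_k = 0, beta_k = k^2/(4k^2-1), mu0 = 2
n = 5
alpha = np.zeros(n)
beta = np.array([0.0] + [k**2 / (4.0 * k**2 - 1.0) for k in range(1, n)])
x, w = gauss_from_jacobi(alpha, beta, mu0=2.0)
print(np.sum(w * x**4), 2.0 / 5.0)         # integrates x^4 exactly: 0.4
```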
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Luca G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of the frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
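The matrix-fraction fit reduces, in the scalar case, to a linearized least-squares problem in the numerator and denominator polynomial coefficients (a Levy-type fit). The sketch below shows that scalar analogue on a synthetic frequency response; the matrix-valued formulation, Markov-parameter extraction, and realization step of the paper are not reproduced, and the normalization choice (unit constant denominator coefficient) is an assumption.

```python
import numpy as np

def levy_fit(omega, H, num_order, den_order):
    """Linearized (Levy-type) least-squares fit of a scalar transfer function
    H(jw) ~ N(jw)/D(jw) from frequency-response samples. The denominator is
    normalized so its constant coefficient is 1. Returns coefficients in
    ascending powers of (jw)."""
    s = 1j * np.asarray(omega)
    A_num = np.vander(s, num_order + 1, increasing=True)
    A_den = np.vander(s, den_order + 1, increasing=True)[:, 1:]
    A = np.hstack([A_num, -H[:, None] * A_den])
    b = H.copy()                                   # with d_0 = 1, the d_0*H term moves to the RHS
    A_ri = np.vstack([A.real, A.imag])             # stack real/imag parts for a real solve
    b_ri = np.concatenate([b.real, b.imag])
    theta, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    num = theta[:num_order + 1]
    den = np.concatenate([[1.0], theta[num_order + 1:]])
    return num, den

# toy check: single-DOF oscillator H(s) = 1 / (1 + 0.2 s + s^2)
omega = np.linspace(0.1, 3.0, 200)
s = 1j * omega
H = 1.0 / (1.0 + 0.2 * s + s**2)
num, den = levy_fit(omega, H, num_order=0, den_order=2)
print(num, den)                                    # ~[1.], [1., 0.2, 1.]
```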
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
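The core of the decoding procedure described above is an extended Euclidean recursion on the pair (x^(2t), S(x)) that stops once the remainder degree drops below t. The sketch below shows only that recursion, over a small prime field GF(7) instead of the GF(2^m) used by real RS codes, and with a made-up syndrome; the erasure initialization, Forney syndrome, and error-value computation of the paper are omitted.

```python
# Minimal sketch (assumption: arithmetic over a prime field GF(p) rather than
# GF(2^m)). Polynomials are coefficient lists in ascending powers.
P = 7  # small prime field for illustration

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def poly_sub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def poly_divmod(a, b):
    a = a[:]; q = [0] * max(1, len(a) - len(b) + 1)
    inv_lead = pow(b[-1], P - 2, P)                 # inverse of leading coefficient mod P
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        coef = (a[-1] * inv_lead) % P
        q[shift] = coef
        a = poly_sub(a, poly_mul(b, [0] * shift + [coef]))
    return trim(q), trim(a)

def euclid_key_equation(syndrome, t):
    """Run the Euclidean algorithm on (x^(2t), S(x)) and stop when the
    remainder degree is below t; returns the (evaluator-like, locator-like) pair."""
    r_prev, r_cur = [0] * (2 * t) + [1], trim(syndrome)
    u_prev, u_cur = [0], [1]                        # cofactor of S(x): the locator side
    while len(r_cur) - 1 >= t:
        q, r_next = poly_divmod(r_prev, r_cur)
        u_next = poly_sub(u_prev, poly_mul(q, u_cur))
        r_prev, r_cur = r_cur, r_next
        u_prev, u_cur = u_cur, u_next
    return r_cur, u_cur

print(euclid_key_equation([3, 6, 2, 5], t=2))       # toy syndrome, not a real RS code
```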
Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan
2016-01-01
Inspired by track-before-detection technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for optimal polynomial parameters with which the PCFT will achieve the most concentrated energy ridge in the time-frequency plane for the target component. The component can be well separated in the polynomial chirping Fourier domain with a narrow-band filter and then reconstructed by inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method has better performance in component extraction from a noisy multicomponent signal, and provides more time-frequency details about the analyzed signal, than conventional methods.
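The essential operation behind a transform of this kind is demodulation along a candidate polynomial frequency curve followed by a Fourier transform, so that a matching component collapses into a narrow ridge. The sketch below shows that operation on a synthetic quadratic chirp; the particle-swarm search over polynomial parameters and the iterative extraction loop are not implemented, and all signal parameters are made up.

```python
import numpy as np

def poly_chirp_fourier(signal, t, poly_coeffs):
    """Demodulate `signal` along a candidate polynomial instantaneous-frequency
    curve f(t) = c0 + c1 t + c2 t^2 + ... and return the FFT of the result.
    If the candidate matches the component, its energy concentrates in one bin."""
    phase_poly = np.polynomial.polynomial.polyint(poly_coeffs)   # phase = integral of frequency
    phase = np.polynomial.polynomial.polyval(t, phase_poly)
    demod = signal * np.exp(-2j * np.pi * phase)
    return np.fft.fft(demod)

# toy multicomponent-style signal: a quadratic chirp plus noise
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
true_if = [50.0, 80.0, -40.0]                        # f(t) = 50 + 80 t - 40 t^2
phase = np.polynomial.polynomial.polyval(t, np.polynomial.polynomial.polyint(true_if))
x = np.cos(2 * np.pi * phase) + 0.3 * np.random.randn(t.size)

spec = poly_chirp_fourier(x, t, [0.0, 80.0, -40.0])  # remove only the chirping part
peak_hz = np.argmax(np.abs(spec[: t.size // 2])) * fs / t.size
print(peak_hz)                                       # ~50 Hz: energy concentrated at the base frequency
```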
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
The Disparate Labor Market Impacts of Monetary Policy
ERIC Educational Resources Information Center
Carpenter, Seth B.; Rodgers, William M., III
2004-01-01
Employing two widely used approaches to identify the effects of monetary policy, this paper explores the differential impact of policy on the labor market outcomes of teenagers, minorities, out-of-school youth, and less-skilled individuals. Evidence from recursive vector autoregressions and autoregressive distributed lag models that use…
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2008-01-01
A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial form of the flutter equations of motion (EOM) is formed. A technique for recasting the matrix-polynomial form of the flutter EOM into a first order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis, or accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC-Roger's s-plane method and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.
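The "recast into first order form" step mentioned above is, in essence, a block-companion linearization of a matrix polynomial, whose eigenvalues can then be computed anywhere in the complex plane. The sketch below shows that linearization for a toy second-order system; the matrices are arbitrary placeholders, not the flutter or ASE model of the paper.

```python
import numpy as np

def companion_eigs(coeffs):
    """Eigenvalues of the matrix polynomial P(s) = sum_k coeffs[k] * s**k
    via block-companion (first-order) linearization. coeffs[-1] must be invertible."""
    d = len(coeffs) - 1                      # polynomial order
    n = coeffs[0].shape[0]
    Ad_inv = np.linalg.inv(coeffs[-1])
    top = np.hstack([np.zeros((n * (d - 1), n)), np.eye(n * (d - 1))])
    bottom = np.hstack([-Ad_inv @ coeffs[k] for k in range(d)])
    return np.linalg.eigvals(np.vstack([top, bottom]))

# toy second-order system: M s^2 + C s + K
M = np.eye(2)
C = np.array([[0.1, 0.0], [0.0, 0.2]])
K = np.array([[4.0, -1.0], [-1.0, 9.0]])
print(companion_eigs([K, C, M]))             # lightly damped complex eigenvalue pairs
```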
Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Graß, Tobias
2012-03-01
We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations.
Program summary:
Program title: Strongdeco
Catalogue identifier: AELA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5475
No. of bytes in distributed program, including test data, etc.: 31 071
Distribution format: tar.gz
Programming language: Mathematica
Computer: Any computer on which Mathematica can be installed
Operating system: Linux, Windows, Mac
Classification: 2.9
Nature of problem: Analysis of strongly correlated quantum states.
Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools.
Running time: The distributed notebook takes a couple of minutes to run.
Kinematics and dynamics of robotic systems with multiple closed loops
NASA Astrophysics Data System (ADS)
Zhang, Chang-De
The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, focus is on the closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has analytical closed-form. First, the conditions of obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms which has analytical closed-form solution is formulated. Conditions of analytical closed-form solution for other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions; the solutions of the second category have analytical closed-form; and the solutions of the last category are higher order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th order polynomial and the Stewart platform has a maximum of 40 possible configurations. Also, the study is extended to a new class of hybrid manipulators which consists of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for inverse dynamics of manipulators based on the virtual work principle is developed. Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach of inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.
Numerical solutions for Helmholtz equations using Bernoulli polynomials
NASA Astrophysics Data System (ADS)
Bicer, Kubra Erdem; Yalcinbas, Salih
2017-07-01
This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of the Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.
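As a rough illustration of a Bernoulli-polynomial collocation scheme, the sketch below solves a one-dimensional Helmholtz problem u'' + k^2 u = f on [0,1] with homogeneous Dirichlet conditions, using sympy's Bernoulli polynomials as the trial basis and a manufactured right-hand side. The choice of collocation points and the 1-D setting are assumptions; the paper's matrix formulation is more general.

```python
import numpy as np
import sympy as sp

def helmholtz_bernoulli_collocation(k, N, f_expr, x_sym):
    """Collocation sketch for u'' + k^2 u = f on [0,1] with u(0)=u(1)=0,
    using Bernoulli polynomials B_0..B_N as the trial basis."""
    basis = [sp.bernoulli(n, x_sym) for n in range(N + 1)]
    d2 = [sp.diff(b, x_sym, 2) for b in basis]
    xc = np.linspace(0, 1, N + 1)[1:-1]              # interior collocation points
    A = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    f = sp.lambdify(x_sym, f_expr, "numpy")
    for i, xi in enumerate(xc):                      # operator rows
        for j in range(N + 1):
            A[i, j] = float(d2[j].subs(x_sym, xi)) + k**2 * float(basis[j].subs(x_sym, xi))
        rhs[i] = f(xi)
    for row, xb in ((N - 1, 0.0), (N, 1.0)):         # two boundary-condition rows
        for j in range(N + 1):
            A[row, j] = float(basis[j].subs(x_sym, xb))
        rhs[row] = 0.0
    c = np.linalg.solve(A, rhs)
    return sp.expand(sum(float(ci) * b for ci, b in zip(c, basis)))

x = sp.symbols("x")
k = 2.0
f_exact = (k**2 - sp.pi**2) * sp.sin(sp.pi * x)      # manufactured so u = sin(pi x)
u_num = helmholtz_bernoulli_collocation(k, 10, f_exact, x)
print(float(u_num.subs(x, 0.5)))                     # ~1.0
```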
Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F
2015-10-01
Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL)-designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
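The Peak Ratio and Energy Ratio defined above can be illustrated for a single band center frequency: compute a maximax SRS value from an SDOF base-excitation response, band-filter the input to get its temporal peak and RMS, and form the two ratios. The sketch below does exactly that for a synthetic decaying burst; the SDOF damping, one-sixth-octave band edges, and test signal are assumptions, not values from the UPSS data.

```python
import numpy as np
from scipy import signal

def band_metrics(accel, fs, fc, zeta=0.05, octave=1/6):
    """For one band center frequency fc: maximax SRS value via an SDOF
    base-excitation model, plus the temporal peak and RMS of the band-filtered
    input, and the resulting Peak Ratio (PR) and Energy Ratio (ER)."""
    t = np.arange(accel.size) / fs
    wn = 2 * np.pi * fc
    # absolute-acceleration response of an SDOF system to base acceleration
    num, den = [2 * zeta * wn, wn**2], [1.0, 2 * zeta * wn, wn**2]
    _, y, _ = signal.lsim((num, den), accel, t)
    srs_max = np.max(np.abs(y))
    # band-limit the input around fc to get the temporal peak and RMS
    lo, hi = fc * 2 ** (-octave), fc * 2 ** octave
    b, a = signal.butter(4, [lo, hi], btype="bandpass", fs=fs)
    banded = signal.filtfilt(b, a, accel)
    peak, rms = np.max(np.abs(banded)), np.sqrt(np.mean(banded**2))
    return {"SRS": srs_max, "PR": srs_max / peak, "ER": srs_max / rms}

# toy shock record: a decaying 800 Hz burst
fs = 20000.0
t = np.arange(0, 0.05, 1 / fs)
shock = np.exp(-t / 0.005) * np.sin(2 * np.pi * 800 * t)
print(band_metrics(shock, fs, fc=800.0))
```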
Buchsbaum, Bradley R; Padmanabhan, Aarthi; Berman, Karen Faith
2011-04-01
One of the classic categorical divisions in the history of memory research is that between short-term and long-term memory. Indeed, because memory for the immediate past (a few seconds) and memory for the relatively more remote past (several seconds and beyond) are assumed to rely on distinct neural systems, more often than not, memory research has focused either on short- (or "working memory") or on long-term memory. Using an auditory-verbal continuous recognition paradigm designed for fMRI, we examined how the neural signatures of recognition memory change across an interval of time (from 2.5 to 30 sec) that spans this hypothetical division between short- and long-term memory. The results revealed that activity during successful auditory-verbal item recognition in inferior parietal cortex and the posterior superior temporal lobe was maximal for early lags, whereas, conversely, activity in the left inferior frontal gyrus increased as a function of lag. Taken together, the results reveal that as the interval between item repetitions increases, there is a shift in the distribution of memory-related activity that moves from posterior temporo-parietal cortex (lags 1-4) to inferior frontal regions (lags 5-10), indicating that as time advances, the burden of recognition memory is increasingly placed on top-down retrieval mechanisms that are mediated by structures in inferior frontal cortex.
Daily ambient temperature and renal colic incidence in Guangzhou, China: a time-series analysis
NASA Astrophysics Data System (ADS)
Yang, Changyuan; Chen, Xinyu; Chen, Renjie; Cai, Jing; Meng, Xia; Wan, Yue; Kan, Haidong
2016-08-01
Few previous studies have examined the association between temperature and renal colic in developing regions, especially in China, the largest developing country in the world. We collected daily emergency ambulance dispatches (EADs) for renal colic from the Guangzhou Emergency Center from 1 January 2008 to 31 December 2012. We used a distributed-lag nonlinear model in addition to the over-dispersed generalized additive model to investigate the association between daily ambient temperature and renal colic incidence after controlling for seasonality, humidity, public holidays, and day of the week. We identified 3158 EADs for renal colic during the study period. The exposure-response curve was almost flat at low and moderate temperatures and rose when the temperature increased above 21 °C. For heat-related effects, the significant risk occurred on the concurrent day and diminished until lag day 7. The cumulative relative risk of hot temperatures (90th percentile) and extremely hot temperatures (99th percentile) over lag days 0-7 was 1.92 (95 % confidence interval, 1.21, 3.05) and 2.45 (95 % confidence interval, 1.50, 3.99) compared with the reference temperature of 21 °C. This time-series analysis in Guangzhou, China, suggested a nonlinear and lagged association between high outdoor temperatures and daily EADs for renal colic. Our findings might have important public health significance for preventing renal colic.
Short term effects of airborne pollen concentrations on asthma epidemic
Tobias, A; Galan, I; Banegas, J; Aranguez, E
2003-01-01
Methods: This study, based on time series analysis adjusting for meteorological factors and air pollution variables, assessed the short term effects of different types of allergenic pollen on asthma hospital emergencies in the metropolitan area of Madrid (Spain) for the period 1995–8. Results: Statistically significant associations were found for Poaceae pollen (lag of 3 days) and Plantago pollen (lag of 2 days), representing an increase in the range between the 99th and 95th percentiles of 17.1% (95% confidence interval (CI) 3.2 to 32.8) and 15.9% (95% CI 6.5 to 26.2) for Poaceae and Plantago, respectively. A positive association was also observed for Urticaceae (lag of 1 day) with an 8.4% increase (95% CI 2.8 to 14.4). Conclusions: There is an association between pollen levels and asthma related emergencies, independent of the effect of air pollutants. The marked relationship observed for Poaceae and Plantago pollens suggests their implication in the epidemic distribution of asthma during the period coinciding with their abrupt release into the environment. PMID:12885991
Matias, Fernanda S.; Carelli, Pedro V.; Mirasso, Claudio R.; Copelli, Mauro
2015-01-01
Several cognitive tasks related to learning and memory exhibit synchronization of macroscopic cortical areas together with synaptic plasticity at neuronal level. Therefore, there is a growing effort among computational neuroscientists to understand the underlying mechanisms relating synchrony and plasticity in the brain. Here we numerically study the interplay between spike-timing dependent plasticity (STDP) and anticipated synchronization (AS). AS emerges when a dominant flux of information from one area to another is accompanied by a negative time lag (or phase). This means that the receiver region pulses before the sender does. In this paper we study the interplay between different synchronization regimes and STDP at the level of three-neuron microcircuits as well as cortical populations. We show that STDP can promote auto-organized zero-lag synchronization in unidirectionally coupled neuronal populations. We also find synchronization regimes with negative phase difference (AS) that are stable against plasticity. Finally, we show that the interplay between negative phase difference and STDP provides limited synaptic weight distribution without the need of imposing artificial boundaries. PMID:26474165
Wu, Yifeng; Zhao, Fengmin; Qian, Xujun; Xu, Guozhang; He, Tianfeng; Shen, Yueping; Cai, Yibiao
2015-07-01
The aims were to describe the daily average concentration of sulfur dioxide (SO2) in Ningbo and to analyse the health impacts it caused on upper respiratory disease. With outpatient logs and air pollutant monitoring data matched for 2011-2013, distributed lag non-linear models were used to analyse the relative risk of the number of upper respiratory patients associated with SO2, as well as the excess risk and the inferred number of patients attributable to SO2 pollution. The daily average concentration of SO2 did not exceed the limit value for a second-class area. The coefficient relating upper respiratory outpatient numbers to the matched daily average SO2 concentration was 0.44, the excess risk was 10% to 18%, and the lag for most SO2 concentrations was 4 to 6 days. It was estimated that about 30% of all upper respiratory outpatients were attributable to SO2 pollution. Although the daily average concentration of SO2 did not exceed the standard in the 3 years, health impacts were still caused, with a lag effect.
Ambient air pollution, temperature and out-of-hospital coronary deaths in Shanghai, China.
Dai, Jinping; Chen, Renjie; Meng, Xia; Yang, Changyuan; Zhao, Zhuohui; Kan, Haidong
2015-08-01
Few studies have evaluated the effects of ambient air pollution and temperature in triggering out-of-hospital coronary deaths (OHCDs) in China. We evaluated the associations of air pollution and temperature with daily OHCDs in Shanghai, China from 2006 to 2011. We applied an over-dispersed generalized additive model and a distributed lag nonlinear model to analyze the effects of air pollution and temperature, respectively. A 10 μg/m(3) increase in the present-day PM10, PM2.5, SO2, NO2 and CO was associated with increases in OHCD mortality of 0.49%, 0.68%, 0.88%, 1.60% and 0.08%, respectively. A 1 °C decrease below the minimum-mortality temperature corresponded to a 3.81% increase in OHCD mortality over lag days 0-21, and a 1 °C increase above the minimum-mortality temperature corresponded to a 4.61% increase over lag days 0-3. No effects were found for in-hospital coronary deaths. This analysis suggests that air pollution, low temperature and high temperature may increase the risk of OHCDs. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deen, David A.; Storm, David F.; Scott Katzer, D.; Bass, R.; Meyer, David J.
2016-08-01
A dual-channel AlN/GaN high electron mobility transistor (HEMT) architecture is demonstrated that leverages ultra-thin epitaxial layers to suppress surface-related gate lag. Two high-density two-dimensional electron gas (2DEG) channels are utilized in an AlN/GaN/AlN/GaN heterostructure wherein the top 2DEG serves as a quasi-equipotential that screens potential fluctuations resulting from distributed surface and interface states. The bottom channel serves as the transistor's modulated channel. Dual-channel AlN/GaN heterostructures were grown by molecular beam epitaxy on free-standing hydride vapor phase epitaxy GaN substrates. HEMTs fabricated with 300 nm long recessed gates demonstrated a gate lag ratio (GLR) of 0.88 with no degradation in drain current after being bias stressed in subthreshold. These structures additionally achieved small-signal metrics ft/fmax of 27/46 GHz. These performance results are contrasted with those of the non-recessed gate dual-channel HEMT, which had a GLR of 0.74 and 82 mA/mm current collapse, with ft/fmax of 48/60 GHz.
NASA Astrophysics Data System (ADS)
Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao
2018-03-01
This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.
Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval
NASA Technical Reports Server (NTRS)
Alford, John A., II
2012-01-01
We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.
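The paper gives a closed-form expression for the mapped coefficients; an equivalent and widely used way to obtain Bernstein coefficients on a subinterval [a,b] of [0,1] is two de Casteljau subdivisions, sketched below. This is a standard technique shown for illustration, not the paper's formula.

```python
import numpy as np
from math import comb

def de_casteljau_split(c, s):
    """Split Bernstein coefficients c (on [0,1]) at parameter s. Returns
    (left, right): the same polynomial re-expressed in the Bernstein bases
    of [0,s] and [s,1]."""
    c = np.array(c, dtype=float)
    left, right = [c[0]], [c[-1]]
    while c.size > 1:
        c = (1 - s) * c[:-1] + s * c[1:]
        left.append(c[0])
        right.append(c[-1])
    return np.array(left), np.array(right)[::-1]

def restrict_bernstein(c, a, b):
    """Bernstein coefficients of the same polynomial on [a,b], a subinterval of [0,1]."""
    _, on_a1 = de_casteljau_split(c, a)                        # polynomial on [a, 1]
    on_ab, _ = de_casteljau_split(on_a1, (b - a) / (1.0 - a))  # then restrict to [a, b]
    return on_ab

def bernstein_eval(c, t):
    """Direct evaluation of a polynomial in Bernstein form, for checking."""
    n = len(c) - 1
    return sum(ci * comb(n, i) * t**i * (1 - t)**(n - i) for i, ci in enumerate(c))

c = [1.0, -2.0, 0.5, 3.0]                 # arbitrary cubic in Bernstein form on [0,1]
a, b = 0.2, 0.7
c_sub = restrict_bernstein(c, a, b)
u = 0.4                                   # local parameter on [a,b]
print(bernstein_eval(c_sub, u), bernstein_eval(c, a + u * (b - a)))   # should agree
```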
On polynomial selection for the general number field sieve
NASA Astrophysics Data System (ADS)
Kleinjung, Thorsten
2006-12-01
The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records.
Graphical Solution of Polynomial Equations
ERIC Educational Resources Information Center
Grishin, Anatole
2009-01-01
Graphing utilities, such as the ubiquitous graphing calculator, are often used in finding the approximate real roots of polynomial equations. In this paper the author offers a simple graphing technique that allows one to find all solutions of a polynomial equation (1) of arbitrary degree; (2) with real or complex coefficients; and (3) possessing…
Evaluation of more general integrals involving universal associated Legendre polynomials
NASA Astrophysics Data System (ADS)
You, Yuan; Chen, Chang-Yuan; Tahir, Farida; Dong, Shi-Hai
2017-05-01
We find that the solution of the polar angular differential equation can be written in terms of the universal associated Legendre polynomials. We present a popular integral formula which includes universal associated Legendre polynomials, and we also evaluate some important integrals involving the product of two universal associated Legendre polynomials P_{l'}^{m'}(x), P_{k'}^{n'}(x) and x^{2a}(1-x^2)^{-p-1}, x^b(1±x^2)^{-p}, and x^c(1-x^2)^{-p}(1±x)^{-1}, where l'≠k' and m'≠n'. Their selection rules are also mentioned.
Neck curve polynomials in neck rupture model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul
2012-06-06
The Neck Rupture Model is a model that explains the scission process, which takes place where the liquid drop has its smallest radius at a certain position. In the older formulation the rupture position is determined randomly, which is why it has been called the Random Neck Rupture Model (RNRM). The neck curve polynomials have been employed in the Neck Rupture Model for calculating the fission yield of the neutron-induced fission reaction of ²⁸⁰X₉₀, with varying order of the polynomials as well as temperature. The neck curve polynomial approximation shows important effects on the shape of the fission yield curve.
More on rotations as spin matrix polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtright, Thomas L.
2015-09-15
Any nonsingular function of spin j matrices always reduces to a matrix polynomial of order 2j. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large j limits of the coefficients are examined.
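The opening claim, that any function of spin-j matrices reduces to a matrix polynomial of order 2j, can be checked directly by interpolating the scalar function on the 2j+1 eigenvalues. The sketch below does this for a spin-1 rotation exp(-i*theta*Jy); it illustrates the generic reduction only, not the central-factorial-number coefficients derived in the paper.

```python
import numpy as np
from scipy.linalg import expm

# spin-1 J_y in the standard |j,m> basis (hbar = 1)
Jy = (1 / np.sqrt(2)) * np.array([[0, -1j, 0],
                                  [1j,  0, -1j],
                                  [0,  1j,  0]])

theta = 0.7
R = expm(-1j * theta * Jy)                     # the rotation as a matrix exponential

# reduce to a polynomial of order 2j = 2: find c0, c1, c2 with
# f(lambda) = c0 + c1*lambda + c2*lambda^2 on the eigenvalues {-1, 0, +1}
lam = np.array([-1.0, 0.0, 1.0])
V = np.vander(lam, 3, increasing=True)         # Vandermonde in the eigenvalues
c = np.linalg.solve(V, np.exp(-1j * theta * lam))
R_poly = c[0] * np.eye(3) + c[1] * Jy + c[2] * (Jy @ Jy)

print(np.allclose(R, R_poly))                  # True: exp(-i*theta*Jy) is quadratic in Jy
```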
Robust stability of fractional order polynomials with complicated uncertainty structure
Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
Application of polynomial su(1, 1) algebra to Pöschl-Teller potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hong-Biao, E-mail: zhanghb017@nenu.edu.cn; Lu, Lu
2013-12-15
Two novel polynomial su(1, 1) algebras for the physical systems with the first and second Pöschl-Teller (PT) potentials are constructed, and their specific representations are presented. Meanwhile, these polynomial su(1, 1) algebras are used as an algebraic technique to solve eigenvalues and eigenfunctions of the Hamiltonians associated with the first and second PT potentials. The algebraic approach explores an appropriate new pair of raising and lowering operators K̂± of the polynomial su(1, 1) algebra as a pair of shift operators of our Hamiltonians. In addition, two usual su(1, 1) algebras associated with the first and second PT potentials are derived naturally from the polynomial su(1, 1) algebras built by us.
Polynomials to model the growth of young bulls in performance tests.
Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B
2014-03-01
The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
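A minimal sketch of the Legendre-basis construction used in such random regression settings: standardize age to [-1, 1], build the Legendre design matrix, and fit the average trajectory by least squares. The age range, weights, and cubic order below are invented for illustration; the actual analysis fits random regressions within a mixed model, which is not reproduced here.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design(age, min_age, max_age, order):
    """Design matrix of Legendre polynomials (degree 0..order) evaluated on
    age standardized to [-1, 1], as commonly used in random regression models."""
    a = 2.0 * (np.asarray(age, float) - min_age) / (max_age - min_age) - 1.0
    return legendre.legvander(a, order)

# toy average-trajectory fit: weights (kg) of young bulls between 240 and 420 days
rng = np.random.default_rng(1)
age = rng.uniform(240, 420, 300)
weight = 180 + 1.1 * (age - 240) + rng.normal(0, 15, age.size)

X = legendre_design(age, 240, 420, order=3)              # cubic Legendre trajectory
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
print(coef)                                              # fitted fixed-trajectory coefficients
```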
Cylinder surface test with Chebyshev polynomial fitting method
NASA Astrophysics Data System (ADS)
Yu, Kui-bang; Guo, Pei-ji; Chen, Xi
2017-10-01
Zernike polynomial fitting is often applied in the testing of optical components and systems, where it is used to represent the wavefront and surface error over a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over the rectangular area, as a substitute in the fitting method can solve this problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system has been designed in Zemax based on Fizeau interferometry. The expressions of the two-dimensional Chebyshev polynomials are given and their relationship with the aberrations is presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data through the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and has a definite relationship with certain Chebyshev terms. The simulation results show that the polynomial fitting method greatly improves the efficiency of the detection and adjustment of the cylinder surface test.
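A least-squares fit of a two-dimensional Chebyshev basis over a rectangular aperture can be sketched directly with numpy's chebvander2d, as below; the surface-error data, degrees, and the interpretation of individual terms as misalignments are assumptions for illustration, not the paper's Fizeau measurement pipeline.

```python
import numpy as np
from numpy.polynomial import chebyshev

def fit_chebyshev_2d(x, y, z, deg=(4, 4)):
    """Least-squares fit of surface data z(x, y) on a rectangular aperture,
    with x and y already normalized to [-1, 1], using a 2-D Chebyshev basis."""
    V = chebyshev.chebvander2d(x, y, deg)        # columns are T_i(x) * T_j(y)
    coef, *_ = np.linalg.lstsq(V, z, rcond=None)
    return coef.reshape(deg[0] + 1, deg[1] + 1)

# toy rectangular-aperture data: a defocus-like sag plus tilt and noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = rng.uniform(-1, 1, 2000)
z = 0.2 * (2 * x**2 - 1) + 0.05 * y + rng.normal(0, 0.01, x.size)

C = fit_chebyshev_2d(x, y, z)
print(np.round(C[:3, :3], 3))                    # C[2,0] ~ 0.2 (T2(x)), C[0,1] ~ 0.05 (T1(y))
```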
Generating the patterns of variation with GeoGebra: the case of polynomial approximations
NASA Astrophysics Data System (ADS)
Attorps, Iiris; Björk, Kjell; Radic, Mirko
2016-01-01
In this paper, we report a teaching experiment regarding the theory of polynomial approximations at the university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate if the technology-assisted teaching of Taylor polynomials compared with traditional way of work at the university level can support the teaching and learning of mathematical concepts and ideas. An engineering student group (n = 19) was taught Taylor polynomials with the assistance of GeoGebra while a control group (n = 18) was taught in a traditional way. The data were gathered by video recording of the lectures, by doing a post-test concerning Taylor polynomials in both groups and by giving one question regarding Taylor polynomials at the final exam for the course in Real Analysis in one variable. In the analysis of the lectures, we found Variation theory combined with GeoGebra to be a potentially powerful tool for revealing some critical aspects of Taylor Polynomials. Furthermore, the research results indicated that applying Variation theory, when planning the technology-assisted teaching, supported and enriched students' learning opportunities in the study group compared with the control group.
Impact of meteorological factors on the spatiotemporal patterns of dengue fever incidence.
Chien, Lung-Chang; Yu, Hwa-Lung
2014-12-01
Dengue fever is one of the most widespread vector-borne diseases and has caused more than 50 million infections annually over the world. For the purposes of disease prevention and climate change health impact assessment, it is crucial to understand the weather-disease associations for dengue fever. This study investigated the nonlinear delayed impact of meteorological conditions on the spatiotemporal variations of dengue fever in southern Taiwan during 1998-2011. We present a novel integration of a distributed lag nonlinear model and Markov random fields to assess the nonlinear lagged effects of weather variables on temporal dynamics of dengue fever and to account for the geographical heterogeneity. This study identified the most significant meteorological measures to dengue fever variations, i.e., weekly minimum temperature, and the weekly maximum 24-hour rainfall, by obtaining the relative risk (RR) with respect to disease counts and a continuous 20-week lagged time. Results show that RR increased as minimum temperature increased, especially for the lagged period 5-18 weeks, and also suggest that the time to high disease risks can be decreased. Once the occurrence of maximum 24-hour rainfall is >50 mm, an associated increased RR lasted for up to 15 weeks. A temporary one-month decrease in the RR of dengue fever is noted following the extreme rain. In addition, the elevated incidence risk is identified in highly populated areas. Our results highlight the high nonlinearity of temporal lagged effects and magnitudes of temperature and rainfall on dengue fever epidemics. The results can be a practical reference for the early warning of dengue fever. Copyright © 2014 Elsevier Ltd. All rights reserved.
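A stripped-down version of the cross-basis idea behind a DLNM can be sketched as a polynomial basis in the exposure crossed with a polynomial basis in lag, summed over the lag window and passed to a Poisson regression. The snippet below does this on synthetic daily data; real analyses (for example with the R dlnm package) use spline bases, centering, and proper inference, and the Markov-random-field spatial component of the study is not included.

```python
import numpy as np
import statsmodels.api as sm

def cross_basis(x, max_lag, deg_var=2, deg_lag=2):
    """Minimal DLNM-style cross-basis: a polynomial basis in the exposure
    (degree deg_var) crossed with a polynomial basis in lag (degree deg_lag).
    Rows with incomplete lag history contain NaN and are dropped by the caller."""
    n = x.size
    lags = np.arange(max_lag + 1)
    Q = np.full((n, max_lag + 1), np.nan)                # Q[t, l] = x[t - l]
    for l in lags:
        Q[l:, l] = x[: n - l]
    var_basis = np.stack([Q**p for p in range(1, deg_var + 1)], axis=-1)   # (n, L, deg_var)
    lag_basis = np.vander(lags / max_lag, deg_lag + 1, increasing=True)    # (L, deg_lag+1)
    # tensor product, then sum over lags -> one column per (exposure, lag) basis pair
    return np.einsum("nlv,lk->nvk", var_basis, lag_basis).reshape(n, -1)

# toy daily series: temperature-driven counts with a distributed lag
rng = np.random.default_rng(2)
temp = 20 + 5 * np.sin(np.arange(1000) * 2 * np.pi / 365) + rng.normal(0, 2, 1000)
counts = rng.poisson(np.exp(1.5 + 0.02 * np.convolve(temp, np.ones(8) / 8, "same")))

X = cross_basis(temp, max_lag=7)
mask = ~np.isnan(X).any(axis=1)
fit = sm.GLM(counts[mask], sm.add_constant(X[mask]), family=sm.families.Poisson()).fit()
print(fit.params[:4])
```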
Chen, Gongbo; Zhang, Wenyi; Li, Shanshan; Williams, Gail; Liu, Chao; Morgan, Geoffrey G; Jaakkola, Jouni J K; Guo, Yuming
2017-07-01
China's rapid economic development has resulted in severe particulate matter (PM) air pollution and the control and prevention of infectious disease is an ongoing priority. This study examined the relationships between short-term exposure to ambient particles with aerodynamic diameter ≤2.5µm (PM 2.5 ) and measles incidence in China. Data on daily numbers of new measles cases and concentrations of ambient PM 2.5 were collected from 21 cities in China during Oct 2013 and Dec 2014. Poisson regression was used to examine city-specific associations of PM 2.5 and measles, with a constrained distributed lag model, after adjusting for seasonality, day of the week, and weather conditions. Then, the effects at the national scale were pooled with a random-effect meta-analysis. A 10µg/m 3 increase in PM 2.5 at lag 1day, lag 2day and lag 3day was significantly associated with increased measles incidence [relative risk (RR) and 95% confidence interval (CI) were 1.010 (1.003, 1.018), 1.010 (1.003, 1.016) and 1.006 (1.000, 1.012), respectively]. The cumulative relative risk of measles associated with PM 2.5 at lag 1-3 days was 1.029 (95% CI: 1.010, 1.048). Stratified analyses by meteorological factors showed that the PM 2.5 and measles associations were stronger on days with high temperature, low humidity, and high wind speed. We provide new evidence that measles incidence is associated with exposure to ambient PM 2.5 in China. Effective policies to reduce air pollution may also reduce measles incidence. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lam, Holly Ching-yu; Chan, Emily Ying-yang; Goggins, William Bernard
2018-05-01
Pneumonia and chronic obstructive pulmonary diseases (COPD) are the commonest causes of respiratory hospitalization among older adults. Both diseases have been reported to be associated with ambient temperature, but the associations have not been compared between the diseases. Their associations with other meteorological variables have also not been well studied. This study aimed to evaluate the associations between meteorological variables, pneumonia, and COPD hospitalization among adults over 60 and to compare these associations between the diseases. Daily cause-specific hospitalization counts in Hong Kong during 2004-2011 were regressed on daily meteorological variables using distributed lag nonlinear models. Associations were compared between diseases by ratio of relative risks. Analyses were stratified by season and age group (60-74 vs. ≥ 75). In hot season, high temperature (> 28 °C) and high relative humidity (> 82%) were statistically significantly associated with more pneumonia in lagged 0-2 and lagged 0-10 days, respectively. Pneumonia hospitalizations among the elderly (≥ 75) also increased with high solar radiation and high wind speed. During the cold season, consistent hockey-stick associations with temperature and relative humidity were found for both admissions and both age groups. The minimum morbidity temperature and relative humidity were at about 21-22 °C and 82%. The lagged effects of low temperature were comparable for both diseases (lagged 0-20 days). The low-temperature-admissions associations with COPD were stronger and were strongest among the elderly. This study found elevated pneumonia and COPD admissions risks among adults ≥ 60 during periods of extreme weather conditions, and the associations varied by season and age group. Vulnerable groups should be advised to avoid exposures, such as staying indoor and maintaining satisfactory indoor conditions, to minimize risks.
Gu, Bing; Burgess, Diane J
2015-11-10
Hydrophobic drug release from poly (lactic-co-glycolic acid) (PLGA) microspheres typically exhibits a tri-phasic profile with a burst release phase followed by a lag phase and a secondary release phase. High burst release can be associated with adverse effects and the efficacy of the formulation cannot be ensured during a long lag phase. Accordingly, the development of a long-acting microsphere product requires optimization of all drug release phases. The purpose of the current study was to investigate whether a blend of low and high molecular weight polymers can be used to reduce the burst release and eliminate/minimize the lag phase. A single emulsion solvent evaporation method was used to prepare microspheres using blends of two PLGA polymers (PLGA5050 (25 kDa) and PLGA9010 (113 kDa)). A central composite design approach was applied to investigate the effect of formulation composition on dexamethasone release from these microspheres. Mathematical models obtained from this design of experiments study were utilized to generate a design space with maximized microsphere drug loading and reduced burst release. Specifically, a drug loading close to 15% can be achieved and a burst release less than 10% when a composition of 80% PLGA9010 and 90 mg of dexamethasone is used. In order to better describe the lag phase, a heat map was generated based on dexamethasone release from the PLGA microsphere/PVA hydrogel composite coatings. Using the heat map an optimized formulation with minimum lag phase was selected. The microspheres were also characterized for particle size/size distribution, thermal properties and morphology. The particle size was demonstrated to be related to the polymer concentration and the ratio of the two polymers but not to the dexamethasone concentration. Copyright © 2015 Elsevier B.V. All rights reserved.
Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing
2017-04-20
The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard) and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, and then linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes, by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals, as well as any density distribution of terminals. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
NASA Astrophysics Data System (ADS)
Morgenstern, Uwe; Daughney, Christopher J.; Stewart, Michael K.; McDonnell, Jeffrey J.
2013-04-01
The transit time distribution of streamflow is a fundamental descriptor of the flowpaths of water through a catchment and the storage of water within it, controlling its response to landuse change, pollution, ecological degradation, and climate change. Significant time lags (catchment memory) in the responses of streams to these stressors and their amelioration or restoration have been observed. Lag time can be quantified via water transit time of the catchment discharge. Mean transit times can be in the order of years and decades (Stewart et al 2012, Morgenstern et al., 2010). If the water passes through large groundwater reservoirs, it is difficult to quantify and predict the lag time. A pulse shaped tracer that moves with the water can allow quantification of the mean transit time. Environmental tritium is the ideal tracer of the water cycle. Tritium is part of the water molecule, is not affected by chemical reactions in the aquifer, and the bomb tritium from the atmospheric nuclear weapons testing represents a pulse shaped tracer input that allows for very accurate measurement of the age distribution parameters of the water in the catchment discharge. Tritium time series data from all catchment discharges (streams and springs) into Lake Rotorua, New Zealand, allow for accurate determination of the age distribution parameters. The Lake Rotorua catchment tritium data from streams and springs are unique, with high-quality tritium data available over more than four decades, encompassing the time when the bomb-tritium moved through the groundwater system, and from a very high number of streams and springs. Together with the well-defined tritium input into the Rotorua catchment, this data set allows for the best understanding of the water dynamics through a large scale catchment, including validation of complicated water mixing models. Mean transit times of the main streams into the lake range between 27 and 170 years. With such old water discharging into the lake, most of the water inflows into the lake are not yet fully representing the nitrate loading in their sub-catchments from current land use practises. These water inflows are still 'diluted' by pristine old water, but over time, the full amount of nitrate load will arrive at the lake. With the age distribution parameters, it is possible to predict the increase in nitrate load to the lake via the groundwater discharges. All sub-catchments have different mean transit times. The mean transit times are not necessarily correlated with observable hydrogeologic properties like hydraulic conductivity and catchment size. Without such age tracer data, it is therefore difficult to predict mean transit times (lag times, memory) of water transfer through catchments. References: Stewart, M.K., Morgenstern, U., McDonnell, J.J., Pfister, L. (2012). The 'hidden streamflow' challenge in catchment hydrology: A call to action for streamwater transit time analysis. Hydrol. Process. 26,2061-2066, Invited commentary. DOI: 10.1002/hyp.9262 Morgenstern, U., Stewart, M.K., and Stenger, R. (2010) Dating of streamwater using tritium in a post nuclear bomb pulse world: continuous variation of mean transit time with streamflow, Hydrol. Earth Syst. Sci, 14, 2289-2301
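The transit-time reasoning above is usually operationalized with a lumped-parameter convolution: the tritium input series is convolved with a candidate transit time distribution while applying radioactive decay, and the mean transit time is chosen so that the simulated output matches the measured stream tritium. The sketch below uses an exponential transit time distribution and an invented input series; the actual study fits more elaborate mixing models to measured time series.

```python
import numpy as np

T_HALF = 12.32                                  # tritium half-life in years
LAMBDA = np.log(2) / T_HALF

def convolve_tracer(c_in, mtt, dt=1.0):
    """Lumped-parameter convolution: output concentration for an exponential
    transit time distribution with mean transit time `mtt` (years), applying
    radioactive decay along each flow path. c_in is the annual tritium input
    series (oldest first); the distribution is truncated at the record length."""
    n = c_in.size
    tau = (np.arange(n) + 0.5) * dt             # transit times at bin midpoints
    g = (dt / mtt) * np.exp(-tau / mtt)         # exponential transit time distribution
    decay = np.exp(-LAMBDA * tau)
    c_out = np.zeros(n)
    for t in range(n):
        # integrate over all water parcels recharged up to time t
        c_out[t] = np.sum(c_in[t::-1] * g[: t + 1] * decay[: t + 1])
    return c_out

# toy input: a bomb-pulse-like spike superimposed on a low background (TU)
years = np.arange(1950, 2015)
c_in = 2.0 + 80.0 * np.exp(-0.5 * ((years - 1964) / 3.0) ** 2)
for mtt in (10, 50, 120):
    print(mtt, np.round(convolve_tracer(c_in, mtt)[-1], 2))   # modeled stream tritium in the final year
```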
Lin, Weiwei; Huang, Wei; Hu, Min; Brunekreef, Bert; Zhang, Yuanhang; Liu, Xingang; Cheng, Hong; Gehring, Ulrike; Li, Chengcai; Tang, Xiaoyan
2011-01-01
Background: Epidemiologic evidence for a causative association between black carbon (BC) and health outcomes is limited. Objectives: We estimated associations and exposure–response relationships between acute respiratory inflammation in schoolchildren and concentrations of BC and particulate matter with an aerodynamic diameter of ≤ 2.5 μm (PM2.5) in ambient air before and during the air pollution intervention for the 2008 Beijing Olympics. Methods: We measured exhaled nitric oxide (eNO) as an acute respiratory inflammation biomarker and hourly mean air pollutant concentrations to estimate BC and PM2.5 exposure. We used 1,581 valid observations of 36 subjects over five visits in 2 years to estimate associations of eNO with BC and PM2.5 according to generalized estimating equations with polynomial distributed-lag models, controlling for body mass index, asthma, temperature, and relative humidity. We also assessed the relative importance of BC and PM2.5 with two-pollutant models. Results: Air pollution concentrations and eNO were clearly lower during the 2008 Olympics. BC and PM2.5 concentrations averaged over 0–24 hr were strongly associated with eNO, which increased by 16.6% [95% confidence interval (CI), 14.1–19.2%] and 18.7% (95% CI, 15.0–22.5%) per interquartile range (IQR) increase in BC (4.0 μg/m3) and PM2.5 (149 μg/m3), respectively. In the two-pollutant model, estimated effects of BC were robust, but associations between PM2.5 and eNO decreased with adjustment for BC. We found that eNO was associated with IQR increases in hourly BC concentrations up to 10 hr after exposure, consistent with effects primarily in the first hours after exposure. Conclusions: Recent exposure to BC was associated with acute respiratory inflammation in schoolchildren in Beijing. Lower air pollution levels during the 2008 Olympics also were associated with reduced eNO. PMID:21642045
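To make the lag structure concrete, the sketch below builds a polynomial distributed lag (Almon-type) basis for hourly BC exposures, so that the 0-24 hr lag coefficients are constrained to lie on a low-order polynomial. It is a schematic illustration with made-up variable names (bc_lags, eno) and a plain least-squares fit standing in for the authors' GEE specification, which also adjusts for BMI, asthma, temperature and humidity and accounts for repeated measures.

    import numpy as np

    def pdl_basis(lagged_exposure, degree=3):
        """Collapse an (n_obs x n_lags) matrix of lagged exposures into
        degree+1 Almon (polynomial distributed lag) regressors.

        The lag-l coefficient is constrained to beta_l = sum_j theta_j * l**j,
        so the regression only estimates the theta_j."""
        n_lags = lagged_exposure.shape[1]
        P = np.vander(np.arange(n_lags), N=degree + 1, increasing=True)  # P[l, j] = l**j
        return lagged_exposure @ P  # columns are sum_l x_{t-l} * l**j

    # hypothetical data: eNO outcome and hourly BC over the previous 24 hours
    rng = np.random.default_rng(1)
    bc_lags = rng.gamma(2.0, 2.0, size=(200, 25))          # lags 0..24 hr
    eno = 10 + 0.3 * bc_lags[:, :6].sum(axis=1) + rng.normal(0, 1, 200)

    Z = pdl_basis(bc_lags, degree=3)
    X = np.column_stack([np.ones(len(eno)), Z])
    theta, *_ = np.linalg.lstsq(X, np.log(eno), rcond=None)  # log-linear OLS stand-in for GEE
    lag_coefs = np.vander(np.arange(25), N=4, increasing=True) @ theta[1:]
    print(lag_coefs)  # implied lag-specific effects beta_0 .. beta_24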
A general U-block model-based design procedure for nonlinear polynomial control systems
NASA Astrophysics Data System (ADS)
Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua
2016-10-01
The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model first appeared (not yet rigorously defined) in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. In terms of formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems; previous publications have, in the main, been algorithm-based studies and simulation demonstrations. In this sense, the paper can be treated as a landmark in the progression of U-model-based research from an intuitive/heuristic stage to rigorous, formal and comprehensive studies.
Two-dimensional orthonormal trend surfaces for prospecting
NASA Astrophysics Data System (ADS)
Sarma, D. D.; Selvaraj, J. B.
Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned, and the convergence of the method is better than that of the conventional least-squares approximation; the orthonormal-function approach therefore provides a powerful alternative to the classical least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type Z = 1 + x + y + x^2 + xy + y^2 + … + y^n, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample data sets from India: gold accumulation from the Kolar Gold Field, and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets; in both cases, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces are included. The program has provision for logarithmic transformation of the Z variable; if log-transformation is performed, the predicted Z values are reconverted to the original units and the trend-surface map generated for use. The illustration with gold assay data from the Champion lode system of the Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fitted, could be used for further prospecting in the area.
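A compact modern equivalent of the Gram-Schmidt construction described above is a QR factorization of the two-dimensional monomial design matrix: the columns of Q are orthonormal polynomial surfaces evaluated at the data points, and the trend coefficients follow without solving an ill-conditioned normal system. The sketch below uses synthetic data in place of the Kolar Gold Field assays and is not a port of the published FORTRAN-IV program.

    import numpy as np

    def monomial_design(x, y, degree):
        """Columns 1, x, y, x^2, xy, y^2, ..., y^degree evaluated at the data points."""
        cols = []
        for d in range(degree + 1):
            for j in range(d + 1):
                cols.append(x**(d - j) * y**j)
        return np.column_stack(cols)

    def orthonormal_trend_fit(x, y, z, degree):
        A = monomial_design(x, y, degree)
        Q, R = np.linalg.qr(A)        # Q: orthonormalized (Gram-Schmidt-equivalent) basis
        c = Q.T @ z                   # trend coefficients in the orthonormal basis
        return Q @ c, c               # fitted trend-surface values and coefficients

    rng = np.random.default_rng(2)
    x, y = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
    z = 3 + 2*x - y + 0.5*x*y + rng.normal(0, 0.1, 100)   # synthetic "assay" values
    trend, coeffs = orthonormal_trend_fit(x, y, z, degree=2)
    print(np.round(coeffs, 3))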
A method for modeling finite-core vortices in wake-flow calculations
NASA Technical Reports Server (NTRS)
Stremel, P. M.
1984-01-01
A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.
Animating Nested Taylor Polynomials to Approximate a Function
ERIC Educational Resources Information Center
Mazzone, Eric F.; Piper, Bruce R.
2010-01-01
The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…
ERIC Educational Resources Information Center
Young, Forrest W.
A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…
Dual exponential polynomials and linear differential equations
NASA Astrophysics Data System (ADS)
Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne
2018-01-01
We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.
Polynomial Graphs and Symmetry
ERIC Educational Resources Information Center
Goehle, Geoff; Kobayashi, Mitsuo
2013-01-01
Most quadratic functions are not even, but every parabola has symmetry with respect to some vertical line. Similarly, every cubic has rotational symmetry with respect to some point, though most cubics are not odd. We show that every polynomial has at most one point of symmetry and give conditions under which the polynomial has rotational or…
Why the Faulhaber Polynomials Are Sums of Even or Odd Powers of (n + 1/2)
ERIC Educational Resources Information Center
Hersh, Reuben
2012-01-01
By extending Faulhaber's polynomial to negative values of n, the sum of the p'th powers of the first n integers is seen to be an even or odd polynomial in (n + 1/2) and therefore expressible in terms of the sum of the first n integers.
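Two small cases make the claim concrete (standard identities, easily checked by expansion, illustrating the even/odd pattern in (n + 1/2)):

    \sum_{i=1}^{n} i   = \frac{n(n+1)}{2}        = \frac{1}{2}\Big[\big(n+\tfrac{1}{2}\big)^2 - \tfrac{1}{4}\Big]  \quad\text{(even in } n+\tfrac12\text{)},
    \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}  = \frac{1}{3}\Big[\big(n+\tfrac{1}{2}\big)^3 - \tfrac{1}{4}\big(n+\tfrac{1}{2}\big)\Big]  \quad\text{(odd in } n+\tfrac12\text{)}.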
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2012-01-01
We show that there are exactly four quadratic polynomials, Q(x) = x^2 + ax + b, such that (x^2 + ax + b)(x^2 - ax + b) = x^4 + ax^2 + b. For n = 1, 2, ..., these quadratic polynomials can be written as the product of N = 2^n quadratic polynomials in x…
NASA Astrophysics Data System (ADS)
Lei, Hanlun; Xu, Bo; Circi, Christian
2018-05-01
In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, which states that a suitable pair of independent variables are taken as modal coordinates and the remaining state variables are expressed as polynomial series of them. Based on the invariant manifold approach, the general procedure on constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around collinear libration points, and up to order eight and six for the planar and vertical-periodic motions around triangular libration point, respectively. The application of the polynomial expansions constructed lies in that they can be used to determine the initial states for the single-mode motions around equilibrium points. To check the validity, the accuracy of initial states determined by the polynomial expansions is evaluated.
Orbifold E-functions of dual invertible polynomials
NASA Astrophysics Data System (ADS)
Ebeling, Wolfgang; Gusein-Zade, Sabir M.; Takahashi, Atsushi
2016-08-01
An invertible polynomial is a weighted homogeneous polynomial with the number of monomials coinciding with the number of variables and such that the weights of the variables and the quasi-degree are well defined. In the framework of the search for mirror symmetric orbifold Landau-Ginzburg models, P. Berglund and M. Henningson considered a pair (f , G) consisting of an invertible polynomial f and an abelian group G of its symmetries together with a dual pair (f ˜ , G ˜) . We consider the so-called orbifold E-function of such a pair (f , G) which is a generating function for the exponents of the monodromy action on an orbifold version of the mixed Hodge structure on the Milnor fibre of f. We prove that the orbifold E-functions of Berglund-Henningson dual pairs coincide up to a sign depending on the number of variables and a simple change of variables. The proof is based on a relation between monomials (say, elements of a monomial basis of the Milnor algebra of an invertible polynomial) and elements of the whole symmetry group of the dual polynomial.
Knowledge-based system for detailed blade design of turbines
NASA Astrophysics Data System (ADS)
Goel, Sanjay; Lamson, Scott
1994-03-01
A design optimization methodology that couples optimization techniques to CFD analysis for design of airfoils is presented. This technique optimizes 2D airfoil sections of a blade by minimizing the deviation of the actual Mach number distribution on the blade surface from a smooth fit of the distribution. The airfoil is not reverse engineered by specification of a precise distribution of the desired Mach number plot, only general desired characteristics of the distribution are specified for the design. Since the Mach number distribution is very complex, and cannot be conveniently represented by a single polynomial, it is partitioned into segments, each of which is characterized by a different order polynomial. The sum of the deviation of all the segments is minimized during optimization. To make intelligent changes to the airfoil geometry, it needs to be associated with features observed in the Mach number distribution. Associating the geometry parameters with independent features of the distribution is a fairly complex task. Also, for different optimization techniques to work efficiently the airfoil geometry needs to be parameterized into independent parameters, with enough degrees of freedom for adequate geometry manipulation. A high-pressure, low reaction steam turbine blade section was optimized using this methodology. The Mach number distribution was partitioned into pressure and suction surfaces and the suction surface distribution was further subdivided into leading edge, mid section and trailing edge sections. Two different airfoil representation schemes were used for defining the design variables of the optimization problem. The optimization was performed by using a combination of heuristic search and numerical optimization. The optimization results for the two schemes are discussed in the paper. The results are also compared to a manual design improvement study conducted independently by an experienced airfoil designer. The turbine blade optimization system (TBOS) is developed using the described methodology of coupling knowledge engineering with multiple search techniques for blade shape optimization. TBOS removes a major bottleneck in the design cycle by performing multiple design optimizations in parallel, and improves design quality at the same time. TBOS not only improves the design but also the designers' quality of work by taking the mundane repetitive task of design iterations away and leaving them more time for innovative design.
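The objective described above, penalizing the deviation of the surface Mach number distribution from a smooth segment-wise polynomial fit, can be sketched as follows. The segment boundaries, polynomial orders and data here are placeholders, not those of the TBOS system.

    import numpy as np

    def smoothness_deviation(s, mach, segments):
        """Sum over segments of the RMS deviation between the computed Mach
        distribution and a low-order polynomial fit of that segment.

        s        : surface arc-length coordinates
        mach     : Mach numbers from the CFD analysis at those coordinates
        segments : list of (start, stop, poly_order) index ranges
        """
        total = 0.0
        for start, stop, order in segments:
            si, mi = s[start:stop], mach[start:stop]
            fit = np.polyval(np.polyfit(si, mi, order), si)
            total += np.sqrt(np.mean((mi - fit) ** 2))
        return total

    # hypothetical suction-surface distribution split into leading edge, mid, trailing edge
    s = np.linspace(0.0, 1.0, 120)
    mach = 0.3 + 0.9*s - 0.6*s**2 + 0.02*np.sin(40*s)      # wiggles stand in for roughness
    segments = [(0, 30, 3), (30, 90, 2), (90, 120, 3)]
    print(smoothness_deviation(s, mach, segments))          # objective an optimizer would minimize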
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, and therefore the estimation precision can be improved when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal, based on numerical simulations and normal Q-Q plots of the residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
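A one-dimensional toy version of the two-stage idea (local polynomial estimate of the variance function from squared residuals, followed by generalized/weighted least squares) is sketched below; the multivariate case in the paper works analogously with multivariate kernels.

    import numpy as np

    def local_poly_smooth(x, y, bandwidth=0.2, degree=1):
        """Kernel-weighted local polynomial estimate of E[y|x] at each x."""
        out = np.empty_like(y, dtype=float)
        for k, x0 in enumerate(x):
            w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)      # Gaussian kernel weights
            coef = np.polyfit(x - x0, y, degree, w=np.sqrt(w))  # weighted local fit
            out[k] = coef[-1]                                   # fitted value at x0
        return out

    rng = np.random.default_rng(3)
    n = 300
    x = rng.uniform(0, 1, n)
    sigma = 0.2 + 0.8 * x                                       # unknown heteroscedastic function
    y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

    # Stage 1: estimate the variance function from squared OLS residuals
    X = np.column_stack([np.ones(n), x])
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    var_hat = np.clip(local_poly_smooth(x, (y - X @ beta_ols) ** 2), 1e-6, None)

    # Stage 2: generalized (weighted) least squares with weights 1/var_hat
    W = 1.0 / var_hat
    beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
    print(beta_ols, beta_gls)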
Symmetric polynomials in information theory: Entropy and subentropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozsa, Richard; Mitchison, Graeme
2015-06-15
Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions, and we derive the density functions of their Lévy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.
Recursive approach to the moment-based phase unwrapping method.
Langley, Jason A; Brice, Robert G; Zhao, Qun
2010-06-01
The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
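The building block of the approach, representing a 2-D phase map as a sum of products of one-dimensional Legendre polynomials fitted by least squares, can be sketched as below; the recursive derivative step and the comparison with PRELUDE 2D are beyond this illustration, and the phase map here is synthetic.

    import numpy as np
    from numpy.polynomial import legendre as L

    def legendre_surface_fit(phase, deg=(8, 8)):
        """Least-squares fit of a 2-D map by products of 1-D Legendre polynomials
        P_i(x) * P_j(y), returning the coefficient matrix c[i, j]."""
        ny, nx = phase.shape
        x = np.linspace(-1.0, 1.0, nx)
        y = np.linspace(-1.0, 1.0, ny)
        X, Y = np.meshgrid(x, y)
        V = L.legvander2d(X.ravel(), Y.ravel(), deg)       # design matrix of basis products
        c, *_ = np.linalg.lstsq(V, phase.ravel(), rcond=None)
        return c.reshape(deg[0] + 1, deg[1] + 1)

    # hypothetical smooth (unwrapped) phase map
    y, x = np.mgrid[-1:1:128j, -1:1:128j]
    true_phase = 3*x + 2*y**2 + 0.5*x*y
    c = legendre_surface_fit(true_phase, deg=(4, 4))
    recon = L.legval2d(x, y, c)
    print(np.max(np.abs(recon - true_phase)))              # ~machine precision for a polynomial map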
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
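For reference, the zeros of the degree-m Chebyshev polynomial of the first kind on [-1, 1] are x_k = cos((2k - 1)π / (2m)), k = 1, ..., m, and CTP samples are simply their tensor product across dimensions. A minimal sketch follows; the CCM subsampling step is only indicated by a random subset.

    import numpy as np
    from itertools import product

    def chebyshev_zeros(m):
        """Zeros of the Chebyshev polynomial T_m on [-1, 1]."""
        k = np.arange(1, m + 1)
        return np.cos((2*k - 1) * np.pi / (2*m))

    def ctp_samples(m, dim):
        """Chebyshev tensor product sampling: all combinations of the 1-D zeros."""
        z = chebyshev_zeros(m)
        return np.array(list(product(z, repeat=dim)))

    samples = ctp_samples(m=5, dim=3)          # 5**3 = 125 CTP points in [-1, 1]^3
    rng = np.random.default_rng(4)
    ccm = samples[rng.choice(len(samples), size=35, replace=False)]  # CCM-style random subset
    print(samples.shape, ccm.shape)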
NASA Astrophysics Data System (ADS)
Burtyka, Filipp
2018-03-01
The paper first considers the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields from a practical point of view: we implement a solver for this problem. The solver's algorithm has two steps: the first finds solvents having Jordan Normal Form (JNF); the second finds solvents among the remaining matrices. The first step reduces to finding the roots of ordinary polynomials over finite fields; the second is essentially an exhaustive search. The algorithms of the first step rely essentially on the theory of polynomial matrices. We estimate the practical duration of computations using our software implementation (showing, for example, that one cannot construct a unilateral matrix polynomial over a finite field having an arbitrary predefined number of solvents) and answer some questions of theoretical interest.
Polynomial reduction and evaluation of tree- and loop-level CHY amplitudes
Zlotnikov, Michael
2016-08-24
We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for n scattering particles into a σ-moduli multivariate polynomial of what we call the standard form. We show that a standard form polynomial must have a specific ladder type monomial structure, which has finite size at any n, with highest multivariate degree given by (n - 3)(n - 4)/2. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. Furthermore, the prescription is then applied explicitly to some tree and one-loop amplitude examples.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
A polynomial based model for cell fate prediction in human diseases.
Ma, Lichun; Zheng, Jie
2017-12-21
Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little differences. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (achieves 86.62%), the one within genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
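A minimal scikit-learn analogue of the evaluation pipeline (polynomial features of selected genes, a degree-1 versus higher-degree comparison, and 10-fold cross-validation) is shown below on synthetic data. The published model is derived from a Taylor expansion and uses correlation- or apoptosis-pathway-based gene selection rather than the random features used here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 10))                 # expression of 10 pre-selected genes (synthetic)
    y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=200) > 0).astype(int)  # cell fate label

    for degree in (1, 2, 3):
        model = make_pipeline(StandardScaler(),
                              PolynomialFeatures(degree=degree, include_bias=False),
                              LogisticRegression(max_iter=2000))
        scores = cross_val_score(model, X, y,
                                 cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
        print(f"degree {degree}: mean 10-fold accuracy = {scores.mean():.3f}")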
Education and Economic Growth in Pakistan: A Cointegration and Causality Analysis
ERIC Educational Resources Information Center
Afzal, Muhammad; Rehman, Hafeez Ur; Farooq, Muhammad Shahid; Sarwar, Kafeel
2011-01-01
This study explored the cointegration and causality between education and economic growth in Pakistan by using time series data on real gross domestic product (RGDP), labour force, physical capital and education from 1970-1971 to 2008-2009 were used. Autoregressive Distributed Lag (ARDL) Model of Cointegration and the Augmented Granger Causality…
Aging in mortal superdiffusive Lévy walkers.
Stage, Helena
2017-12-01
A growing body of literature examines the effects of superdiffusive subballistic movement premeasurement (aging or time lag) on observations arising from single-particle tracking. A neglected aspect is the finite lifetime of these Lévy walkers, be they proteins, cells, or larger structures. We examine the effects of aging on the motility of mortal walkers, and discuss the means by which permanent stopping of walkers may be categorized as arising from "natural" death or experimental artifacts such as low photostability or radiation damage. This is done by comparison of the walkers' mean squared displacement (MSD) with the front velocity of propagation of a group of walkers, which is found to be invariant under time lags. For any running time distribution of a mortal random walker, the MSD is tempered by the stopping rate θ. This provides a physical interpretation for truncated heavy-tailed diffusion processes and serves as a tool by which to better classify the underlying running time distributions of random walkers. Tempering of aged MSDs raises the issue of misinterpreting superdiffusive motion which appears Brownian or subdiffusive over certain time scales.
The WS transform for the Kuramoto model with distributed amplitudes, phase lag and time delay
NASA Astrophysics Data System (ADS)
Lohe, M. A.
2017-12-01
We apply the Watanabe-Strogatz (WS) transform to a generalized Kuramoto model with distributed parameters describing the amplitude of oscillation, phase lag, and time delay at each node of the system. The model has global coupling and identical frequencies, but allows for repulsive interactions at arbitrary nodes leading to conformist-contrarian phenomena together with variable amplitude and time-delay effects. We show how to determine the initial values of the WS system for any initial conditions for the Kuramoto system, and investigate the asymptotic behaviour of the WS variables. For the case of zero time delay the possible asymptotic configurations are determined by the sign of a single parameter μ which measures whether or not the attractive nodes dominate the repulsive nodes. If μ>0 the system completely synchronizes from general initial conditions, whereas if μ<0 one of two types of phase-locked synchronization occurs, depending on the initial values, while for μ=0 periodic solutions can occur. For the case of arbitrary non-uniform time delays we derive a stability condition for completely synchronized solutions.
Gamma-Ray Bursts and Cosmology
NASA Technical Reports Server (NTRS)
Norris, Jay P.
2003-01-01
The unrivalled, extreme luminosities of gamma-ray bursts (GRBs) make them the favored beacons for sampling the high redshift Universe. To employ GRBs to study the cosmic terrain -- e.g., star and galaxy formation history -- GRB luminosities must be calibrated, and the luminosity function versus redshift must be measured or inferred. Several nascent relationships between gamma-ray temporal or spectral indicators and luminosity or total energy have been reported. These measures promise to further our understanding of GRBs once the connections between the luminosity indicators and GRB jets and emission mechanisms are better elucidated. The current distribution of 33 redshifts determined from host galaxies and afterglows peaks near z ~ 1, whereas for the full BATSE sample of long bursts, the lag-luminosity relation predicts a broad peak z ~ 1-4 with a tail to z ~ 20, in rough agreement with theoretical models based on star formation considerations. For some GRB subclasses and apparently related phenomena -- short bursts, long-lag bursts, and X-ray flashes -- the present information on their redshift distributions is sparse or entirely lacking, and progress is expected in the Swift era when prompt alerts become numerous.
NASA Astrophysics Data System (ADS)
Yang, Zhong; Zhang, BoMing; Zhao, Lin; Sun, XinYang
2011-02-01
A shear-lag model is applied to study the stress transfer around a broken fiber within unidirectional fiber-reinforced composites (FRC) subjected to uniaxial tensile loading along the fiber direction. Matrix damage and interfacial debonding, which are the main failure modes, are considered in the model. The maximum stress criterion with the linear damage evolution theory is used for the matrix. The slipping friction stress in the interfacial debonding region is described using Coulomb friction theory, in which the interfacial clamping stress comes from the radial residual stress and the mismatch of Poisson's ratios of the constituents (fiber and matrix). The stress distributions in the fiber and matrix are obtained from shear-lag theory together with boundary conditions that include force continuity and displacement compatibility constraints in the broken and neighboring intact fibers. The results give the axial stress distribution in the fibers and the shear stress at the interface, and the theory compares reasonably well with measurements made with a polarized light microscope. Relation curves between the damage, debonding and ineffective region lengths and the external strain loading are obtained.
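For orientation, the force balance underlying any shear-lag model of a single fiber of radius r_f is obtained by equating the change of the fiber axial force to the interfacial shear traction acting on its surface (a generic relation, not the full boundary-value problem with matrix damage and Coulomb friction treated in the paper):

    \pi r_f^2\, d\sigma_f = -2\pi r_f\, \tau_i(z)\, dz
    \quad\Longrightarrow\quad
    \frac{d\sigma_f}{dz} = -\frac{2\,\tau_i(z)}{r_f},

so the axial fiber stress recovers from zero at the break over a distance set by the interfacial shear stress profile τ_i(z): frictional sliding near the break, elastic transfer beyond the debond.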
Kumar, Sanjay
2010-01-01
The widespread use of herbicides for weed control and crop productivity in modern agriculture poses a threat to economically important crops by way of cytological damage to the cells of the crop plant or other side effects induced by the herbicides. In the present communication, the author describes the effects of 2,4-D and Isoproturon on chromosomal morphology in mitotic cells of Triticum aestivum L. The wheat seedlings were treated with a range of concentrations (50-1200 ppm) of 2,4-D and Isoproturon for 72 h at room temperature. In the mitotic cells, twelve distinct chromosome structural abnormalities were observed relative to the control. The observed irregularities were stickiness, c-mitosis, multipolar chromosomes with or without spindles, fragments and bridges, lagging chromosomes, unequal distribution of chromosomes, over-contracted chromosomes, unoriented chromosomes, star-shaped arrangement of the chromosomes, increased cell size and failure of cell plate formation. Abnormalities such as stickiness, fragments, bridges, lagging or dysjunction, unequal distribution and over-contracted chromosomes occurred frequently.
Generalized Freud's equation and level densities with polynomial potential
NASA Astrophysics Data System (ADS)
Boobna, Akshat; Ghosh, Saugata
2013-08-01
We study orthogonal polynomials with weight exp[-NV(x)], where V(x) = Σ_{k=1}^{d} a_{2k} x^{2k}/2k is a polynomial of order 2d. We derive the generalised Freud's equations for d = 3, 4 and 5 and using these obtain R_μ = h_μ/h_{μ-1}, where h_μ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of R_μ, are obtained using Freud's equation and, using this, explicit results for the level densities as N → ∞ are derived.
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.
1985-01-01
A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
First Instances of Generalized Expo-Rational Finite Elements on Triangulations
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre
2011-12-01
In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is the derivation of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualizations of the graphs of the new polynomial simplicial patches and their control polygons.
The Translated Dowling Polynomials and Numbers.
Mangontarum, Mahid M; Macodi-Ringia, Amila P; Abdulcarim, Normalah S
2014-01-01
More properties for the translated Whitney numbers of the second kind such as horizontal generating function, explicit formula, and exponential generating function are proposed. Using the translated Whitney numbers of the second kind, we will define the translated Dowling polynomials and numbers. Basic properties such as exponential generating functions and explicit formula for the translated Dowling polynomials and numbers are obtained. Convexity, integral representation, and other interesting identities are also investigated and presented. We show that the properties obtained are generalizations of some of the known results involving the classical Bell polynomials and numbers. Lastly, we established the Hankel transform of the translated Dowling numbers.
NASA Astrophysics Data System (ADS)
Xia, Xintao; Wang, Zhongyu
2008-10-01
For some statistical methods of analysing the stability of a system, it is difficult to resolve the problems of an unknown probability distribution and a small sample. Therefore, a novel method is proposed in this paper to resolve these problems. The method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and a degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear systematic errors and periodic systematic errors, and some mixed systems. The proposed method of analysing systematic stability is thereby validated.
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
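The core numerical idea, fitting the thermodynamic integration data ⟨dU/dλ⟩ with a polynomial and integrating the fit analytically to obtain ΔF, is easy to sketch. The example below uses synthetic data and ordinary polynomial regression, whereas the paper also examines interpolation schemes and non-equidistant λ values.

    import numpy as np

    def ti_free_energy(lambdas, dudl, degree=3):
        """Estimate dF = integral_0^1 <dU/dlambda> dlambda by fitting a polynomial
        to the TI data and integrating the fit analytically."""
        coefs = np.polyfit(lambdas, dudl, degree)
        antideriv = np.polyint(coefs)
        return np.polyval(antideriv, 1.0) - np.polyval(antideriv, 0.0)

    # synthetic TI curve with known integral: dU/dl = 3 l^2 - 2 l + 1  =>  dF = 1
    lam = np.linspace(0.0, 1.0, 11)
    rng = np.random.default_rng(6)
    dudl = 3*lam**2 - 2*lam + 1 + rng.normal(0, 0.02, lam.size)   # noisy "simulation" estimates

    trapezoid = np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam))
    print("polynomial TI estimate:", ti_free_energy(lam, dudl, degree=3))
    print("trapezoidal estimate  :", trapezoid)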
NASA Astrophysics Data System (ADS)
Alhaidari, A. D.; Taiwo, T. J.
2017-02-01
Using a recent formulation of quantum mechanics without a potential function, we present a four-parameter system associated with the Wilson and Racah polynomials. The continuum scattering states are written in terms of the Wilson polynomials whose asymptotics give the scattering amplitude and phase shift. On the other hand, the finite number of discrete bound states are associated with the Racah polynomials.
On the Waring problem for polynomial rings
Fröberg, Ralf; Ottaviani, Giorgio; Shapiro, Boris
2012-01-01
In this note we discuss an analog of the classical Waring problem for . Namely, we show that a general homogeneous polynomial of degree divisible by k≥2 can be represented as a sum of at most kn k-th powers of homogeneous polynomials in . Noticeably, kn coincides with the number obtained by naive dimension count. PMID:22460787
On computation of Gröbner bases for linear difference systems
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.
2006-04-01
In this paper, we present an algorithm for computing Gröbner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions.
Precision measurement of the η → π⁺π⁻π⁰ Dalitz plot distribution with the KLOE detector
NASA Astrophysics Data System (ADS)
Anastasi, A.; Babusci, D.; Bencivenni, G.; Berlowski, M.; Bloise, C.; Bossi, F.; Branchini, P.; Budano, A.; Caldeira Balkeståhl, L.; Cao, B.; Ceradini, F.; Ciambrone, P.; Curciarello, F.; Czerwinski, E.; D'Agostini, G.; Danè, E.; De Leo, V.; De Lucia, E.; De Santis, A.; De Simone, P.; Di Cicco, A.; Di Domenico, A.; Di Salvo, R.; Domenici, D.; D'Uffizi, A.; Fantini, A.; Felici, G.; Fiore, S.; Gajos, A.; Gauzzi, P.; Giardina, G.; Giovannella, S.; Graziani, E.; Happacher, F.; Heijkenskjöld, L.; Ikegami Andersson, W.; Johansson, T.; Kaminska, D.; Krzemien, W.; Kupsc, A.; Loffredo, S.; Mandaglio, G.; Martini, M.; Mascolo, M.; Messi, R.; Miscetti, S.; Morello, G.; Moricciani, D.; Moskal, P.; Papenbrock, M.; Passeri, A.; Patera, V.; Perez del Rio, E.; Ranieri, A.; Santangelo, P.; Sarra, I.; Schioppa, M.; Silarski, M.; Sirghi, F.; Tortora, L.; Venanzoni, G.; Wislicki, W.; Wolke, M.
2016-05-01
Using 1.6 fb⁻¹ of e⁺e⁻ → ϕ → ηγ data collected with the KLOE detector at DAΦNE, the Dalitz plot distribution for the η → π⁺π⁻π⁰ decay is studied with the world's largest sample of ~4.7 × 10⁶ events. The Dalitz plot density is parametrized as a polynomial expansion up to cubic terms in the normalized dimensionless variables X and Y. The experiment is sensitive to all charge conjugation conserving terms of the expansion, including a gX²Y term. The statistical uncertainty of all parameters is improved by a factor of two with respect to earlier measurements.
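For context, the cubic expansion of the Dalitz plot density referred to above is conventionally written as (the exact set of retained terms and their measured values should be taken from the paper itself):

    |A(X,Y)|^2 \simeq N\big(1 + aY + bY^2 + cX + dX^2 + eXY + fY^3 + gX^2Y + hXY^2 + lX^3\big),

where the terms odd in X (c, e, h, l) violate charge conjugation and are expected to vanish, leaving a, b, d, f and g as the C-conserving parameters the measurement is sensitive to.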
Taj, Tahir; Jakobsson, Kristina; Stroh, Emilie; Oudin, Anna
2016-05-01
Air pollution can increase the symptoms of asthma and has an acute effect on the number of emergency room visits and hospital admissions because of asthma, but little is known about the effect of air pollution on the number of primary health care (PHC) visits for asthma. The aim was to investigate the association between air pollution and the number of PHC visits for asthma in Scania, southern Sweden. Data on daily PHC visits for asthma were obtained from a regional healthcare database in Scania, which covers approximately half a million people. Air pollution data from 2005 to 2010 were obtained from six urban background stations. We used a case-crossover study design and a distributed lag non-linear model in the analysis. The air pollution levels were generally within the EU air quality guidelines. The mean number of daily PHC visits for asthma was 34. The number of PHC visits increased by 5% (95% confidence interval (CI): 3.91-6.25%) with every 10 µg/m³ increase in daily mean NO2 (lag 0-15), suggesting that daily air pollution levels are associated with PHC visits for asthma. Even though the air quality in Scania between 2005 and 2010 was within the EU's guidelines, the number of PHC visits for asthma increased with increasing levels of air pollution. This suggests that, as well as increasing hospital and emergency room visits, air pollution increases the burden on PHC due to milder symptoms of asthma.
NASA Astrophysics Data System (ADS)
Miller, W., Jr.; Li, Q.
2015-04-01
The Wilson and Racah polynomials can be characterized as basis functions for irreducible representations of the quadratic symmetry algebra of the quantum superintegrable system on the 2-sphere, HΨ = EΨ, with generic 3-parameter potential. Clearly, the polynomials are expansion coefficients for one eigenbasis of a symmetry operator L2 of H in terms of an eigenbasis of another symmetry operator L1, but the exact relationship appears not to have been made explicit. We work out the details of the expansion to show, explicitly, how the polynomials arise and how the principal properties of these functions - the measure, the 3-term recurrence relation, the 2nd order difference equation, the duality of these relations, permutation symmetry, intertwining operators and an alternate derivation of Wilson functions - follow from the symmetry of this quantum system. This paper is an exercise to show that quantum mechanical concepts and recurrence relations for Gaussian hypergeometric functions alone suffice to explain these properties; we make no assumptions about the structure of Wilson polynomials/functions, but derive them from quantum principles. There is active interest in the relation between multivariable Wilson polynomials and the quantum superintegrable system on the n-sphere with generic potential, and these results should aid in the generalization. By contracting function space realizations of irreducible representations of this quadratic algebra to those of the other superintegrable systems, one can obtain the full Askey scheme of orthogonal hypergeometric polynomials. All of these contractions of superintegrable systems with potential are uniquely induced by Wigner Lie algebra contractions of so(3, C) and e(2, C). All of the polynomials produced are interpretable as quantum expansion coefficients. It is important to extend this process to higher dimensions.
Piecewise polynomial representations of genomic tracks.
Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz
2012-01-01
Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
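A stripped-down version of the representation, fitting each genomic window with its own low-order polynomial over the local coordinate, is sketched below: degree 0 reproduces piecewise-constant segmentation (window means), while degree 1 captures local trends such as coverage slopes near exon borders. The fixed-size windows here are placeholders rather than data-driven breakpoints.

    import numpy as np

    def piecewise_poly(positions, values, window=100, degree=1):
        """Fit an independent polynomial of the given degree in each window of
        chromosomal coordinates; return a list of (start, stop, coefficients)."""
        pieces = []
        start = positions.min()
        stop_all = positions.max()
        while start <= stop_all:
            stop = start + window
            mask = (positions >= start) & (positions < stop)
            if mask.sum() > degree:                       # need enough points to fit
                coefs = np.polyfit(positions[mask] - start, values[mask], degree)
                pieces.append((start, stop, coefs))
            start = stop
        return pieces

    # synthetic "track": noisy coverage with a step (e.g. a copy-number change)
    rng = np.random.default_rng(7)
    pos = np.arange(0, 1000)
    cov = np.where(pos < 500, 30, 60) + rng.normal(0, 3, pos.size)

    for start, stop, coefs in piecewise_poly(pos, cov, window=250, degree=0):
        print(start, stop, np.round(coefs, 1))   # degree-0 coefficient = window mean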
Optimal control and Galois theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelikin, M I; Kiselev, D D; Lokutsievskiy, L V
2013-11-30
An important role is played in the solution of a class of optimal control problems by a certain special polynomial of degree 2(n-1) with integer coefficients. The linear independence of a family of k roots of this polynomial over the field Q implies the existence of a solution of the original problem with optimal control in the form of an irrational winding of a k-dimensional Clifford torus, which is passed in finite time. In the paper, we prove that for n ≤ 15 one can take an arbitrary positive integer not exceeding [n/2] for k. The apparatus developed in the paper is applied to the systems of Chebyshev-Hermite polynomials and generalized Chebyshev-Laguerre polynomials. It is proved that for such polynomials of degree 2m every subsystem of [(m+1)/2] roots with pairwise distinct squares is linearly independent over the field Q. Bibliography: 11 titles.
Lifting q-difference operators for Askey-Wilson polynomials and their weight function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atakishiyeva, M. K.; Atakishiyev, N. M., E-mail: natig_atakishiyev@hotmail.com
2011-06-15
We determine an explicit form of a q-difference operator that transforms the continuous q-Hermite polynomials H_n(x|q) of Rogers into the Askey-Wilson polynomials p_n(x; a, b, c, d|q) on the top level in the Askey q-scheme. This operator represents a special convolution-type product of four one-parameter q-difference operators of the form ε_q(c_q D_q) (where c_q are some constants), defined as Exton's q-exponential function ε_q(z) in terms of the Askey-Wilson divided q-difference operator D_q. We also determine another q-difference operator that lifts the orthogonality weight function for the continuous q-Hermite polynomials H_n(x|q) up to the weight function associated with the Askey-Wilson polynomials p_n(x; a, b, c, d|q).
Abd-Elhameed, W. M.
2014-01-01
This paper is concerned with deriving some new formulae expressing explicitly the high-order derivatives of Jacobi polynomials whose parameters differ by one or two, of any degree and of any order, in terms of the corresponding Jacobi polynomials. The derivative formulae for Chebyshev polynomials of the third and fourth kinds of any degree and of any order in terms of the corresponding Chebyshev polynomials are deduced as special cases. Some new reduction formulae for summing certain terminating hypergeometric functions of unit argument are also deduced. As an application, and with the aid of the newly introduced derivative formulae, an algorithm for solving special sixth-order boundary value problems is implemented using the Galerkin method. A numerical example is presented to ascertain the validity and applicability of the proposed algorithms. PMID:25386599
A recursive algorithm for Zernike polynomials
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
The analysis of a function defined on a rotationally symmetric system, with either a circular or annular pupil, is discussed. In order to numerically analyze such systems it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm is developed that can be used to generate the Zernike polynomials up to a given order. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (ε, 1). The terms in the preceding row - the (J-1) row - up to the N+1 term are needed for generating the (J,N)th term; thus, the algorithm generates an upper left-triangular table. The algorithm was implemented on a computer, and the necessary support program is also included.
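For readers who want a concrete reference point, the circular-pupil radial Zernike polynomials that such recursions reproduce have the closed form R_n^m(ρ) = Σ_k (-1)^k (n-k)! / [k! ((n+m)/2-k)! ((n-m)/2-k)!] ρ^(n-2k). The sketch below evaluates that formula directly; it is not the annular (ε, 1) recursive scheme of the report.

    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial Zernike polynomial R_n^m(rho) on the unit disk (closed form).
        Requires n >= |m| and n - |m| even."""
        m = abs(m)
        if (n - m) % 2:
            return 0.0
        return sum(
            (-1) ** k * factorial(n - k)
            / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
            * rho ** (n - 2 * k)
            for k in range((n - m) // 2 + 1)
        )

    # spot checks: R_2^0(rho) = 2 rho^2 - 1, R_3^1(rho) = 3 rho^3 - 2 rho
    print(zernike_radial(2, 0, 0.5), 2 * 0.5**2 - 1)
    print(zernike_radial(3, 1, 0.5), 3 * 0.5**3 - 2 * 0.5)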
Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe
2017-06-26
In recent years significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They make it possible to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, design techniques for freeform surfaces remain underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, previously developed for wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
Inequalities for a polynomial and its derivative
NASA Astrophysics Data System (ADS)
Chanam, Barchand; Dewan, K. K.
2007-12-01
Let p(z), 1 ≤ μ ≤ n, be a polynomial of degree n such that p(z) ≠ 0 in z…
Quantization of gauge fields, graph polynomials and graph homology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van
2013-09-15
We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.
An algorithmic approach to solving polynomial equations associated with quantum circuits
NASA Astrophysics Data System (ADS)
Gerdt, V. P.; Zinin, M. V.
2009-12-01
In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called a lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, a system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Gröbner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F_2.
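As a small illustration of a lexicographic Gröbner basis computation over F_2 (here using SymPy rather than the authors' own implementation, and assuming SymPy's groebner accepts the modulus option for coefficients in GF(2); genuine Boolean reasoning also requires adjoining the field equations v² + v for each variable):

    from sympy import groebner, symbols

    x, y, z = symbols('x y z')

    # a toy polynomial system over F_2; the field equations v**2 + v force Boolean solutions
    polys = [x*y + z, x + y + z + 1, x**2 + x, y**2 + y, z**2 + z]
    G = groebner(polys, x, y, z, order='lex', modulus=2)
    print(G)   # lexicographic basis; back-substitution enumerates the Boolean roots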
Recurrence approach and higher order polynomial algebras for superintegrable monopole systems
NASA Astrophysics Data System (ADS)
Hoque, Md Fazlul; Marquette, Ian; Zhang, Yao-Zhong
2018-05-01
We revisit the MIC-harmonic oscillator in flat space with monopole interaction and derive the polynomial algebra satisfied by the integrals of motion and its energy spectrum using the ad hoc recurrence approach. We introduce a superintegrable monopole system in a generalized Taub-Newman-Unti-Tamburino (NUT) space. The Schrödinger equation of this model is solved in spherical coordinates in the framework of Stäckel transformation. It is shown that wave functions of the quantum system can be expressed in terms of the product of Laguerre and Jacobi polynomials. We construct ladder and shift operators based on the corresponding wave functions and obtain the recurrence formulas. By applying these recurrence relations, we construct higher order algebraically independent integrals of motion. We show that the integrals form a polynomial algebra. We construct the structure functions of the polynomial algebra and obtain the degenerate energy spectra of the model.
Polynomial interpolation and sums of powers of integers
NASA Astrophysics Data System (ADS)
Cereceda, José Luis
2017-02-01
In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2, …, k, where f_k(1), f_k(2), …, f_k(k) are k arbitrarily chosen (real or complex) values. Then, we focus on the case that f_k(n) is given by the sum of powers of the first n positive integers, S_k(n) = 1^k + 2^k + ⋯ + n^k, and show that S_k(n) admits the polynomial representations S_k(n) = P_k(n) and S_k(n) = Q_k(n) for all n = 1, 2, …, and k ≥ 1, where the first representation involves the Eulerian numbers, and the second one the Stirling numbers of the second kind. Finally, we consider yet another polynomial formula for S_k(n) alternative to the well-known formula of Bernoulli.
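Concretely, the two representations alluded to above correspond to classical identities of the following type, with ⟨k j⟩ the Eulerian numbers and S(k, j) the Stirling numbers of the second kind (the paper's specific P_k(n) and Q_k(n) are constructed by interpolation but agree with such closed forms for all n):

    S_k(n) \;=\; \sum_{j=0}^{k-1} \left\langle {k \atop j} \right\rangle \binom{n+j+1}{k+1}
          \;=\; \sum_{j=1}^{k} j!\, S(k,j)\, \binom{n+1}{j+1},

e.g. for k = 2 both expressions reduce to n(n+1)(2n+1)/6.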
Polynomial elimination theory and non-linear stability analysis for the Euler equations
NASA Technical Reports Server (NTRS)
Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.
1986-01-01
Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
NASA Astrophysics Data System (ADS)
Botti, Lorenzo; Di Pietro, Daniele A.
2018-10-01
We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.
NASA Astrophysics Data System (ADS)
Gassara, H.; El Hajjaji, A.; Chaabane, M.
2017-07-01
This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns the special case in which the polynomial matrices do not depend on the estimated state variables. The second is the general case in which the polynomial matrices may depend on unmeasurable system states that are to be estimated. For the latter case, two design procedures are proposed. The first gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained via a single-step approach, overcoming the drawback of the two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS), which can be solved via SOSTOOLS and a semidefinite programming solver. Illustrative examples show the validity and applicability of the proposed results.
Multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials
NASA Astrophysics Data System (ADS)
Odake, Satoru; Sasaki, Ryu
2017-04-01
As the fourth stage of the project multi-indexed orthogonal polynomials, we present the multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials in the framework of ‘discrete quantum mechanics’ with real shifts defined on the semi-infinite lattice in one dimension. They are obtained, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier, from the quantum mechanical systems corresponding to the original orthogonal polynomials by multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of virtual state vectors. The virtual state vectors are the solutions of the matrix Schrödinger equation on all the lattice points having negative energies and infinite norm. This is in good contrast to the (q-)Racah systems defined on a finite lattice, in which the ‘virtual state’ vectors satisfy the matrix Schrödinger equation except for one of the two boundary points.
Using Tutte polynomials to analyze the structure of the benzodiazepines
NASA Astrophysics Data System (ADS)
Cadavid Muñoz, Juan José
2014-05-01
Graph theory in general, and Tutte polynomials in particular, are used to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and might therefore show similar psycho-active actions for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, the Tutte polynomial is computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using the GraphTheory package of the Maple computer algebra system. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend these results and compare them with those obtained from the Tutte polynomials of the benzodiazepines.
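One of the numeric characteristics mentioned above, the number of spanning trees, equals the Tutte polynomial evaluated at (1, 1) for a connected graph and can be computed directly with Kirchhoff's matrix-tree theorem. The sketch below does this in Python on a placeholder fused two-ring skeleton, not an actual benzodiazepine molecular graph, and is not the Maple workflow used in the paper.

```python
# Count spanning trees (= Tutte polynomial at (1,1) for a connected graph)
# via Kirchhoff's matrix-tree theorem on a placeholder fused-ring graph.
import numpy as np
import networkx as nx

G = nx.Graph()
ring1 = [0, 1, 2, 3, 4, 5]           # six-membered ring
ring2 = [4, 5, 6, 7, 8, 9, 10]       # seven-membered ring sharing the 4-5 bond
G.add_edges_from(zip(ring1, ring1[1:] + ring1[:1]))
G.add_edges_from(zip(ring2, ring2[1:] + ring2[:1]))

L = nx.laplacian_matrix(G).toarray().astype(float)
# Any cofactor of the Laplacian equals the number of spanning trees.
n_spanning_trees = round(np.linalg.det(L[1:, 1:]))
print("spanning trees:", n_spanning_trees)
```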
A Spectral Algorithm for Solving the Relativistic Vlasov-Maxwell Equations
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2001-01-01
A spectral method algorithm is developed for the numerical solution of the full six-dimensional Vlasov-Maxwell system of equations. Here, the focus is on the electron distribution function, with positive ions providing a constant background. The algorithm consists of a Jacobi polynomial-spherical harmonic formulation in velocity space and a trigonometric formulation in position space. A transform procedure is used to evaluate nonlinear terms. The algorithm is suitable for performing moderate resolution simulations on currently available supercomputers for both scientific and engineering applications.
NASA Technical Reports Server (NTRS)
Poole, L. R.
1975-01-01
A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.
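The kind of comparison described above can be illustrated on synthetic data. The sketch below, which uses a made-up depth grid rather than the Saco Bay bathymetry and does not implement the wave-refraction model, fits a quadratic least-squares surface and a bicubic spline and evaluates both at an off-grid point.

```python
# Hedged sketch on synthetic bathymetry: quadratic least-squares surface vs.
# bicubic spline interpolation, evaluated at an off-grid point.
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0.0, 10.0, 25)
y = np.linspace(0.0, 10.0, 25)
X, Y = np.meshgrid(x, y, indexing='ij')
depth = 20.0 - 0.8 * X + 0.1 * Y**2 + 0.5 * np.sin(X)   # synthetic bottom

# Quadratic least-squares surface: depth ~ 1, x, y, x^2, xy, y^2
A = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel(),
                     X.ravel()**2, X.ravel()*Y.ravel(), Y.ravel()**2])
coef, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)

# Bicubic spline interpolant (kx = ky = 3)
spline = RectBivariateSpline(x, y, depth, kx=3, ky=3)

xq, yq = 3.3, 7.1
quad = np.array([1.0, xq, yq, xq**2, xq*yq, yq**2]) @ coef
print("quadratic LSQ:  ", quad)
print("bicubic spline: ", spline(xq, yq)[0, 0])
print("true synthetic: ", 20.0 - 0.8*xq + 0.1*yq**2 + 0.5*np.sin(xq))
```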
Waypoints Following Guidance for Surface-to-Surface Missiles
NASA Astrophysics Data System (ADS)
Zhou, Hao; Khalil, Elsayed M.; Rahman, Tawfiqur; Chen, Wanchun
2018-04-01
The paper proposes a waypoint-following guidance law. In this method, an optimal trajectory is first generated and then represented, using a polynomial, by a set of waypoints distributed from the starting point to the final target point. The guidance system then works by issuing the guidance commands needed to move from one waypoint to the next. Here the method is applied to a surface-to-surface missile. The results show that the method is feasible for on-board application.
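The waypoint-representation step can be sketched as follows; the trajectory shape, polynomial degree, and waypoint count below are placeholders for illustration and are not the paper's optimal trajectory or guidance law.

```python
# Illustrative sketch: represent a precomputed trajectory altitude profile by a
# polynomial in downrange distance and sample evenly spaced waypoints from it.
import numpy as np

downrange = np.linspace(0.0, 100e3, 50)                 # m, placeholder trajectory
altitude = 20e3 * np.sin(np.pi * downrange / 100e3)     # m, placeholder shape

coeffs = np.polyfit(downrange, altitude, deg=6)         # polynomial representation
poly = np.poly1d(coeffs)

waypoints_x = np.linspace(0.0, 100e3, 11)               # start ... target
waypoints = np.column_stack([waypoints_x, poly(waypoints_x)])
for wx, wz in waypoints:
    print(f"waypoint: downrange {wx/1000:6.1f} km, altitude {wz/1000:5.2f} km")
```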
Evolutionary lag times and recent origin of the biota of an ancient desert (Atacama-Sechura).
Guerrero, Pablo C; Rosas, Marcelo; Arroyo, Mary T K; Wiens, John J
2013-07-09
The assembly of regional biotas and organismal responses to anthropogenic climate change both depend on the capacity of organisms to adapt to novel ecological conditions. Here we demonstrate the concept of evolutionary lag time, the time between when a climatic regime or habitat develops in a region and when it is colonized by a given clade. We analyzed the time of colonization of four clades (three plant genera and one lizard genus) into the Atacama-Sechura Desert of South America, one of Earth's driest and oldest deserts. We reconstructed time-calibrated phylogenies for each clade and analyzed the timing of shifts in climatic distributions and biogeography and compared these estimates to independent geological estimates of the time of origin of these deserts. Chaetanthera and Malesherbia (plants) and Liolaemus (animal) invaded arid regions of the Atacama-Sechura Desert in the last 10 million years, some 20 million years after the initial onset of aridity in the region. There are also major lag times between when these clades colonized the region and when they invaded arid habitats within the region (typically 4-14 million years). Similarly, hyperarid climates developed ∼8 million years ago, but the most diverse plant clade in these habitats (Nolana) only colonized them ∼2 million years ago. Similar evolutionary lag times may occur in other organisms and habitats, but these results are important in suggesting that many lineages may require very long time scales to adapt to modern desertification and climatic change.
Physiological Equivalent Temperature Index and mortality in Tabriz (The northwest of Iran).
Sharafkhani, Rahim; Khanjani, Narges; Bakhtiari, Bahram; Jahani, Yunes; Sadegh Tabrizi, Jafar
2018-01-01
There are few epidemiological studies in Iran on climate change and the effect of temperature variation on health using human thermal indices such as the Physiological Equivalent Temperature (PET) index. This study was conducted in Tabriz, in the northwest of Iran; Distributed Lag Non-linear Models (DLNM) combined with quasi-Poisson regression models were used to assess the impact of PET on mortality, using the DLNM package in R. The effects of air pollutants, time trend, day of the week and holidays were controlled for as confounders. There was a significant relation between high (30°C, 27°C) and low (-0.8°C, -9.2°C and -14.2°C) PET values and total (non-accidental) mortality, and a significant increase in respiratory and cardiovascular deaths at high PET values. Heat stress significantly increased the cumulative relative risk (CRR) for total (non-accidental), respiratory and cardiovascular mortality (non-accidental death: CRR at PET = 30°C, lag 0-30, 1.67, 95% CI 1.31-2.13; respiratory death: CRR at PET = 30°C, lag 0-13, 1.88, 95% CI 1.30-2.72; cardiovascular death: CRR at PET = 30°C, lag 0-30, 1.67, 95% CI 1.16-2.40). Heat stress increases the risk of total (non-accidental) and respiratory mortality, whereas cold stress decreases the risk of total (non-accidental) mortality in Tabriz, one of the cold cities of Iran.
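The study itself used the R DLNM machinery with quasi-Poisson regression on real PET and mortality series. As a much simpler stand-in, the sketch below fits a polynomial distributed-lag Poisson regression in Python on simulated data, only to show how a constrained lag basis is built; the data, lag length, and polynomial degree are all invented for illustration.

```python
# Simplified polynomial distributed-lag Poisson regression on simulated data
# (a stand-in for the DLNM/quasi-Poisson analysis described above).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, max_lag, pdl_degree = 1000, 30, 3
pet = 15 + 10 * np.sin(2 * np.pi * np.arange(T) / 365) + rng.normal(0, 3, T)
deaths = rng.poisson(8, T)                         # placeholder mortality counts

# Lag matrix: column j holds the exposure j days earlier.
L = np.column_stack([np.roll(pet, j) for j in range(max_lag + 1)])[max_lag:]
y = deaths[max_lag:]

# Polynomial distributed-lag constraint: lag weights follow a degree-3 polynomial in lag.
P = np.vander(np.arange(max_lag + 1), pdl_degree + 1, increasing=True)
X = sm.add_constant(L @ P)

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
lag_effects = P @ fit.params[1:]                   # implied log-rate coefficient at each lag
print("fitted coefficients:", np.round(fit.params, 4))
print("lag-specific effects:", np.round(lag_effects, 4))
```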
A global study of type B quasi-periodic oscillation in black hole X-ray binaries
NASA Astrophysics Data System (ADS)
Gao, H. Q.; Zhang, Liang; Chen, Yupeng; Zhang, Zhen; Chen, Li; Zhang, Shuang-Nan; Zhang, Shu; Ma, Xiang; Li, Zi-Jian; Bu, Qing-Cui; Qu, JinLu
2017-04-01
We performed a global study of the timing and spectral properties of type-B quasi-periodic oscillations (QPOs) in the outbursts of black hole X-ray binaries. The sample is built from observations of the Rossi X-ray Timing Explorer (RXTE), by searching the literature of the RXTE era for all identified type-B QPOs. To enlarge the sample, we also investigated some type-B QPOs that have been reported but not yet fully identified. Regarding the time lag and hard/soft flux ratio, we found that the sources with type-B QPOs fall into two subgroups. In one subgroup, the type-B QPO shows a hard time lag that first decreases and then reverses into a soft time lag as the energy spectrum softens. In the other subgroup, type-B QPOs are distributed only in a small region with a hard time lag and relatively soft hardness. These findings may be understood in terms of differences in the homogeneity of the hot inner flow among sources. We confirm the universality of a positive relation between the type-B QPO frequency and the hard-component luminosity in different sources. We explain the results by considering that the type-B QPO photons are produced in the inner accretion flow around the central black hole, under a local Eddington limit. Using this relationship, we derived a mass estimate of 9.3-27.1 M⊙ for the black hole in H 1743-322.
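A frequency-domain time lag of the kind discussed above is conventionally estimated from the phase of the cross spectrum between two energy bands. The sketch below does this for two simulated light curves with an injected 20 ms lag; it is not the RXTE analysis, and the sampling, QPO frequency, and noise level are placeholders.

```python
# Minimal cross-spectral lag estimate between two simulated light curves:
# the phase of soft * conj(hard) divided by 2*pi*f gives the hard-band lag.
import numpy as np

dt, n = 0.01, 4000                       # 10 ms bins, 40 s segment (placeholders)
t = np.arange(n) * dt
true_lag = 0.02                          # 20 ms hard lag injected
rng = np.random.default_rng(1)
soft = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.normal(size=n)
hard = np.sin(2 * np.pi * 5.0 * (t - true_lag)) + 0.5 * rng.normal(size=n)

freqs = np.fft.rfftfreq(n, dt)
cross = np.fft.rfft(soft) * np.conj(np.fft.rfft(hard))
i = np.argmin(np.abs(freqs - 5.0))       # bin at the "type-B QPO" frequency
lag = np.angle(cross[i]) / (2 * np.pi * freqs[i])
print(f"recovered lag at {freqs[i]:.2f} Hz: {lag*1e3:.1f} ms (injected 20 ms)")
```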
ERIC Educational Resources Information Center
Schweizer, Karl
2006-01-01
A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…
2015-03-26
[List-of-figures excerpt: CSE implementation for use with CV Domes data; validation results for N = 1 observation at 1.0 and 0.01 intervals, Legendre polynomial of order Nl = 5.]
Some Curious Properties and Loci Problems Associated with Cubics and Other Polynomials
ERIC Educational Resources Information Center
de Alwis, Amal
2012-01-01
The article begins with a well-known property regarding tangent lines to a cubic polynomial that has distinct, real zeros. We were then able to generalize this property to any polynomial with distinct, real zeros. We also considered a certain family of cubics with two fixed zeros and one variable zero, and explored the loci of centroids of…
Generalized clustering conditions of Jack polynomials at negative Jack parameter {alpha}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernevig, B. Andrei; Department of Physics, Princeton University, Princeton, New Jersey 08544; Haldane, F. D. M.
We present several conjectures on the behavior and clustering properties of Jack polynomials at a negative parameter α = -(k+1)/(r-1), with partitions that violate the (k, r, N)-admissibility rule of Feigin et al. [Int. Math. Res. Notices 23, 1223 (2002)]. We find that the 'highest weight' Jack polynomials of specific partitions represent the minimum-degree polynomials in N variables that vanish when s distinct clusters of k+1 particles are formed, where s and k are positive integers. Explicit counting formulas are conjectured. The generalized clustering conditions are useful in a forthcoming description of fractional quantum Hall quasiparticles.
NASA Astrophysics Data System (ADS)
Karthiga, S.; Chithiika Ruby, V.; Senthilvelan, M.; Lakshmanan, M.
2017-10-01
In position dependent mass (PDM) problems, the quantum dynamics of the associated systems have been understood well in the literature for particular orderings. However, no efforts seem to have been made to solve such PDM problems for general orderings to obtain a global picture. In this connection, we here consider the general ordered quantum Hamiltonian of an interesting position dependent mass problem, namely the Mathews-Lakshmanan oscillator, and try to solve the quantum problem for all possible orderings, including Hermitian and non-Hermitian ones. The other interesting point in our study is that, for all possible orderings, although the Schrödinger equation of this Mathews-Lakshmanan oscillator reduces uniquely to the associated Legendre differential equation, the eigenfunctions cannot be represented in terms of associated Legendre polynomials with integral degree and order. Rather, the eigenfunctions are represented in terms of associated Legendre polynomials with non-integral degree and order. We here explore such polynomials and represent the discrete and continuum states of the system. We also exploit the connection between associated Legendre polynomials with non-integral degree and other orthogonal polynomials such as the Jacobi and Gegenbauer polynomials.
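Associated Legendre functions of non-integral degree, the building block referred to above, can be evaluated numerically with SciPy. The short sketch below only demonstrates that evaluation (with an arbitrary degree ν = 2.5); it is not the eigenfunction construction of the oscillator itself.

```python
# Evaluate associated Legendre functions P_nu^m(x) with non-integral degree nu
# and check the integral-degree case against the familiar P_2^1.
import numpy as np
from scipy.special import lpmv

x = np.linspace(-0.99, 0.99, 5)
m, nu = 1, 2.5                      # integer order m, non-integral degree nu
print("P_nu^m(x), nu = 2.5, m = 1:", lpmv(m, nu, x))

# Integral degree reduces to the associated Legendre polynomial P_2^1
# (with the Condon-Shortley phase): P_2^1(x) = -3*x*sqrt(1-x^2).
print("P_2^1(x) via lpmv:  ", lpmv(1, 2.0, x))
print("-3*x*sqrt(1-x^2):   ", -3 * x * np.sqrt(1 - x**2))
```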
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit, whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that, in its best variant, are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián
2013-01-01
In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and from the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
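The model-selection step can be sketched on simulated antibody-decay data. The example below, which is not the HPV-16/18 trial data and which replaces the paper's mixed models with plain least squares, fits a first-order fractional polynomial log(titre) = b0 + b1 * t^p over the conventional power set and compares AIC values; the power-law model corresponds to the p = 0 (log t) member.

```python
# First-order fractional polynomial fit on simulated antibody-decay data,
# with AIC comparison across the conventional power set (illustration only).
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(1, 84, 60)                                   # months since vaccination
log_titre = 8.0 - 1.2 * t**0.5 + rng.normal(0, 0.3, t.size)  # synthetic decay

def fp1_aic(power):
    z = np.log(t) if power == 0 else t**float(power)
    X = np.column_stack([np.ones_like(t), z])
    beta, *_ = np.linalg.lstsq(X, log_titre, rcond=None)
    rss = float(np.sum((log_titre - X @ beta)**2))
    return t.size * np.log(rss / t.size) + 2 * (X.shape[1] + 1)

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]
aics = {p: fp1_aic(p) for p in powers}
best = min(aics, key=aics.get)
print("AIC by power:", {p: round(a, 1) for p, a in aics.items()})
print("best FP1 power:", best, "| power-law (p = 0) AIC:", round(aics[0], 1))
```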
NASA Astrophysics Data System (ADS)
Soare, S.; Yoon, J. W.; Cazacu, O.
2007-05-01
With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e. both cubic and hexagonal, which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090-T3. We prove that a sixth-order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090-T3 cup profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.
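Convexity of a candidate polynomial yield function can at least be spot-checked numerically. The sketch below does so for a toy fourth-order homogeneous polynomial in plane-stress components, not the calibrated AA2090-T3 sixth-order criterion, by verifying that finite-difference Hessians are positive semidefinite at random stress states.

```python
# Numerical convexity spot-check for a toy homogeneous polynomial yield function.
import numpy as np

def f(s):                      # s = (s_xx, s_yy, s_xy), toy orthotropic quartic
    sx, sy, sxy = s
    return sx**4 + 1.2*sy**4 + 0.8*(sx*sy)**2 + 2.5*sxy**4 + 0.5*sx**2*sxy**2

def hessian(f, s, h=1e-4):
    s = np.asarray(s, float)
    n = s.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i]*h, np.eye(n)[j]*h
            H[i, j] = (f(s+ei+ej) - f(s+ei-ej) - f(s-ei+ej) + f(s-ei-ej)) / (4*h*h)
    return H

rng = np.random.default_rng(3)
ok = all(np.linalg.eigvalsh(hessian(f, rng.normal(size=3))).min() >= -1e-6
         for _ in range(1000))
print("Hessian PSD at all sampled stress states:", ok)
```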
Li, Kevin; Vandermeer, John H; Perfecto, Ivette
2016-05-01
Spatial patterns in ecology can be described as reflective of environmental heterogeneity (exogenous), or emergent from dynamic relationships between interacting species (endogenous), but few empirical studies focus on the combination. The spatial distribution of the nests of Azteca sericeasur, a keystone tropical arboreal ant, is thought to form endogenous spatial patterns among the shade trees of a coffee plantation through self-regulating interactions with controlling agents (i.e. natural enemies). Using inhomogeneous point process models, we found evidence for both types of processes in the spatial distribution of A. sericeasur. Each year's nest distribution was determined mainly by a density-dependent relationship with the previous year's lagged nest density; but using a novel application of a Thomas cluster process to account for the effects of nest clustering, we found that nest distribution also correlated significantly with tree density in the later years of the study. This coincided with the initiation of agricultural intensification and tree felling on the coffee farm. The emergence of this significant exogenous effect, along with the changing character of the density-dependent effect of lagged nest density, provides clues to the mechanism behind a unique phenomenon observed in the plot, that of an increase in nest population despite resource limitation in nest sites. Our results have implications in coffee agroecological management, as this system provides important biocontrol ecosystem services. Further research is needed, however, to understand the effective scales at which these relationships occur.
Network Reliability: The effect of local network structure on diffusive processes
Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen
2014-01-01
This paper re-introduces the network reliability polynomial – introduced by Moore and Shannon in 1956 – for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well-suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable. PMID:24329321
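Reliability values of this kind are naturally estimated by simulation. The sketch below is a minimal Monte Carlo example for one simple reliability rule (at least half the nodes remain in a single connected component when edges are retained independently with probability p) on an Erdős-Rényi graph; it is not the authors' exact rule, estimator, or manipulated graph ensemble.

```python
# Monte Carlo estimate of a simple connectivity-based reliability rule.
import random
import networkx as nx

def reliability_estimate(G, p, threshold=0.5, trials=2000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        H.add_edges_from(e for e in G.edges if rng.random() < p)
        largest = max(len(c) for c in nx.connected_components(H))
        hits += largest >= threshold * G.number_of_nodes()
    return hits / trials

G = nx.gnp_random_graph(100, 0.05, seed=1)
for p in (0.2, 0.4, 0.6, 0.8):
    print(f"edge retention p = {p:.1f}: R ≈ {reliability_estimate(G, p):.3f}")
```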