Sample records for nonparametric cointegration analysis

  1. Towards homoscedastic nonlinear cointegration for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zolna, Konrad; Dao, Phong B.; Staszewski, Wieslaw J.; Barszcz, Tomasz

    2016-06-01

The paper presents a homoscedastic nonlinear cointegration method that leads to stable variances in nonlinear cointegration residuals. An adapted Breusch-Pagan test procedure is developed to test for the presence of heteroscedasticity (or homoscedasticity) in the cointegration residuals obtained from the nonlinear cointegration analysis. Three different time series - i.e. one with a nonlinear quadratic deterministic trend, simulated vibration data and experimental wind turbine data - are used to illustrate the application of the proposed method. The proposed approach can be used for effective removal of nonlinear trends from various types of data and for reliable structural damage detection based on data that are corrupted by environmental and/or operational nonlinear trends.

  2. Education and Economic Growth in Pakistan: A Cointegration and Causality Analysis

    ERIC Educational Resources Information Center

    Afzal, Muhammad; Rehman, Hafeez Ur; Farooq, Muhammad Shahid; Sarwar, Kafeel

    2011-01-01

This study explored the cointegration and causality between education and economic growth in Pakistan using time series data on real gross domestic product (RGDP), labour force, physical capital and education from 1970-1971 to 2008-2009. Autoregressive Distributed Lag (ARDL) Model of Cointegration and the Augmented Granger Causality…

  3. Approaches to nonlinear cointegration with a view towards applications in SHM

    NASA Astrophysics Data System (ADS)

    Cross, E. J.; Worden, K.

    2011-07-01

One of the major problems confronting the application of Structural Health Monitoring (SHM) to real structures is that of divorcing the effects of environmental changes from those imposed by damage. A recent development in this area is the import of the technique of cointegration from the field of econometrics. While cointegration is a mature technology within economics, its development has been largely concerned with linear time-series analysis and this places a severe constraint on its application - particularly in the new context of SHM where damage can often make a given structure nonlinear. The objective of the current paper is to introduce two possible approaches to nonlinear cointegration: the first is an optimisation-based method; the second is a variation of the established Johansen procedure based on the use of an augmented basis. Finally, the ideas of nonlinear cointegration will be explored through application to real SHM data from the benchmark project on the Z24 Highway Bridge.

  4. Cointegration analysis and influence rank—A network approach to global stock markets

    NASA Astrophysics Data System (ADS)

    Yang, Chunxia; Chen, Yanhua; Niu, Lei; Li, Qian

    2014-04-01

In this paper, cointegration relationships among 26 global stock market indices over the sub-prime and European debt crisis periods, and their influence rank, are investigated by constructing and analyzing directed and weighted cointegration networks. The results show that the crises changed the cointegration relationships among stock market indices: cointegration increased after the Lehman Brothers collapse, while the degree of cointegration gradually decreased from the sub-prime to the European debt crisis. The influence of the US, Japan and China market indices is clearly distinguished across the different periods. Before the European debt crisis, the US stock market was a ‘global factor’ leading the developed and emerging markets, while its influence decreased evidently during the European debt crisis. Before the sub-prime crisis there is no significant evidence that other stock markets co-moved with the China stock market, which became more integrated with other markets during the sub-prime and European debt crises. Among developed and emerging stock markets, the developed stock markets led the world stock markets before the European debt crisis; owing to the shocks of the sub-prime and European debt crises, their influence decreased and emerging stock markets replaced them in leading global stock markets.

  5. Cointegration of output, capital, labor, and energy

    NASA Astrophysics Data System (ADS)

Stresing, R.; Lindenberger, D.; Kümmel, R.

    2008-11-01

Cointegration analysis is applied to the linear combinations of the time series of (the logarithms of) output, capital, labor, and energy for Germany, Japan, and the USA since 1960. The computed cointegration vectors represent the output elasticities of the aggregate energy-dependent Cobb-Douglas function. The output elasticities give the economic weights of the production factors capital, labor, and energy. We find that, for labor, they are much smaller and, for energy, much larger than the cost shares of these factors. In standard economic theory output elasticities equal cost shares. Our heterodox findings support results obtained with LINEX production functions.

  6. Modelling cointegration and Granger causality network to detect long-term equilibrium and diffusion paths in the financial system.

    PubMed

    Gao, Xiangyun; Huang, Shupei; Sun, Xiaoqi; Hao, Xiaoqing; An, Feng

    2018-03-01

Microscopic factors are the basis of macroscopic phenomena. We proposed a network analysis paradigm to study the macroscopic financial system from a microstructure perspective. We built the cointegration network model and the Granger causality network model based on econometrics and complex network theory and chose stock price time series of the real estate industry and its upstream and downstream industries as empirical sample data. Then, we analysed the cointegration network to understand the steady long-term equilibrium relationships and analysed the Granger causality network to identify the diffusion paths of potential risks in the system. The results showed that the influence of a few key stocks can spread readily through the system. The cointegration network and Granger causality network are helpful for detecting the diffusion paths between industries. We can also identify and intervene in the transmission medium to curb risk diffusion.

  7. Modelling cointegration and Granger causality network to detect long-term equilibrium and diffusion paths in the financial system

    PubMed Central

    Huang, Shupei; Sun, Xiaoqi; Hao, Xiaoqing; An, Feng

    2018-01-01

Microscopic factors are the basis of macroscopic phenomena. We proposed a network analysis paradigm to study the macroscopic financial system from a microstructure perspective. We built the cointegration network model and the Granger causality network model based on econometrics and complex network theory and chose stock price time series of the real estate industry and its upstream and downstream industries as empirical sample data. Then, we analysed the cointegration network to understand the steady long-term equilibrium relationships and analysed the Granger causality network to identify the diffusion paths of potential risks in the system. The results showed that the influence of a few key stocks can spread readily through the system. The cointegration network and Granger causality network are helpful for detecting the diffusion paths between industries. We can also identify and intervene in the transmission medium to curb risk diffusion. PMID:29657804

  8. Cigarette taxes and respiratory cancers: new evidence from panel co-integration analysis.

    PubMed

    Liu, Echu; Yu, Wei-Choun; Hsieh, Hsin-Ling

    2011-01-01

    Using a set of state-level longitudinal data from 1954 through 2005, this study investigates the "long-run equilibrium" relationship between cigarette excise taxes and the mortality rates of respiratory cancers in the United States. Statistical tests show that both cigarette excise taxes in real terms and mortality rates from respiratory cancers contain unit roots and are co-integrated. Estimates of co-integrating vectors indicated that a 10 percent increase in real cigarette excise tax rate leads to a 2.5 percent reduction in respiratory cancer mortality rate, implying a decline of 3,922 deaths per year, on a national level in the long run. These effects are statistically significant at the one percent level. Moreover, estimates of co-integrating vectors show that higher cigarette excise tax rates lead to lower mortality rates in most states; however, this relationship does not hold for Alaska, Florida, Hawaii, and Texas.

  9. Higher Education and Unemployment: A Cointegration and Causality Analysis of the Case of Turkey

    ERIC Educational Resources Information Center

    Erdem, Ekrem; Tugcu, Can Tansel

    2012-01-01

This article analyses the short- and long-term relations between higher education and unemployment in Turkey for the period 1960-2007. It employs the recently developed ARDL cointegration method and the Granger causality method of Dolado and Lütkepohl (1996). While the proxy of unemployment is total unemployment rate, higher education graduates were…

  10. Cointegration and Nonstationarity in the Context of Multiresolution Analysis

    NASA Astrophysics Data System (ADS)

    Worden, K.; Cross, E. J.; Kyprianou, A.

    2011-07-01

    Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.

  11. A dynamic analysis of S&P 500, FTSE 100 and EURO STOXX 50 indices under different exchange rates.

    PubMed

    Chen, Yanhua; Mantegna, Rosario N; Pantelous, Athanasios A; Zuev, Konstantin M

    2018-01-01

In this study, we assess the dynamic evolution of short-term correlation, long-term cointegration and Error Correction Model (hereafter referred to as ECM)-based long-term Granger causality between each pair of US, UK, and Eurozone stock markets from 1980 to 2015 using the rolling-window technique. A comparative analysis of pairwise dynamic integration and causality of stock markets, measured in common and domestic currency terms, is conducted to evaluate comprehensively how exchange rate fluctuations affect the time-varying integration among the S&P 500, FTSE 100 and EURO STOXX 50 indices. The results obtained show that the dynamic correlation, cointegration and ECM-based long-run Granger causality vary significantly over the whole sample period. The degree of dynamic correlation and cointegration between pairs of stock markets rises in periods of high volatility and uncertainty, especially under the influence of economic, financial and political shocks. Meanwhile, we observe weaker and decreasing correlation and cointegration among the three developed stock markets during the recovery periods. Interestingly, the most persistent and significant cointegration among the three developed stock markets exists during the 2007-09 global financial crisis. Finally, exchange rate fluctuations also influence the dynamic integration and causality between all pairs of stock indices, with that influence increasing in local currency terms. Our results suggest that the potential for diversifying risk by investing in the US, UK and Eurozone stock markets is limited during periods of economic, financial and political shocks.

  12. A dynamic analysis of S&P 500, FTSE 100 and EURO STOXX 50 indices under different exchange rates

    PubMed Central

    Chen, Yanhua; Mantegna, Rosario N.; Zuev, Konstantin M.

    2018-01-01

In this study, we assess the dynamic evolution of short-term correlation, long-term cointegration and Error Correction Model (hereafter referred to as ECM)-based long-term Granger causality between each pair of US, UK, and Eurozone stock markets from 1980 to 2015 using the rolling-window technique. A comparative analysis of pairwise dynamic integration and causality of stock markets, measured in common and domestic currency terms, is conducted to evaluate comprehensively how exchange rate fluctuations affect the time-varying integration among the S&P 500, FTSE 100 and EURO STOXX 50 indices. The results obtained show that the dynamic correlation, cointegration and ECM-based long-run Granger causality vary significantly over the whole sample period. The degree of dynamic correlation and cointegration between pairs of stock markets rises in periods of high volatility and uncertainty, especially under the influence of economic, financial and political shocks. Meanwhile, we observe weaker and decreasing correlation and cointegration among the three developed stock markets during the recovery periods. Interestingly, the most persistent and significant cointegration among the three developed stock markets exists during the 2007–09 global financial crisis. Finally, exchange rate fluctuations also influence the dynamic integration and causality between all pairs of stock indices, with that influence increasing in local currency terms. Our results suggest that the potential for diversifying risk by investing in the US, UK and Eurozone stock markets is limited during periods of economic, financial and political shocks. PMID:29529092

  13. Affordability of alcohol as a key driver of alcohol demand in New Zealand: a co-integration analysis.

    PubMed

    Wall, Martin; Casswell, Sally

    2013-01-01

To investigate whether affordability of alcohol is an important determinant of alcohol consumption along with price. This will inform effective tax policy to influence consumption. Co-integration analysis was used to analyse the relationship between real price, affordability and consumption. Changes in retail availability of wine in 1990 and beer in 1999 were also included in the models. The econometric approach taken allows identification of short- and long-term responses. Separate analyses were performed for wine, beer, spirits and ready-to-drinks (spirits-based pre-mixed drinks). New Zealand, 1988-2011. Quarterly data on price and alcohol available for consumption for wine, beer, spirits and ready-to-drinks. Price data were analysed as: real price (own price of alcohol relative to the price of other goods) and affordability (average earnings relative to own price). There was strong evidence for co-integration between wine and beer consumption and affordability. There was weaker evidence for co-integration between consumption and real price. The affordability of alcohol is more important than real price in determining consumption of alcohol. This suggests that affordability needs to be considered by policy makers when determining tax and pricing policies to reduce alcohol-related harm. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.

  14. Are international securitized property markets converging or diverging?

    NASA Astrophysics Data System (ADS)

    Hui, Eddie C. M.; Chen, Jia; Chan, Ka Kwan Kevin

    2016-03-01

This study establishes a new framework which combines the recursive model with the Fractionally Integrated Vector Error Correction Model (FIVECM) to investigate the cointegration relationship among 9 securitized real estate indices, divided into three groups: Asian, European and North American. Our new combined framework has the advantage of reflecting the changes in cointegration dynamics over time instead of giving a single result for the whole period. The results show that the three groups of markets follow a similar cointegration trend: the cointegration relationship gradually increases before the global financial crisis, reaches a peak during the crisis, and dies down gradually after the crisis. However, cointegration among Asian and European countries occurs much later than cointegration among North American countries, showing that North America is the source of cointegration, while Asia and Europe are the recipients. This study has important implications for investors and related authorities: investors can adjust their portfolios according to the test results to reduce their risk, while related authorities can take appropriate measures to stabilize the economy and mitigate the effects of financial crises.

  15. Cointegration analysis for rice production in the states of Perlis and Johor, Malaysia

    NASA Astrophysics Data System (ADS)

    Shitan, Mahendran; Ng, Yung Lerd; Karmokar, Provash Kumar

    2015-02-01

Rice is ranked the third most important crop in Malaysia after rubber and palm oil in terms of production. Unlike the industrial crops, although its contribution to Malaysia's economy is minimal, it plays a pivotal role in the country's food security, as rice is consumed by almost everyone in Malaysia. Rice production is influenced by factors such as geographical location, temperature, rainfall, soil fertility and farming practices, and hence the productivity of rice may differ from state to state. In this study, our particular interest is to investigate the interrelationship between the rice production of Perlis and Johor. Data collected from the Department of Agriculture, Government of Malaysia are tested for unit roots by the Augmented Dickey-Fuller (ADF) unit root test, while the Engle-Granger (EG) procedure is used in the cointegration analysis. Our study shows that a cointegrating relationship exists between rice production in the two states. The speed-of-adjustment coefficient of the error correction model (ECM) for Perlis is 0.611, indicating that approximately 61.1% of any deviation from the long-run path is corrected within a year by the production of rice in Johor.

  16. A regime-switching cointegration approach for removing environmental and operational variations in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Shi, Haichen; Worden, Keith; Cross, Elizabeth J.

    2018-03-01

Cointegration is now extensively used to model the long-term common trends among economic variables in the field of econometrics. Recently, cointegration has been successfully implemented in the context of structural health monitoring (SHM), where it has been used to remove the confounding influences of environmental and operational variations (EOVs) that can often mask the signature of structural damage. However, restrained by its linear nature, the conventional cointegration approach has limited power in modelling systems where measurands are nonlinearly related; this occurs, for example, in the benchmark study of the Z24 Bridge, where nonlinear relationships between natural frequencies were induced during a period of very cold temperatures. To allow the removal of EOVs from SHM data with nonlinear relationships like this, this paper extends the well-established cointegration method to a nonlinear context by allowing a breakpoint in the cointegrating vector. In a novel approach, the augmented Dickey-Fuller (ADF) statistic is used to find the most appropriate position for inserting a breakpoint; the Johansen procedure is then utilised for the estimation of cointegrating vectors. The proposed approach is examined with a simulated case and real SHM data from the Z24 Bridge, demonstrating that the EOVs can be neatly eliminated.

  17. Health Care Expenditure and GDP in Oil Exporting Countries: Evidence From OPEC Data, 1995-2012.

    PubMed

    Fazaeli, Ali Akbar; Ghaderi, Hossein; Salehi, Masoud; Fazaeli, Ali Reza

    2015-06-11

There is a large body of literature examining income in relation to health expenditures. The share of health-sector expenditures in GDP is often larger in developed countries than in non-developed countries, suggesting that as the level of economic growth increases, health spending increases too. This paper estimates long-run relationships between health expenditures and GDP based on panel data for a sample of 12 countries of the Organization of the Petroleum Exporting Countries (OPEC), using data for the period 1995-2012. We use panel data unit root tests, cointegration analysis and an ECM model to find the long-run and short-run relations. This study examines whether health is a luxury or a necessity for OPEC countries within a unit root and cointegration framework. Panel data analysis indicates that health expenditures and GDP are co-integrated and exhibit Engle-Granger causality. In addition, in countries with oil export income, the share of government expenditures in the health sector is often greater than that of private health expenditures, similar to developed countries. The findings verify that health care is not a luxury good and that income has a robust relationship to health expenditures in OPEC countries.

  18. Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test

    PubMed Central

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives the following single-model selection guidance: no more than five suitable single models should be selected from the many alternative single models for a given forecasting target, which increases accuracy and stability. PMID:24892061

  19. Selecting single model in combination forecasting based on cointegration test and encompassing test.

    PubMed

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives the following single-model selection guidance: no more than five suitable single models should be selected from the many alternative single models for a given forecasting target, which increases accuracy and stability.

  20. Micromachined Thin-Film Sensors for SOI-CMOS Co-Integration

    NASA Astrophysics Data System (ADS)

    Laconte, Jean; Flandre, D.; Raskin, Jean-Pierre

Co-integration of sensors with their associated electronics on a single silicon chip may provide many significant benefits regarding performance, reliability, miniaturization and process simplicity without significantly increasing the total cost. Micromachined Thin-Film Sensors for SOI-CMOS Co-integration covers the challenges and benefits, and demonstrates the successful co-integration of gas flow sensors on a dielectric membrane, with their associated electronics, in CMOS-SOI technology. We first investigate the extraction of residual stress in thin layers and in their stacking, and the release, in post-processing, of a 1 μm-thick robust and flat dielectric multilayered membrane using a Tetramethyl Ammonium Hydroxide (TMAH) silicon micromachining solution.

  1. Health Care Expenditure and GDP in Oil Exporting Countries: Evidence from OPEC Data, 1995-2012

    PubMed Central

    Fazaeli, Ali Akbar; Ghaderi, Hossein; Salehi, Masoud; Fazaeli, Ali Reza

    2016-01-01

Background: There is a large body of literature examining income in relation to health expenditures. The share of health-sector expenditures in GDP is often larger in developed countries than in non-developed countries, suggesting that as the level of economic growth increases, health spending increases too. Objectives: This paper estimates long-run relationships between health expenditures and GDP based on panel data for a sample of 12 countries of the Organization of the Petroleum Exporting Countries (OPEC), using data for the period 1995-2012. Patients & Methods: We use panel data unit root tests, cointegration analysis and an ECM model to find the long-run and short-run relations. This study examines whether health is a luxury or a necessity for OPEC countries within a unit root and cointegration framework. Results: Panel data analysis indicates that health expenditures and GDP are co-integrated and exhibit Engle-Granger causality. In addition, in countries with oil export income, the share of government expenditures in the health sector is often greater than that of private health expenditures, similar to developed countries. Conclusions: The findings verify that health care is not a luxury good and that income has a robust relationship to health expenditures in OPEC countries. PMID:26383195

  2. A nonlinear cointegration approach with applications to structural health monitoring

    NASA Astrophysics Data System (ADS)

    Shi, H.; Worden, K.; Cross, E. J.

    2016-09-01

One major obstacle to the implementation of structural health monitoring (SHM) is the effect of operational and environmental variabilities, which may corrupt the signal of structural degradation. Recently, an approach inspired by the field of econometrics, called cointegration, has been employed to eliminate the adverse influence of operational and environmental changes while maintaining sensitivity to structural damage. However, the linear nature of cointegration may limit its application when confronting nonlinear relations between system responses. This paper proposes a nonlinear cointegration method based on Gaussian process regression (GPR); the method is constructed under the Engle-Granger framework, and tests for unit root processes are conducted both before and after the GPR is applied. The proposed approach is examined with real engineering data from the monitoring of the Z24 Bridge.

  3. Causality and cointegration analysis between macroeconomic variables and the Bovespa.

    PubMed

    da Silva, Fabiano Mello; Coronel, Daniel Arruda; Vieira, Kelmara Mendes

    2014-01-01

The aim of this study is to analyze the causality relationship among a set of macroeconomic variables, represented by the exchange rate, interest rate, inflation (CPI), and the industrial production index as a proxy for gross domestic product, in relation to the index of the São Paulo Stock Exchange (Bovespa). The period of analysis corresponded to the months from January 1995 to December 2010, making a total of 192 observations for each variable. Johansen tests, through the statistics of the trace and of the maximum eigenvalue, indicated the existence of at least one cointegration vector. In the analysis of Granger (1988) causality tests via error correction, it was found that a short-term causality existed between the CPI and the Bovespa. Regarding the Granger (1988) long-term causality, the results indicated a long-term behaviour among the macroeconomic variables with the Bovespa. The results of the long-term normalized vector for the Bovespa variable showed that most signs of the cointegration equation parameters are in accordance with what is suggested by economic theory. In other words, there was a positive behaviour of the GDP and a negative behaviour of the inflation and of the exchange rate (expected to be a positive relationship) in relation to the Bovespa, with the exception of the Selic rate, which was not significant for that index. Over 90% of the variance of the Bovespa was explained by itself at the twelfth month, followed by the country risk, with less than 5%.

  4. Causality and Cointegration Analysis between Macroeconomic Variables and the Bovespa

    PubMed Central

    da Silva, Fabiano Mello; Coronel, Daniel Arruda; Vieira, Kelmara Mendes

    2014-01-01

The aim of this study is to analyze the causality relationship among a set of macroeconomic variables, represented by the exchange rate, interest rate, inflation (CPI), and the industrial production index as a proxy for gross domestic product, in relation to the index of the São Paulo Stock Exchange (Bovespa). The period of analysis corresponded to the months from January 1995 to December 2010, making a total of 192 observations for each variable. Johansen tests, through the statistics of the trace and of the maximum eigenvalue, indicated the existence of at least one cointegration vector. In the analysis of Granger (1988) causality tests via error correction, it was found that a short-term causality existed between the CPI and the Bovespa. Regarding the Granger (1988) long-term causality, the results indicated a long-term behaviour among the macroeconomic variables with the Bovespa. The results of the long-term normalized vector for the Bovespa variable showed that most signs of the cointegration equation parameters are in accordance with what is suggested by economic theory. In other words, there was a positive behaviour of the GDP and a negative behaviour of the inflation and of the exchange rate (expected to be a positive relationship) in relation to the Bovespa, with the exception of the Selic rate, which was not significant for that index. Over 90% of the variance of the Bovespa was explained by itself at the twelfth month, followed by the country risk, with less than 5%. PMID:24587019

  5. FBST for Cointegration Problems

    NASA Astrophysics Data System (ADS)

    Diniz, M.; Pereira, C. A. B.; Stern, J. M.

    2008-11-01

In order to estimate causal relations, time-series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To solve the problem, one can work with differenced series or use multivariate models like VAR or VEC models. In this case, the analysed series will present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature about inference on VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection in multivariate (VAR/VEC) time series models and shows how to implement it using data sets available in the literature as well as simulated data sets. A standard non-informative prior is assumed.

  6. Cointegration as a data normalization tool for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Todd, Michael D.

    2012-04-01

    The structural health monitoring literature has shown an abundance of features sensitive to various types of damage in laboratory tests. However, robust feature extraction in the presence of varying operational and environmental conditions has proven to be one of the largest obstacles in the development of practical structural health monitoring systems. Cointegration, a technique adapted from the field of econometrics, has recently been introduced to the SHM field as one solution to the data normalization problem. Response measurements and feature histories often show long-run nonstationarity, due to fluctuating temperature, load conditions, or other factors, that leads to the occurrence of false positives. Cointegration theory allows nonstationary trends common to two or more time series to be modeled and subsequently removed. Thus, the residual retains sensitivity to damage while its dependence on operational and environmental variability is removed. This study further explores the use of cointegration as a data normalization tool for structural health monitoring applications.

  7. Comparison between goal programming and cointegration approaches in enhanced index tracking

    NASA Astrophysics Data System (ADS)

    Lam, Weng Siew; Jamaan, Saiful Hafizah Hj.

    2013-04-01

    Index tracking is a popular form of passive fund management in the stock market. Passive management is a buy-and-hold strategy that aims to achieve a rate of return similar to the market return. The index tracking problem is that of reproducing the performance of a stock market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that minimizes risk or tracking error. Enhanced index tracking is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the tracking error; it aims to generate excess return over that achieved by the index. The objective of this study is to compare the portfolio compositions and performances of two different approaches to the enhanced index tracking problem, goal programming and cointegration. The results show that the optimal portfolios for both approaches are able to outperform the Malaysian market index, the Kuala Lumpur Composite Index. The two approaches give different optimal portfolio compositions, and the cointegration approach outperforms the goal programming approach because it gives a higher mean return and lower risk or tracking error. The cointegration approach is therefore more appropriate for investors in Malaysia.

  8. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    PubMed

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice between parametric and nonparametric analysis, especially with non-normal data. Some methodologists have questioned the validity of parametric tests in these instances and suggested nonparametric tests; others have found nonparametric tests too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small samples. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining the type I error probability under all conditions except Cauchy and extreme variable lognormal distributions; in such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
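
    A minimal sketch of the pooled-resampling idea described above, not the authors' implementation: both groups are redrawn with replacement from the pooled sample, which imposes the null hypothesis of a common distribution. The data and function name are hypothetical:

```python
import random

def pooled_bootstrap_pvalue(a, b, n_boot=2000, seed=1):
    """Two-sided bootstrap test of equal means with pooled resampling:
    both groups are resampled (with replacement) from the pooled data,
    so the resampling distribution obeys the null of a common mean."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= observed:
            hits += 1
    return hits / n_boot

# Tiny illustrative samples (n = 6 per group), not from the paper.
group1 = [4.1, 3.8, 4.4, 4.0, 4.3, 3.9]
group2 = [5.2, 5.0, 5.6, 4.9, 5.3, 5.1]
p = pooled_bootstrap_pvalue(group1, group2)
```

    With such a clear separation between the groups, the resampled mean differences rarely reach the observed one, so the p-value is small despite the tiny sample size.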

  9. Revisiting the emissions-energy-trade nexus: evidence from the newly industrializing countries.

    PubMed

    Ahmed, Khalid; Shahbaz, Muhammad; Kyophilavong, Phouphet

    2016-04-01

    This paper applies Pedroni's panel cointegration approach to explore the causal relationship between trade openness, carbon dioxide emissions, energy consumption, and economic growth for a panel of newly industrialized economies (Brazil, India, China, and South Africa) over the period 1970-2013. Our panel cointegration estimates find the majority of the variables to be cointegrated and confirm a long-run association among them. The Granger causality test indicates bidirectional causality between carbon dioxide emissions and energy consumption, and unidirectional causality running from trade openness to carbon dioxide emissions and energy consumption, and from economic growth to carbon dioxide emissions. The causality results suggest that trade liberalization in newly industrialized economies induces higher energy consumption and carbon dioxide emissions. Furthermore, the causality results are checked using an innovative accounting approach comprising a forecast-error variance decomposition test and impulse response functions. The long-run coefficients are estimated using the fully modified ordinary least squares (FMOLS) method, and the results indicate that trade openness and economic growth reduce carbon dioxide emissions in the long run. The FMOLS results support the existence of the environmental Kuznets curve hypothesis: trade liberalization initially induces carbon dioxide emissions as national output increases, but this impact is offset in the long run by reduced emission levels.

  10. Nonstationary time series analysis of surface water microbial pathogen population dynamics using cointegration methods

    EPA Science Inventory

    Background/Question/Methods Bacterial pathogens in surface water present disease risks to aquatic communities and for human recreational activities. Sources of these pathogens include runoff from urban, suburban, and agricultural point and non-point sources, but hazardous micr...

  11. The co-integration analysis of relationship between urban infrastructure and urbanization - A case of Shanghai

    NASA Astrophysics Data System (ADS)

    Wang, Qianlu

    2017-10-01

    Urban infrastructure and urbanization influence each other, and quantitative analysis of the relationship between them plays a significant role in promoting social development. Based on data on infrastructure and the proportion of urban population in Shanghai from 1988 to 2013, the paper uses the econometric methods of co-integration testing, an error correction model and Granger causality tests to empirically analyze the relationship between Shanghai's infrastructure and urbanization. The results show that: 1) Shanghai's urban infrastructure has a positive effect on the development of urbanization and on narrowing the population gap; 2) when short-term fluctuations deviate from the long-term equilibrium, the system pulls the non-equilibrium state back to equilibrium with an adjustment intensity of 0.342670, and hospital infrastructure is not only an important variable for urban development in the short term but also a leading infrastructure in the process of urbanization in Shanghai; 3) there is Granger causality between road infrastructure and urbanization; there is no Granger causality between water infrastructure and urbanization, while the hospital and school components of social infrastructure have unidirectional Granger causality with urbanization.
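
    The reported adjustment intensity (0.342670) is the coefficient on the lagged equilibrium error in an error correction model. A minimal illustrative sketch of recovering such a coefficient, on synthetic data with an assumed known long-run slope beta (not the Shanghai data or the paper's specification):

```python
import random

def ecm_adjustment(y, x, beta):
    """Estimate alpha in  delta_y_t = alpha * (y_{t-1} - beta * x_{t-1}) + u_t.
    A negative alpha pulls deviations back toward the long-run relation."""
    e = [yi - beta * xi for yi, xi in zip(y, x)]   # equilibrium error
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    lag = e[:-1]
    return sum(l * d for l, d in zip(lag, dy)) / sum(l * l for l in lag)

random.seed(2)
# Simulate y tracking 0.5*x with a known adjustment speed of -0.3.
x, y = [0.0], [0.0]
for _ in range(999):
    x_new = x[-1] + random.gauss(0, 1)                              # random-walk driver
    y_new = y[-1] - 0.3 * (y[-1] - 0.5 * x[-1]) + random.gauss(0, 0.2)
    x.append(x_new)
    y.append(y_new)

alpha = ecm_adjustment(y, x, beta=0.5)   # should recover roughly -0.3
```

    In applied work the long-run slope is itself estimated (e.g. from a first-stage cointegrating regression) rather than assumed, but the adjustment coefficient is read off the same way.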

  12. Cointegration and why it works for SHM

    NASA Astrophysics Data System (ADS)

    Cross, Elizabeth J.; Worden, Keith

    2012-08-01

    One of the most fundamental problems in Structural Health Monitoring (SHM) is that of projecting out operational and environmental variations from measured feature data. The reason for this is that algorithms used for SHM to detect changes in structural condition should not raise alarms if the structure of interest changes because of benign operational or environmental variations. This is sometimes called the data normalisation problem. Many solutions to this problem have been proposed over the years, but a new approach that uses cointegration, a concept from the field of econometrics, appears to provide a very promising solution. The theory of cointegration is mathematically complex and its use is based on the holding of a number of assumptions on the time series to which it is applied. An interesting observation that has emerged from its applications to SHM data is that the approach works very well even though the aforementioned assumptions do not hold in general. The objective of the current paper is to discuss how the cointegration assumptions break down individually in the context of SHM and to explain why this does not invalidate the application of the algorithm.

  13. CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions

    EPA Pesticide Factsheets

    Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.

  14. Nonparametric analysis of Minnesota spruce and aspen tree data and LANDSAT data

    NASA Technical Reports Server (NTRS)

    Scott, D. W.; Jee, R.

    1984-01-01

    The application of nonparametric methods in data-intensive problems faced by NASA is described. The theoretical development of efficient multivariate density estimators and the novel use of color graphics workstations are reviewed. The use of nonparametric density estimates for data representation and for Bayesian classification are described and illustrated. Progress in building a data analysis system in a workstation environment is reviewed and preliminary runs presented.

  15. A historical analysis of natural gas demand

    NASA Astrophysics Data System (ADS)

    Dalbec, Nathan Richard

    This thesis analyzes demand in the US energy market for natural gas, oil, and coal over the period 1918-2013 and examines their price relationships over the period 2007-2013. Diagnostic tests for time series were used: Augmented Dickey-Fuller, Kwiatkowski-Phillips-Schmidt-Shin, Johansen cointegration, Granger causality and weak exogeneity tests. Directed acyclic graphs were used as a complementary test for endogeneity. Due to the varied results in determining endogeneity, a seemingly unrelated regression model was used, which assumes that all right-hand-side variables in the three demand equations are exogenous. A number of factors were significant in determining demand for natural gas, including its own price, lagged demand, a number of structural break dummies, and trend, while oil prices indicated some substitutability with natural gas. An error correction model was used to examine the price relationships. Natural gas price was found not to have a significant cointegrating vector.

  16. A non-stationary panel data investigation of the unemployment-crime relationship.

    PubMed

    Blomquist, Johan; Westerlund, Joakim

    2014-03-01

    Many empirical studies of the economics of crime focus solely on the determinants thereof, and do not consider the dynamic and cross-sectional properties of their data. As a response to this, the current paper offers an in-depth analysis of this issue using data covering 21 Swedish counties from 1975 to 2010. The results suggest that the crimes considered are non-stationary, and that this cannot be attributed to county-specific disparities alone, but that there are also a small number of common stochastic trends to which groups of counties tend to revert. In an attempt to explain these common stochastic trends, we look for a long-run cointegrated relationship between unemployment and crime. Overall, the results do not support cointegration, and suggest that previous findings of a significant unemployment-crime relationship might be spurious. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Is the U.S. shale gas boom having an effect on the European gas market?

    NASA Astrophysics Data System (ADS)

    Yao, Isaac

    This thesis focuses on the impact of the American shale gas boom on the European natural gas market. The study presents different tests in order to analyze the dynamics of natural gas prices in the U.S., U.K. and German natural gas markets. The question of cointegration between these different markets is analyzed using several tests. More specifically, the ADF test checks for the presence of a unit root, while the error correction model test and the Johansen cointegration procedure are applied in order to accept or reject the hypothesis of an integrated market. The results suggest no evidence of cointegration between these markets: there currently is no evidence of an impact of the U.S. shale gas boom on the European market.

  18. [Cointegration test and variance decomposition for the relationship between economy and environment based on material flow analysis in Tangshan City Hebei China].

    PubMed

    2015-12-01

    The material flow account of Tangshan City was established by the material flow analysis (MFA) method to analyze the periodical characteristics of material input and output in the operation of the economy-environment system, and the impact of material input and output intensities on economic development. Using an econometric model, the long-term interaction mechanism and relationship among gross domestic product (GDP), direct material input (DMI) and domestic processed output (DPO) were investigated through unit root hypothesis tests, the Johansen cointegration test, a vector error correction model, impulse response functions and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO fell amid volatility. A long-term stable cointegration relationship existed between GDP, DMI and DPO. Their interaction showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in the short term, but the economy-environment system gradually weakened these effects by dynamically adjusting indicators inside and outside the system in the short run. Ultimately, the system showed a long-term equilibrium relationship, and the effect of economic scale on the economy gradually increased. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's own contribution declined, and DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of a resource-based city, depending mostly on material input, which has caused high energy consumption and serious environmental pollution.

  19. EEG Correlates of Fluctuation in Cognitive Performance in an Air Traffic Control Task

    DTIC Science & Technology

    2014-11-01

    Non-parametric statistical analysis was used to identify neurophysiological patterns due to the time-on-task effect, revealing significant changes in EEG power. Keywords: EEG, Cognitive Performance, Power Spectral Analysis, Non-Parametric Analysis. Document is available to the public through the Internet.

  20. A Causality Analysis of the Link between Higher Education and Economic Development.

    ERIC Educational Resources Information Center

    De Meulemeester, Jean-Luc; Rochat, Denis

    1995-01-01

    Summarizes a study exploring the relationship between higher education and economic development, using cointegration and Granger-causality tests. Results show a significant causality from higher education efforts in Sweden, United Kingdom, Japan, and France. However, a similar causality link has not been found for Italy or Australia. (68…

  1. Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks

    PubMed Central

    Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav

    2017-01-01

    Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880

  2. Temperature rise, sea level rise and increased radiative forcing - an application of cointegration methods

    NASA Astrophysics Data System (ADS)

    Schmith, Torben; Thejll, Peter; Johansen, Søren

    2016-04-01

    We analyse the statistical relationship between changes in global temperature, global steric sea level and radiative forcing in order to reveal causal relationships. There are, however, potential pitfalls here due to the trending nature of the time series. We therefore apply a statistical method called cointegration analysis, originating from the field of econometrics, which can correctly handle the analysis of series with trends and other long-range dependencies. We find a relationship between steric sea level and temperature, with temperature causally depending on the steric sea level, which can be understood as a consequence of the large heat capacity of the ocean. This result is obtained both when analyzing observed data and data from a CMIP5 historical model run. We also find that, in the data from the historical run, the steric sea level is in turn driven by the external forcing. Finally, we demonstrate that combining these two results can lead to a novel estimate of radiative forcing back in time based on observations.

  3. Estimating short-run and long-run interaction mechanisms in interictal state.

    PubMed

    Ozkaya, Ata; Korürek, Mehmet

    2010-04-01

    We address the issue of analyzing electroencephalogram (EEG) from seizure patients in order to test, model and determine the statistical properties that distinguish between EEG states (interictal, pre-ictal, ictal) by introducing a new class of time series analysis methods. In the present study: firstly, we employ statistical methods to determine the non-stationary behavior of focal interictal epileptiform series within very short time intervals; secondly, for such intervals that are deemed non-stationary we suggest the concept of Autoregressive Integrated Moving Average (ARIMA) process modelling, well known in time series analysis. We finally address the queries of causal relationships between epileptic states and between brain areas during epileptiform activity. We estimate the interaction between different EEG series (channels) in short time intervals by performing Granger-causality analysis and also estimate such interaction in long time intervals by employing Cointegration analysis, both analysis methods are well-known in econometrics. Here we find: first, that the causal relationship between neuronal assemblies can be identified according to the duration and the direction of their possible mutual influences; second, that although the estimated bidirectional causality in short time intervals yields that the neuronal ensembles positively affect each other, in long time intervals neither of them is affected (increasing amplitudes) from this relationship. Moreover, Cointegration analysis of the EEG series enables us to identify whether there is a causal link from the interictal state to ictal state.
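
    The Granger-causality analysis mentioned above reduces, for a single lag, to an F-test comparing a restricted autoregression (own past only) with an unrestricted one (own past plus the other series' past). A self-contained illustrative sketch on synthetic data, not the authors' EEG pipeline:

```python
import random

def ols_rss(rows, y):
    """Least squares via normal equations (tiny Gaussian elimination);
    returns the residual sum of squares."""
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = xtx[j][i] / xtx[i][i]
            xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
            xty[j] -= f * xty[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return sum((yi - sum(b * ri for b, ri in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

def granger_f(y, x):
    """Lag-1 Granger test: does x_{t-1} help predict y_t beyond y_{t-1}?"""
    rows_r = [[1.0, y[t - 1]] for t in range(1, len(y))]
    rows_u = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    target = y[1:]
    rss_r, rss_u = ols_rss(rows_r, target), ols_rss(rows_u, target)
    n = len(target)
    return (rss_r - rss_u) / (rss_u / (n - 3))   # F(1, n-3)

random.seed(3)
x, y = [random.gauss(0, 1)], [0.0]
for _ in range(499):
    x.append(random.gauss(0, 1))
    y.append(0.8 * x[-2] + random.gauss(0, 0.5))  # y driven by lagged x

f_xy = granger_f(y, x)   # large: x Granger-causes y
f_yx = granger_f(x, y)   # small: y does not Granger-cause x
```

    The asymmetry of the two F-statistics is what gives the test its directional interpretation; in practice the lag order is chosen by an information criterion rather than fixed at one.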

  4. Long term economic relationships from cointegration maps

    NASA Astrophysics Data System (ADS)

    Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor

    2007-07-01

    We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.

  5. Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures

    NASA Astrophysics Data System (ADS)

    Li, Quanbao; Wei, Fajie; Zhou, Shenghan

    2017-05-01

    Linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used approaches to feature extraction usually require linearity, independence, or large-sample conditions; in real-world applications, however, these assumptions are not always satisfied and cannot always be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA combines the advantages of parametric and nonparametric algorithms and achieves higher classification accuracy, and that a quartic unilateral kernel function may provide better prediction robustness than other functions. LKNDA offers an alternative solution for discriminant cases involving complex nonlinear feature extraction or unknown features. Finally, an application of LKNDA to the complex feature extraction of financial market activities is proposed.

  6. Essays in energy economics: The electricity industry

    NASA Astrophysics Data System (ADS)

    Martinez-Chombo, Eduardo

    Electricity demand analysis using cointegration and error-correction models with time varying parameters: The Mexican case. In this essay we show how some flexibility can be allowed in modeling the parameters of the electricity demand function by employing the time varying coefficient (TVC) cointegrating model developed by Park and Hahn (1999). With the income elasticity of electricity demand modeled as a TVC, we perform tests to examine the adequacy of the proposed model against the cointegrating regression with fixed coefficients, as well as against the spuriousness of the regression with TVC. The results reject the specification of the model with fixed coefficients and favor the proposed model. We also show how some flexibility is gained in the specification of the error correction model based on the proposed TVC cointegrating model, by including more lags of the error correction term as predetermined variables. Finally, we present the results of some out-of-sample forecast comparison among competing models. Electricity demand and supply in Mexico. In this essay we present a simplified model of the Mexican electricity transmission network. We use the model to approximate the marginal cost of supplying electricity to consumers in different locations and at different times of the year. We examine how costs and system operations will be affected by proposed investments in generation and transmission capacity given a forecast of growth in regional electricity demands. Decomposing electricity prices with jumps. In this essay we propose a model that decomposes electricity prices into two independent stochastic processes: one that represents the "normal" pattern of electricity prices and the other that captures temporary shocks, or "jumps", with non-lasting effects in the market. Each contains specific mean reverting parameters to estimate. In order to identify such components we specify a state-space model with regime switching. 
Using Kim's (1994) filtering algorithm we estimate the parameters of the model, the transition probabilities and the unobservable components for the mean adjusted series of New South Wales' electricity prices. Finally, bootstrap simulations were performed to estimate the expected contribution of each of the components in the overall electricity prices.

  7. Using a DEA Management Tool through a Nonparametric Approach: An Examination of Urban-Rural Effects on Thai School Efficiency

    ERIC Educational Resources Information Center

    Kantabutra, Sangchan

    2009-01-01

    This paper examines urban-rural effects on public upper-secondary school efficiency in northern Thailand. In the study, efficiency was measured by a nonparametric technique, data envelopment analysis (DEA). Urban-rural effects were examined through a Mann-Whitney nonparametric statistical test. Results indicate that urban schools appear to have…

  8. The relationship between health and GDP in OECD countries in the very long run.

    PubMed

    Swift, Robyn

    2011-03-01

    This paper uses Johansen multivariate cointegration analysis to examine the relationship between health and GDP for 13 OECD countries over the last two centuries, for periods ranging from 1820-2001 to 1921-2001. A similar long-run cointegrating relationship between life expectancy and both total GDP and GDP per capita was found for all the countries estimated. The relationships have a significant influence on both total GDP and GDP per capita in most of the countries estimated, with a 1% increase in life expectancy resulting on average in a 6% increase in total GDP and a 5% increase in GDP per capita in the long run. Total GDP and GDP per capita also have a significant influence on life expectancy for most countries. There is no evidence of changes in the relationships for any country over the periods estimated, indicating that shifts in the major causes of illness and death over time do not appear to have influenced the link between health and economic growth. Copyright © 2010 John Wiley & Sons, Ltd.

  9. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
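
    Nonparametric curve estimation of the kind used in this record is often based on kernel smoothing. A minimal Nadaraya-Watson regression sketch with a Gaussian kernel, shown on illustrative noise-free data rather than the ozone series:

```python
import math

def nadaraya_watson(xs, ys, x0, h):
    """Nadaraya-Watson kernel regression estimate at x0 with bandwidth h:
    a locally weighted average of the responses, with Gaussian weights."""
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Points on y = x^2; with a small bandwidth the estimate at x0 = 1.0
# is a local weighted average close to 1.0 (plus a small smoothing bias).
xs = [i / 10 for i in range(-30, 31)]
ys = [x * x for x in xs]
est = nadaraya_watson(xs, ys, x0=1.0, h=0.2)
```

    The bandwidth h controls the bias-variance trade-off: larger values smooth more aggressively, smaller values track the data more closely; in applied work it is typically chosen by cross-validation.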

  10. An Instructional Module on Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.

    2017-01-01

    Mokken scale analysis (MSA) is a probabilistic-nonparametric approach to item response theory (IRT) that can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore…

  11. Nonparametric Residue Analysis of Dynamic PET Data With Application to Cerebral FDG Studies in Normals.

    PubMed

    O'Sullivan, Finbarr; Muzi, Mark; Spence, Alexander M; Mankoff, David M; O'Sullivan, Janet N; Fitzgerald, Niall; Newman, George C; Krohn, Kenneth A

    2009-06-01

    Kinetic analysis is used to extract metabolic information from dynamic positron emission tomography (PET) uptake data. The theory of indicator dilutions, developed in the seminal work of Meier and Zierler (1954), provides a probabilistic framework for representing PET tracer uptake data as a convolution between an arterial input function and a tissue residue. The residue is a scaled survival function associated with tracer residence in the tissue. Nonparametric inference for the residue, a deconvolution problem, provides a novel approach to kinetic analysis, critically one that is not reliant on specific compartmental modeling assumptions. A practical computational technique based on regularized cubic B-spline approximation of the residence time distribution is proposed. Nonparametric residue analysis allows formal statistical evaluation of specific parametric models; this analysis needs to properly account for the increased flexibility of the nonparametric estimator. The methodology is illustrated using data from a series of cerebral studies with PET and fluorodeoxyglucose (FDG) in normal subjects. Comparisons are made between key functionals of the residue (tracer flux, flow, etc.) resulting from a parametric analysis (the standard two-compartment model of Phelps et al. 1979) and a nonparametric analysis. Strong statistical evidence against the compartment model is found. Primarily these differences relate to the representation of the early temporal structure of the tracer residence, largely a function of the vascular supply network. There are convincing physiological arguments against the representations implied by the compartmental approach, but this is the first time that a rigorous statistical confirmation using PET data has been reported. The compartmental analysis produces suspect values for flow but, notably, the impact on the metabolic flux, though statistically significant, is limited to deviations on the order of 3%-4%.
The general advantage of the nonparametric residue analysis is the ability to provide a valid kinetic quantitation in the context of studies where there may be heterogeneity or other uncertainty about the accuracy of a compartmental model approximation of the tissue residue.
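
    The convolution representation described above can be sketched numerically. The snippet below is an illustrative toy, not the paper's regularized B-spline estimator: it builds a tissue time-activity curve by convolving a hypothetical arterial input with a one-compartment exponential residue, whose value at t = 0 plays the role of a flow-like functional. All numbers (`K1`, `k2`, the input shape) are made up for illustration.

```python
import numpy as np

def tissue_curve(input_fn, residue, dt):
    """Tissue time-activity curve as the discrete convolution conv(input, residue)*dt."""
    return np.convolve(input_fn, residue)[: len(input_fn)] * dt

dt = 0.5                       # minutes per sample (made up)
t = np.arange(0, 30, dt)
cin = t * np.exp(-t)           # hypothetical arterial input function
K1, k2 = 0.1, 0.05             # hypothetical one-compartment constants
R = K1 * np.exp(-k2 * t)       # residue: a scaled survival function, R(0) = K1

ct = tissue_curve(cin, R, dt)  # modelled tissue uptake curve

print(round(R[0], 3))          # 0.1 -- a flow-like functional read off the residue
```

    The nonparametric approach in the paper estimates R freely from `ct` and `cin` rather than assuming the exponential form used here.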

  12. An Analysis on the Unemployment Rate in the Philippines: A Time Series Data Approach

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Tampis, R. L.; E Atienza, JB

    2017-03-01

    This study aims to formulate a mathematical model for forecasting and estimating the unemployment rate in the Philippines. It also determines which of the considered variables, namely Labor Force Rate, Population, Inflation Rate, Gross Domestic Product (GDP), and Gross National Income (GNI), can predict unemployment. Granger-causal relationships and cointegration among the dependent and independent variables are examined using the pairwise Granger causality test and the Johansen cointegration test. The data were acquired from the Philippine Statistics Authority, the National Statistics Office, and the Bangko Sentral ng Pilipinas. Following the Box-Jenkins method, the formulated model for forecasting the unemployment rate is SARIMA (6, 1, 5) × (0, 1, 1)4 with a coefficient of determination of 0.79. The actual values are 99 percent identical to the predicted values obtained through the model, and 72 percent close to the forecasted ones. According to the results of the regression analysis, Labor Force Rate and Population are the significant factors of the unemployment rate. Among the independent variables, Population, GDP, and GNI showed a Granger-causal relationship with unemployment. It is also found that there are at least four cointegrating relations between the dependent and independent variables.
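
    The pairwise Granger causality step used above can be illustrated with a minimal sketch on synthetic data (not the Philippine series). The hypothetical `granger_f` helper compares a restricted lag regression of y on its own past with an unrestricted one that adds lags of x, and returns the usual F-statistic:

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic: do lags of x improve an AR(p) regression of y?"""
    n = len(y)
    rows = range(p, n)
    Y = np.array([y[t] for t in rows])
    Xr = np.array([[1.0] + [y[t - i] for i in range(1, p + 1)] for t in rows])
    Xu = np.array([[1.0] + [y[t - i] for i in range(1, p + 1)]
                         + [x[t - i] for i in range(1, p + 1)] for t in rows])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df1, df2 = p, len(Y) - Xu.shape[1]
    return ((rss_r - rss_u) / df1) / (rss_u / df2)

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):          # y is driven by the first lag of x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_f(y, x) > granger_f(x, y))  # True: x Granger-causes y, not vice versa
```

    In practice the F-statistic is compared against an F(p, n - 2p - 1) critical value; the study's pairwise tests follow the same restricted-versus-unrestricted logic.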

  13. Assessment of the interactions between economic growth and industrial wastewater discharges using co-integration analysis: a case study for China's Hunan Province.

    PubMed

    Xiao, Qiang; Gao, Yang; Hu, Dan; Tan, Hong; Wang, Tianxiang

    2011-07-01

    We have investigated the interactions between economic growth and industrial wastewater discharge from 1978 to 2007 in China's Hunan Province using co-integration theory and an error-correction model. Two main economic growth indicators and four representative industrial wastewater pollutants were selected to demonstrate the interaction mechanism. We found a long-term equilibrium relationship between economic growth and the discharge of industrial pollutants in wastewater over this period. The error-correction mechanism constrained the expansion of the variables around the long-term relationship in quantity and scale, and the size of the error-correction parameters reflected short-term adjustments toward the long-term equilibrium. When economic growth changes in the short term, the discharge of pollutants constrains growth, because the parameter values in the short-term equation are smaller than those in the long-term co-integrated regression equation; this indicates a remarkable long-term influence of economic growth on the discharge of industrial wastewater pollutants, and that increasing pollutant discharge constrained economic growth. Economic growth is the main driving factor affecting the discharge of industrial wastewater pollutants in Hunan Province. On the other hand, the discharge constrains economic growth by exerting external pressure on growth, although this feedback mechanism has a lag effect. Economic growth plays an important role in explaining the forecast variance decomposition of the discharge of industrial wastewater pollutants, but the discharge contributes less to predictions of variations in economic growth.
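
    The two-step logic behind an error-correction model, a long-run co-integrating regression whose lagged residual then enters a short-run equation in differences, can be sketched on synthetic data (the Hunan series themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = np.cumsum(rng.normal(size=n))      # I(1) "growth" proxy: a random walk
y = 2.0 * x + rng.normal(size=n)       # "discharge" proxy, cointegrated with x

# Step 1: long-run (co-integrating) regression y = a + b*x + e
A = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
eq_dev = y - (a + b * x)               # deviations from long-run equilibrium

# Step 2: short-run equation in differences with the error-correction term
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), dx, eq_dev[:-1]])
coef = np.linalg.lstsq(Z, dy, rcond=None)[0]
alpha = coef[2]                        # error-correction parameter

print(alpha < 0)  # True: deviations are pulled back toward equilibrium
```

    The size of `alpha` is the "error-correction parameter" the abstract refers to: it measures how quickly short-term deviations from the long-run relationship are corrected.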

  15. Granger Test to Determine Causes of Harmful Algal Blooms in Tai Lake during the Last Decade

    NASA Astrophysics Data System (ADS)

    Guo, W.; Wu, F.

    2016-12-01

    Eutrophication-driven harmful cyanobacteria blooms can threaten the stability of lake ecosystems. A key to solving this problem is identifying the main cause of algal blooms so that appropriate remediation can be employed. A test of causality was used to analyze data for Meiliang Bay in Tai Lake (Ch: Taihu) from 2000 to 2012. After the data were filtered using stationarity and co-integration tests, the Granger causality test and impulse response analysis were used to evaluate potential bloom causes, from physicochemical parameters to chlorophyll-a concentration. Stationarity tests showed that the logarithms of Secchi disk depth (lnSD), suspended solids (lnSS), and lnNH4-N/NOx-N, as well as pH, were stationary over time and could not be considered causal for the changes in phytoplankton biomass observed during that period. Co-integration tests indicated long-run co-integrating relationships among the natural logarithms of chlorophyll-a (lnChl-a), water temperature (lnWT), total organic carbon (lnTOC) and the ratio of nitrogen to phosphorus (lnN/P). The Granger causality test suggested that once thresholds for nutrients such as nitrogen and phosphorus had been reached, WT could increase the likelihood or severity of cyanobacteria blooms. A unidirectional Granger relationship from N/P to Chl-a was established; this indicated that because concentrations of TN in Meiliang Bay had reached their thresholds, TN no longer limited the proliferation of cyanobacteria, and TP should be controlled to reduce the likelihood of algal blooms. The impulse response analysis implied that lagged effects of water temperature and the N/P ratio could influence the variation of Chl-a concentration at certain lag periods. These results can advance understanding of the mechanisms of harmful cyanobacteria bloom formation.
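
    The stationarity screening step above works by testing each series for a unit root. A minimal Dickey-Fuller-style check, a simplified stand-in for the full augmented test such a study would use, can be sketched as follows (the two series are synthetic, not the Tai Lake measurements):

```python
import numpy as np

def df_tstat(series):
    """t-statistic on rho in Delta y_t = c + rho*y_{t-1} + e_t.
    Values well below about -2.9 suggest a stationary series."""
    y = np.asarray(series, float)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, res = np.linalg.lstsq(X, dy, rcond=None)[:2]
    sigma2 = res[0] / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(2)
stationary = rng.normal(size=500)              # white noise, like lnSD or pH
random_walk = np.cumsum(rng.normal(size=500))  # a unit-root series

print(round(df_tstat(stationary), 1), round(df_tstat(random_walk), 1))
```

    Series found stationary at levels are excluded from the co-integration step, exactly the filtering role the test plays in the study.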

  16. The relationship between economic growth, energy consumption, and CO2 emissions: Empirical evidence from China.

    PubMed

    Wang, Shaojian; Li, Qiuying; Fang, Chuanglin; Zhou, Chunshan

    2016-01-15

    Following several decades of rapid economic growth, China has become the largest energy consumer and the greatest emitter of CO2 in the world. Given the complex development situation faced by contemporary China, Chinese policymakers now confront the dual challenge of reducing energy use while continuing to foster economic growth. This study posits that a better understanding of the relationship between economic growth, energy consumption, and CO2 emissions is necessary in order for the Chinese government to develop energy-saving and emission-reduction strategies for addressing the impacts of climate change. This paper investigates the cointegrating, temporally dynamic, and causal relationships that exist between economic growth, energy consumption, and CO2 emissions in China, using data for the period 1990-2012. The study develops a comprehensive conceptual framework in order to perform this analysis. The results of cointegration tests suggest the existence of a long-run cointegrating relationship among the variables, albeit with short dynamic adjustment mechanisms, indicating that the proportion of disequilibrium errors adjusted in the next period accounts for only a fraction of the changes. Further, impulse response analysis (which describes the reaction of any variable, as a function of time, to external shocks) found that the impact of a shock in CO2 emissions on economic growth or energy consumption was only marginally significant. Finally, Granger causal relationships were found to exist between economic growth, energy consumption, and CO2 emissions; specifically, a bi-directional causal relationship between economic growth and energy consumption was identified, and a unidirectional causal relationship was found to run from energy consumption to CO2 emissions. 
The findings have significant implications for both academics and practitioners, warning of the need to develop and implement long-term energy and economic policies in order to effectively address greenhouse effects in China, thereby setting the nation on a low-carbon growth path. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Separating the Air Quality Impact of a Major Highway and Nearby Sources by Nonparametric Trajectory Analysis

    EPA Science Inventory

    Nonparametric Trajectory Analysis (NTA), a receptor-oriented model, was used to assess the impact of local sources of air pollution at monitoring sites located adjacent to highway I-15 in Las Vegas, NV. Measurements of black carbon, carbon monoxide, nitrogen oxides, and sulfur di...

  18. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    PubMed

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for measuring a hospital's technical efficiency. The comparison was made using a sample of 17 Italian hospitals over the years 1996-1999. The highest correlations in the efficiency scores are found between non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly with more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing hospital-level performance indicators using panel data, but extensive sensitivity analysis is important if purchasers wish to make use of these indicators in practice.
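
    The DEA-CRS scores discussed above come from solving one small linear program per hospital (decision-making unit). A minimal input-oriented CCR sketch using `scipy.optimize.linprog`, with made-up data rather than the Italian hospital sample:

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs(X, Y):
    """Input-oriented CCR (constant returns to scale) efficiency scores.
    X: inputs (n_dmu x n_inputs), Y: outputs (n_dmu x n_outputs)."""
    n = X.shape[0]
    scores = []
    for o in range(n):                         # one LP per decision-making unit
        c = np.r_[1.0, np.zeros(n)]            # minimise theta
        A_in = np.c_[-X[o][:, None], X.T]      # sum_j lam_j x_j <= theta * x_o
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # sum_j lam_j y_j >= y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical hospitals: one input (beds) and one output (treated cases).
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [3.0], [8.0]])
eff = dea_crs(X, Y)
print(np.round(eff, 2))   # the middle hospital is only 75% efficient
```

    DEA-VRS adds the convexity constraint that the lambda weights sum to one, which is one reason its scores (and hence correlations) differ from DEA-CRS.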

  19. Co-integration of nano-scale vertical- and horizontal-channel metal-oxide-semiconductor field-effect transistors for low power CMOS technology.

    PubMed

    Sun, Min-Chul; Kim, Garam; Kim, Sang Wan; Kim, Hyun Woo; Kim, Hyungjin; Lee, Jong-Ho; Shin, Hyungcheol; Park, Byung-Gook

    2012-07-01

    In order to extend the conventional low power Si CMOS technology beyond the 20-nm node without SOI substrates, we propose a novel co-integration scheme to build horizontal- and vertical-channel MOSFETs together and verify the idea using TCAD simulations. From the fabrication viewpoint, it is highlighted that this scheme provides additional vertical devices with good scalability by adding a few steps to the conventional CMOS process flow for fin formation. In addition, the benefits of the co-integrated vertical devices are investigated using a TCAD device simulation. From this study, it is confirmed that the vertical device shows improved off-current control and a larger drive current when the body dimension is less than 20 nm, due to the electric field coupling effect at the double-gated channel. Finally, the benefits from the circuit design viewpoint, such as the larger midpoint gain and beta and lower power consumption, are confirmed by the mixed-mode circuit simulation study.

  20. Wavelet transform approach for fitting financial time series data

    NASA Astrophysics Data System (ADS)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

    This study investigates a newly developed technique, combining wavelet filtering with a VEC model, to study the dynamic relationships among financial time series. A wavelet filter is used to remove noise from daily data for the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the wavelet-filtered series and the original series are then analyzed with a cointegration test and a VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series, revealing real information about the relationships among the stock markets.
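
    The wavelet filtering idea, transform, suppress noise-dominated detail coefficients, invert, can be sketched with a single-level Haar transform. This is a much simpler filter than whatever DWT family the authors used, and the series here is synthetic, not market data:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet denoising: transform, soft-threshold the
    detail coefficients, inverse transform. len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients (noise-heavy)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)          # stand-in for a smooth price signal
noisy = clean + 0.3 * rng.normal(size=256)
smooth = haar_denoise(noisy, thresh=0.4)

print(np.mean((smooth - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

    In the study, the denoised series (rather than the raw returns) are what feed the cointegration test and VEC model.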

  1. A framework for multivariate data-based at-site flood frequency analysis: Essentiality of the conjugal application of parametric and nonparametric approaches

    NASA Astrophysics Data System (ADS)

    Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar

    2015-06-01

    In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation remained that the analyses were performed using only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA. A nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and copula-based applications, but the potential for obtaining the best fit may be improved, because nonparametric distributions reproduce the sample's characteristics, resulting in more accurate estimates of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, it can also be applied to regional FFA, because regional estimations ideally include at-site estimations. 
The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution, from a comprehensive set of parametric and nonparametric distributions, for the flood variables; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA; for seven rivers, all the flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers, parametric distributions provide the best fit for one or two flood variables. In summary, the nonparametric method cannot substitute for the parametric and copula-based approaches, but it should be considered during any at-site FFA to provide the broadest choice for best estimation of the flood return periods.
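
    The nonparametric Gaussian kernel estimate referred to above has a compact form: the density at a point is the average of Gaussian bumps centred at the observations. A sketch with hypothetical peak-flow values (not the Allegheny record):

```python
import numpy as np

def gaussian_kde_pdf(data, grid, h=None):
    """Nonparametric density estimate with a Gaussian kernel.
    Bandwidth h defaults to Silverman's rule of thumb."""
    data = np.asarray(data, float)
    if h is None:
        h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# Hypothetical annual peak flows (illustrative, not the study's data).
peaks = np.array([210, 250, 265, 300, 320, 345, 360, 410, 520, 700], float)
grid = np.linspace(-500, 1500, 2001)
pdf = gaussian_kde_pdf(peaks, grid)

area = pdf.sum() * (grid[1] - grid[0])   # Riemann sum over a wide grid
print(round(area, 2))  # 1.0 -- the estimate is a proper density
```

    Because the kernel estimate reproduces the sample's characteristics directly, the fitted marginal can then feed the joint and conditional return-period calculations in steps (iii)-(iv).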

  2. Scale-Free Nonparametric Factor Analysis: A User-Friendly Introduction with Concrete Heuristic Examples.

    ERIC Educational Resources Information Center

    Mittag, Kathleen Cage

    Most researchers using factor analysis extract factors from a matrix of Pearson product-moment correlation coefficients. A method is presented for extracting factors in a non-parametric way, by extracting factors from a matrix of Spearman rho (rank correlation) coefficients. It is possible to factor analyze a matrix of association such that…

  3. Co-integrating plasmonics with Si3N4 photonics towards a generic CMOS compatible PIC platform for high-sensitivity multi-channel biosensors: the H2020 PlasmoFab approach (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tsiokos, Dimitris M.; Dabos, George; Ketzaki, Dimitra; Weeber, Jean-Claude; Markey, Laurent; Dereux, Alain; Giesecke, Anna Lena; Porschatis, Caroline; Chmielak, Bartos; Wahlbrink, Thorsten; Rochracher, Karl; Pleros, Nikos

    2017-05-01

    Silicon photonics meets most fabrication requirements of standard CMOS process lines, encompassing the photonics-electronics consolidation vision. Despite this remarkable progress, further miniaturization of PICs, for common integration with electronics and for increasing PIC functional density, is bounded by the inherent diffraction limit of light imposed by optical waveguides. Instead, Surface Plasmon Polariton (SPP) waveguides can guide light at sub-wavelength scales at the metal surface, providing unique light-matter interaction properties while exploiting their metallic nature to naturally integrate with electronics in high-performance ASPICs. In this article, we present the main goals of the recently introduced H2020 project PlasmoFab, which addresses the ever-increasing needs for low-energy, small-size and high-performance mass-manufactured PICs by developing a revolutionary yet CMOS-compatible fabrication platform for seamless co-integration of plasmonics with photonics and supporting electronics. We demonstrate recent advances on the hosting SiN photonic platform, reporting on low-loss passive SiN waveguide and grating coupler circuits for both the TM and TE polarization states. We also present experimental results for plasmonic gold thin-film and hybrid slot waveguide configurations that can allow for high-sensitivity sensing, along with ongoing activities towards replacing gold with Cu, Al or TiN in order to yield the same functionality with a CMOS-compatible metallic structure. Finally, the first experimental results on the co-integrated SiN+plasmonic platform are demonstrated, concluding with an initial theoretical performance analysis of the CMOS plasmo-photonic biosensor, which has the potential to allow for sensitivities beyond 150000 nm/RIU.

  4. Conventional and advanced time series estimation: application to the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database, 1993-2006.

    PubMed

    Moran, John L; Solomon, Patricia J

    2011-02-01

    Time series analysis has seen limited application in the biomedical literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction and patient type (medical and surgical) were generated from the Australia and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, structural time changes in the series, mortality series volatility using autoregressive moving average and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction model) cointegrating relationships between series. The mortality series exhibited marked seasonality, a declining mortality trend and substantial autocorrelation beyond 24 lags. Mortality increased in the winter months (July-August); the medical series featured annual cycling, whereas the surgical series demonstrated long and short (3-4 month) cycling. Structural breaks in the series were apparent in January 1995 and December 2002. The covariance-stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual ARCH effects entailed a GARCH model, preferred by information criterion and mean model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality and ventilation fraction}. 
A system approach to understanding series time-dependence may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.

  5. Integrated Airframe Design Technology (Les Technologies pour la Conception Integree des Cellules)

    DTIC Science & Technology

    1993-12-01

    ...thus encouraging a stronger interaction between the organisations, which points towards concurrent joint engineering for airframe design. Co-location of personnel from different disciplines will be necessary, but this could take the form of a "co...integrated analysis tool (e.g., ELFINI) for managing aeroelasticity. Their presentation highlighted the development of an Aeroelastic Design...

  6. Multivariate co-integration analysis of the Kaya factors in Ghana.

    PubMed

    Asumadu-Sarkodie, Samuel; Owusu, Phebe Asantewaa

    2016-05-01

    The fundamental goal of the Government of Ghana's development agenda, as enshrined in the Growth and Poverty Reduction Strategy, to grow the economy to a middle-income status of US$1000 per capita by the end of 2015 could be met by increasing the labour force, increasing energy supplies and expanding the energy infrastructure in order to achieve the sustainable development targets. In this study, a multivariate co-integration analysis of the Kaya factors, namely carbon dioxide, total primary energy consumption, population and GDP, was performed for Ghana using a vector error correction model with data spanning 1980 to 2012. Our results show the existence of long-run causality running from population, GDP and total primary energy consumption to carbon dioxide emissions. There is also evidence of short-run causality running from population to carbon dioxide emissions, and a bi-directional causality between carbon dioxide emissions and energy consumption. In other words, decreasing primary energy consumption in Ghana will directly reduce carbon dioxide emissions. In addition, a bi-directional causality between GDP and energy consumption exists in the multivariate model. It is plausible that access to energy is related to increasing economic growth and productivity in Ghana.

  7. R-factor cointegrate formation in Salmonella typhimurium bacteriophage type 201 strains.

    PubMed Central

    Helmuth, R; Stephan, R; Bulling, E; van Leeuwen, W J; van Embden, J D; Guinée, P A; Portnoy, D; Falkow, S

    1981-01-01

    The genetic and molecular properties of the plasmids in Salmonella typhimurium phage type 201 isolates are described. Such strains are resistant to streptomycin, tetracycline, chloramphenicol, ampicillin, kanamycin, and several other antimicrobial drugs, and are highly pathogenic for calves. These strains have been encountered with increasing frequency since 1972 in West Germany and The Netherlands. We show that isolates of this phage type constitute a very homogeneous group with regard to their extrachromosomal elements. These bacteria carry three small plasmids: pRQ3, a 4.2-megadalton (Md) colicinogenic plasmid; pRQ4, a 3.4-Md plasmid that interferes with the propagation of phages; and pRQ5, a 3.2-Md cryptic plasmid. Tetracycline resistance resides on a conjugative 120-Md plasmid, pRQ1, belonging to incompatibility class H2. Other antibiotic resistance determinants are encoded by a nonconjugative 108-Md plasmid, pRQ2. Transfer of multiple-antibiotic resistance to appropriate recipient strains was associated with the appearance of a 230-Md plasmid, pRQ6. It appears that pRQ6 is a stable cointegrate of pRQ1 and pRQ2. This cointegrate plasmid was transferable with the same efficiency as pRQ1. Other conjugative plasmids could mobilize pRQ2, but stable cointegrates were not detected in the transconjugants. Phage type 201 strains carry a prophage, and we show that phage pattern 201 reflects the interference with propagation of typing phages effected by this prophage and plasmid pRQ4 in strains of phage type 201. PMID:7012128

  8. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  9. Testing for the validity of purchasing power parity theory both in the long-run and the short-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-11-01

    Purchasing power parity theory says that the exchange rate between two nations ought to be equal to the ratio of the aggregate price levels of the two nations. For more than a decade, there has been substantial interest in testing the validity of purchasing power parity (PPP) empirically. This paper performs a series of tests to see whether PPP is valid for the ASEAN-5 nations for the period 2000-2016, using monthly data. For this purpose, we conducted four different stationarity tests and two cointegration tests (Pedroni and Westerlund), and estimated a VAR model. The stationarity (unit root) tests reveal that the variables are not stationary in levels but are stationary in first differences. The cointegration test results did not reject the H0 of no cointegration, implying the absence of a long-run association among the variables, and the results of the VAR model did not reveal a strong short-run relationship. Based on the data, we therefore conclude that PPP was not valid in either the long run or the short run for ASEAN-5 during 2000-2016.

  10. A nonparametric analysis of plot basal area growth using tree based models

    Treesearch

    G. L. Gadbury; H. K. lyer; H. T. Schreuder; C. Y. Ueng

    1997-01-01

    Tree based statistical models can be used to investigate data structure and predict future observations. We used nonparametric and nonlinear models to reexamine the data sets on tree growth used by Bechtold et al. (1991) and Ruark et al. (1991). The growth data were collected by Forest Inventory and Analysis (FIA) teams from 1962 to 1972 (4th cycle) and 1972 to 1982 (...

  11. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  12. Unveiling acoustic physics of the CMB using nonparametric estimation of the temperature angular power spectrum for Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; Shafieloo, Arman; Arjunwadkar, Mihir

    2015-02-01

    Estimation of the angular power spectrum is one of the important steps in Cosmic Microwave Background (CMB) data analysis. Here, we present a nonparametric estimate of the temperature angular power spectrum for the Planck 2013 CMB data. The method implemented in this work is model-independent, and allows the data, rather than the model, to dictate the fit. Since one of the main targets of our analysis is to test the consistency of the ΛCDM model with Planck 2013 data, we use the nuisance parameters associated with the best-fit ΛCDM angular power spectrum to remove foreground contributions from the data at multipoles ℓ ≥ 50. We thus obtain a combined angular power spectrum data set together with the full covariance matrix, appropriately weighted over frequency channels. Our subsequent nonparametric analysis resolves six peaks (and five dips) up to ℓ ∼ 1850 in the temperature angular power spectrum. We present uncertainties in the peak/dip locations and heights at the 95% confidence level. We further show how these reflect the harmonicity of acoustic peaks, and can be used for acoustic scale estimation. Based on this nonparametric formalism, we found the best-fit ΛCDM model to be at 36% confidence distance from the center of the nonparametric confidence set; this is considerably larger than the confidence distance (9%) derived earlier from a similar analysis of the WMAP 7-year data. Another interesting result of our analysis is that at low multipoles, the Planck data do not suggest any upturn, contrary to the expectation based on the integrated Sachs-Wolfe contribution in the best-fit ΛCDM cosmology.

  13. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
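    The isotonic-regression step underlying this BMD approach can be illustrated with the classic pool-adjacent-violators algorithm (PAVA). The sketch below is a minimal, self-contained illustration, not the authors' implementation; the dose-response proportions are invented for the example.

```python
# Sketch: pool-adjacent-violators algorithm (PAVA) for a monotone
# (non-decreasing) dose-response fit, the building block of an isotonic
# BMD analysis.  Function name and data are illustrative, not the
# authors' code.

def pava(y, w=None):
    """Weighted isotonic (non-decreasing) regression via PAVA."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

# observed response proportions at increasing doses (invented data)
props = [0.05, 0.02, 0.10, 0.30, 0.25, 0.60]
fit = pava(props)  # non-decreasing fit; adjacent violators are pooled
```

    With equal weights PAVA preserves the overall mean while enforcing monotonicity, which is why it is a natural fit for quantal dose-response data.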

  14. Summarizing techniques that combine three non-parametric scores to detect disease-associated 2-way SNP-SNP interactions.

    PubMed

    Sengupta Chattopadhyay, Amrita; Hsiao, Ching-Lin; Chang, Chien Ching; Lian, Ie-Bin; Fann, Cathy S J

    2014-01-01

    Identifying susceptibility genes that influence complex diseases is extremely difficult because loci often influence the disease state through genetic interactions. Numerous approaches to detect disease-associated SNP-SNP interactions have been developed, but none consistently generates high-quality results under different disease scenarios. Using summarizing techniques to combine a number of existing methods may provide a solution to this problem. Here we used three popular non-parametric methods (Gini, absolute probability difference (APD), and entropy) to develop two novel summary scores, namely the principal component score (PCS) and the Z-sum score (ZSS), with which to predict disease-associated genetic interactions. We used a simulation study to compare performance of the non-parametric scores, the summary scores, the scaled-sum score (SSS; used in polymorphism interaction analysis (PIA)), and multifactor dimensionality reduction (MDR). The non-parametric methods achieved high power, but no non-parametric method outperformed all others under a variety of epistatic scenarios. PCS and ZSS, however, outperformed MDR. PCS, ZSS and SSS displayed controlled type-I errors (<0.05) compared to the single-method scores GS, APDS and ES (>0.05). A real data study using the Genetic Analysis Workshop 16 (GAW 16) rheumatoid arthritis dataset identified a number of interesting SNP-SNP interactions. © 2013 Elsevier B.V. All rights reserved.

  15. Assessment of Dimensionality in Social Science Subtest

    ERIC Educational Resources Information Center

    Ozbek Bastug, Ozlem Yesim

    2012-01-01

    Most of the literature on dimensionality has focused either on comparing parametric and nonparametric dimensionality detection procedures or on showing the effectiveness of one type of procedure. No known study has shown how to perform a combined parametric and nonparametric dimensionality analysis on real data. The current study aims to fill…

  16. Measuring Youth Development: A Nonparametric Cross-Country "Youth Welfare Index"

    ERIC Educational Resources Information Center

    Chaaban, Jad M.

    2009-01-01

    This paper develops an empirical methodology for the construction of a synthetic multi-dimensional cross-country comparison of the performance of governments around the world in improving the livelihood of their younger population. The devised "Youth Welfare Index" is based on the nonparametric Data Envelopment Analysis (DEA) methodology and…

  17. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two-parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
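    The basic Good-Turing idea referenced above is easy to sketch: the estimated probability that the next observation is a previously unseen species equals the fraction of singletons in the sample. A minimal illustration on toy data (not the paper's smoothed estimators or credible intervals):

```python
# Minimal Good-Turing sketch: the "missing mass" estimate.  Toy data,
# not the article's smoothed estimators.
from collections import Counter

def good_turing_new_species_prob(sample):
    """Good-Turing estimate of the probability that the next draw is a
    previously unseen species: m1/n, where m1 counts singleton species."""
    counts = Counter(sample)
    n = len(sample)
    m1 = sum(1 for c in counts.values() if c == 1)
    return m1 / n

tags = list("aabbbccddddde")   # toy sample: species e is a singleton
p_new = good_turing_new_species_prob(tags)
```

    Here `p_new` is 1/13, since one of the thirteen draws belongs to a species observed exactly once.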

  18. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, difficulties center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  19. The relationship between energy consumption and economic growth in Malaysia: ARDL bound test approach

    NASA Astrophysics Data System (ADS)

    Razali, Radzuan; Khan, Habib; Shafie, Afza; Hassan, Abdul Rahman

    2016-11-01

    The objective of this paper is to examine the short-run and long-run dynamic causal relationship between energy consumption and income per capita, in both a bivariate and a multivariate framework, over the period 1971-2014 in the case of Malaysia [1]. The study applies the ARDL bound test procedure for long-run co-integration and the Granger causality test to investigate the causal link between the variables. The ARDL bound test confirms the existence of a long-run co-integration relationship between the variables. The causality tests support a feedback hypothesis between income per capita and energy consumption over the period in the case of Malaysia.

  20. The opportune time to invest in residential properties - Engle-Granger cointegration test and Granger causality test approach

    NASA Astrophysics Data System (ADS)

    Chee-Yin, Yip; Hock-Eam, Lim

    2014-12-01

    Using housing supply as a proxy for house prices, this paper examines the causal relationships among house prices in 8 states of Malaysia by applying the Engle-Granger cointegration test and the Granger causality test. The target states are Perak, Selangor, Penang, the Federal Territory of Kuala Lumpur (WPKL or Kuala Lumpur), Kedah, Negeri Sembilan, Sabah and Sarawak. The primary aim of this study is to estimate how long (in months) house prices in Perak lag behind those of Selangor, Penang and WPKL. We classify the 8 states into two categories, developed and developing states, and use the Engle-Granger cointegration test and Granger causality test to examine the long-run and short-run equilibrium relationships between the two categories. It is found that the causal relationship is bidirectional between Perak and Sabah and between Perak and Selangor, while it is unidirectional for Perak and Sarawak, Perak and Penang, and Perak and WPKL. The speed of deviation adjustment is about 273%, suggesting that the pricing dynamic of Perak has a 32-month (2 3/4-year) lag behind that of WPKL, Selangor and Penang. Such information will be useful to investors, house buyers and speculators.
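    For readers unfamiliar with the Engle-Granger two-step procedure used in studies like this one, a minimal numpy sketch follows: regress one series on the other by OLS, then run a Dickey-Fuller-style regression on the residuals. This is a bare-bones illustration on simulated data, not the paper's analysis; the resulting t-statistic must be compared with Engle-Granger (not standard t) critical values, which are omitted here.

```python
# Engle-Granger two-step sketch on simulated cointegrated series.
# Illustrative only: no lag augmentation, and the Engle-Granger critical
# values needed for a formal decision are not included.
import numpy as np

def engle_granger_tstat(y, x):
    """Step 1: OLS of y on x.  Step 2: no-lag Dickey-Fuller regression
    du = rho * u_lag + e on the step-1 residuals; return the t-statistic
    on rho (strongly negative suggests stationary residuals)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                       # step-1 residuals
    du, ulag = np.diff(u), u[:-1]          # step-2 DF regression
    rho = (ulag @ du) / (ulag @ ulag)
    resid = du - rho * ulag
    s2 = resid @ resid / (len(du) - 1)
    se = np.sqrt(s2 / (ulag @ ulag))
    return rho / se

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=400))        # a random walk
y = 2.0 + 0.5 * x + rng.normal(size=400)   # cointegrated with x
tstat = engle_granger_tstat(y, x)          # strongly negative here
```

    Because `y - 0.5*x` is stationary noise by construction, the residual series mean-reverts and the t-statistic comes out far below zero.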

  1. Renewable energy consumption and economic growth in nine OECD countries: bounds test approach and causality analysis.

    PubMed

    Hung-Pin, Lin

    2014-01-01

    The purpose of this paper is to investigate the short-run and long-run causality between renewable energy (RE) consumption and economic growth (EG) in nine OECD countries over the period 1982-2011. To examine the linkage, this paper uses the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and vector error-correction models to test the causal relationship between variables. Co-integration and causal relationships are found in five countries: the United States of America (USA), Japan, Germany, Italy, and the United Kingdom (UK). The overall results indicate that (1) a short-run unidirectional causality runs from EG to RE in Italy and the UK; (2) long-run unidirectional causalities run from RE to EG for Germany, Italy, and the UK; (3) a long-run unidirectional causality runs from EG to RE in the USA and Japan; (4) both long-run and strong unidirectional causalities run from RE to EG for Germany and the UK; and (5) both long-run and strong unidirectional causalities run from EG to RE in the USA only. Further evidence reveals that policies for renewable energy conservation may have no impact on economic growth in France, Denmark, Portugal, and Spain.

  2. Renewable energy, carbon emissions, and economic growth in 24 Asian countries: evidence from panel cointegration analysis.

    PubMed

    Lu, Wen-Cheng

    2017-11-01

    This article aims to investigate the relationship among renewable energy consumption, carbon dioxide (CO2) emissions, and GDP using panel data for 24 Asian countries between 1990 and 2012. Panel cross-sectional dependence tests and a unit root test, which considers cross-sectional dependence across countries, are used to ensure that the empirical results are correct. Using the panel cointegration model, the vector error correction model, and the Granger causality test, this paper finds that a long-run equilibrium exists among renewable energy consumption, carbon emissions, and GDP. CO2 emissions have a positive effect on renewable energy consumption in the Philippines, Pakistan, China, Iraq, Yemen, and Saudi Arabia. A 1% increase in GDP will increase renewable energy by 0.64%. Renewable energy is significantly determined by GDP in India, Sri Lanka, the Philippines, Thailand, Turkey, Malaysia, Jordan, United Arab Emirates, Saudi Arabia, and Mongolia. A unidirectional causality runs from GDP to CO2 emissions, and two bidirectional causal relationships were found: between CO2 emissions and renewable energy consumption, and between renewable energy consumption and GDP. The findings can assist governments in curbing air pollution, executing energy conservation policy, and reducing unnecessary wastage of energy.

  3. A study on the causal effect of urban population growth and international trade on environmental pollution: evidence from China.

    PubMed

    Boamah, Kofi Baah; Du, Jianguo; Boamah, Angela Jacinta; Appiah, Kingsley

    2018-02-01

    This study seeks to contribute to the recent literature by empirically investigating the causal effect of urban population growth and international trade on environmental pollution of China, for the period 1980-2014. The Johansen cointegration confirmed a long-run cointegration association among the utilised variables for the case of China. The direction of causality among the variables was, consequently, investigated using the recent bootstrapped Granger causality test. This bootstrapped Granger causality approach is preferred as it provides robust and accurate critical values for statistical inferences. The findings from the causality analysis revealed the existence of a bi-directional causality between import and urban population. The three most paramount variables that explain the environmental pollution in China, according to the impulse response function, are imports, urbanisation and energy consumption. Our study further established the presence of an N-shaped environmental Kuznets curve relationship between economic growth and environmental pollution of China. Hence, our study recommends that China should adhere to stricter environmental regulations in international trade, as well as enforce policies that promote energy efficiency in the urban residential and commercial sector, in the quest to mitigate environmental pollution issues as the economy advances.

  4. Renewable Energy Consumption and Economic Growth in Nine OECD Countries: Bounds Test Approach and Causality Analysis

    PubMed Central

    Hung-Pin, Lin

    2014-01-01

    The purpose of this paper is to investigate the short-run and long-run causality between renewable energy (RE) consumption and economic growth (EG) in nine OECD countries over the period 1982-2011. To examine the linkage, this paper uses the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and vector error-correction models to test the causal relationship between variables. Co-integration and causal relationships are found in five countries: the United States of America (USA), Japan, Germany, Italy, and the United Kingdom (UK). The overall results indicate that (1) a short-run unidirectional causality runs from EG to RE in Italy and the UK; (2) long-run unidirectional causalities run from RE to EG for Germany, Italy, and the UK; (3) a long-run unidirectional causality runs from EG to RE in the USA and Japan; (4) both long-run and strong unidirectional causalities run from RE to EG for Germany and the UK; and (5) both long-run and strong unidirectional causalities run from EG to RE in the USA only. Further evidence reveals that policies for renewable energy conservation may have no impact on economic growth in France, Denmark, Portugal, and Spain. PMID:24558343

  5. On the relationship between health, education and economic growth: Time series evidence from Malaysia

    NASA Astrophysics Data System (ADS)

    Khan, Habib Nawaz; Razali, Radzuan B.; Shafei, Afza Bt.

    2016-11-01

    The objective of this paper is two-fold: first, to empirically investigate the effects of an enlarged number of healthy and well-educated people on economic growth in Malaysia within the Endogenous Growth Model framework; second, to examine the causal links between education, health and economic growth using annual time series data from 1981 to 2014 for Malaysia. The data series were checked for their time series properties using the ADF and KPSS tests. The long-run co-integration relationship was investigated with the vector autoregressive (VAR) method, and the short-run and long-run dynamic relationships with a vector error correction model (VECM). Causality analysis was performed through the Engle-Granger technique. The results showed a long-run co-integration relation and positively significant effects of education and health on economic growth in Malaysia, and also confirmed a feedback hypothesis between the variables in the case of Malaysia. The results are of policy relevance for the importance of human capital (health and education) to the growth process of Malaysia. Thus, it is suggested that policy makers focus on the education and health sectors for sustainable economic growth in Malaysia.

  6. Does Private Tutoring Work? The Effectiveness of Private Tutoring: A Nonparametric Bounds Analysis

    ERIC Educational Resources Information Center

    Hof, Stefanie

    2014-01-01

    Private tutoring has become popular throughout the world. However, evidence for the effect of private tutoring on students' academic outcome is inconclusive; therefore, this paper presents an alternative framework: a nonparametric bounds method. The present examination uses, for the first time, a large representative data-set in a European setting…

  7. Estimation of Spatial Dynamic Nonparametric Durbin Models with Fixed Effects

    ERIC Educational Resources Information Center

    Qian, Minghui; Hu, Ridong; Chen, Jianwei

    2016-01-01

    Spatial panel data models have been widely studied and applied in both scientific and social science disciplines, especially in the analysis of spatial influence. In this paper, we consider the spatial dynamic nonparametric Durbin model (SDNDM) with fixed effects, which takes the nonlinear factors into account base on the spatial dynamic panel…

  8. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  9. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  10. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
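    A pointwise percentile-bootstrap CI for a mean 1D trajectory, of the kind contrasted with 0D CIs above, can be sketched as follows. This toy version resamples subjects with replacement and takes pointwise quantiles on simulated trajectories; it deliberately omits the field-wide (RFT-style) multiple-comparisons corrections the paper discusses.

```python
# Pointwise percentile-bootstrap CI for the mean of 1D trajectories
# (rows = subjects, columns = time points).  Simulated data; a minimal
# sketch, not the paper's analysis pipeline.
import numpy as np

def bootstrap_ci_1d(trajs, n_boot=2000, alpha=0.05, seed=0):
    """Return pointwise (lo, hi) percentile-bootstrap bounds for the
    mean trajectory."""
    rng = np.random.default_rng(seed)
    n = trajs.shape[0]
    means = np.empty((n_boot, trajs.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample subjects
        means[b] = trajs[idx].mean(axis=0)
    lo = np.quantile(means, alpha / 2, axis=0)
    hi = np.quantile(means, 1 - alpha / 2, axis=0)
    return lo, hi

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
trajs = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=(20, 50))
lo, hi = bootstrap_ci_1d(trajs)
```

    Because each time point gets its own interval with no correction, the band's simultaneous coverage is below the nominal level, which is exactly the 0D-vs-1D issue the paper emphasises.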

  11. Coal consumption and economic growth in Taiwan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, H.Y.

    2000-03-01

    The purpose of this paper is to examine the causality issue between coal consumption and economic growth for Taiwan. The co-integration and Granger causality tests are applied to investigate the relationship between the two economic series. Results of the co-integration and Granger causality tests based on 1954--1997 Taiwan data show a unidirectional causality from economic growth to coal consumption with no feedback effects. This major finding supports the neutrality hypothesis of coal consumption with respect to economic growth. Further, the finding has practical policy implications for decision makers in the area of macroeconomic planning, as coal conservation is a feasible policy with no damaging repercussions on economic growth.
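    The Granger causality test used in studies like this one reduces to an F-test comparing a restricted autoregression of y against an unrestricted model that adds lags of x. A minimal numpy sketch on simulated data (illustrative only, not the paper's Taiwan series):

```python
# Granger causality F-test sketch: does adding p lags of x improve an
# AR(p) model of y?  Simulated data, minimal implementation.
import numpy as np

def granger_f(y, x, p=2):
    """F statistic for 'x Granger-causes y' with p lags of each series."""
    n = len(y)
    rows = range(p, n)
    Y = np.array([y[t] for t in rows])
    Zr = np.array([[1.0] + [y[t - i] for i in range(1, p + 1)]
                   for t in rows])                       # restricted
    Zu = np.array([[1.0] + [y[t - i] for i in range(1, p + 1)]
                   + [x[t - i] for i in range(1, p + 1)]
                   for t in rows])                       # unrestricted
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    df2 = len(Y) - Zu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df2)

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):                  # y is driven by lagged x
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
f_xy = granger_f(y, x)                   # large: x helps predict y
f_yx = granger_f(x, y)                   # small: y does not help predict x
```

    The statistic is compared against an F(p, n - 2p - 1) reference distribution; here the asymmetry of the two F values mirrors the unidirectional causality reported in the record above.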

  12. Estimating the Relationship between Economic Growth and Health Expenditures in ECO Countries Using Panel Cointegration Approach.

    PubMed

    Hatam, Nahid; Tourani, Sogand; Homaie Rad, Enayatollah; Bastani, Peivand

    2016-02-01

    Increasing knowledge of people about health leads to a continuously rising share of health expenditures in government budgets, although governments do not welcome this rise because of budget limitations. This study aimed to find the association between health expenditures and economic growth in ECO countries. We added health capital to the Solow model and used the panel cointegration approach to show the importance of health expenditures in economic growth. For estimating the model, we first used the Pesaran cross-sectional dependency test, then the Pesaran CADF unit root test, and then the Westerlund panel cointegration test to determine whether there is a long-term association between the variables. After that, we used the Chow test, Breusch-Pagan test and Hausman test to find the form of the model. Finally, we used an OLS estimator for panel data. Findings showed that there is a positive, strong association between health expenditures and economic growth in ECO countries. If governments increase investment in health, the total production of the country will increase, so health expenditures are considered an investment good. The effects of health expenditures in developing countries must be higher than those in developed countries. Such studies can help policy makers to make long-term decisions.

  13. Kruskal-Wallis test: BASIC computer program to perform nonparametric one-way analysis of variance and multiple comparisons on ranks of several independent samples.

    PubMed

    Theodorsson-Norheim, E

    1986-08-01

    Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons or the adjustment of the p values according to Bonferroni would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
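    The BASIC listing itself is not reproduced in this record; as an illustration of what the program computes, here is a self-contained Python sketch of the Kruskal-Wallis H statistic with midranks and the usual tie correction (the multiple-comparisons step described in the abstract is omitted):

```python
# Kruskal-Wallis H statistic with midranks and tie correction.
# A modern sketch of what the BASIC program computes, not the original
# listing; for moderate samples H is referred to a chi-square with
# k-1 degrees of freedom.

def kruskal_wallis_h(*groups):
    data = [(v, g) for g, grp in enumerate(groups) for v in grp]
    data.sort()
    n = len(data)
    ranks = [0.0] * n
    ties = 0
    i = 0
    while i < n:                            # assign midranks to ties
        j = i
        while j + 1 < n and data[j + 1][0] == data[i][0]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = mid
        t = j - i + 1
        ties += t ** 3 - t
        i = j + 1
    rank_sums = [0.0] * len(groups)
    for r, (_, g) in zip(ranks, data):
        rank_sums[g] += r
    h = 12 / (n * (n + 1)) * sum(
        rs * rs / len(grp) for rs, grp in zip(rank_sums, groups)
    ) - 3 * (n + 1)
    return h / (1 - ties / (n ** 3 - n))    # tie correction

h = kruskal_wallis_h([2.9, 3.0, 2.5], [3.8, 2.7, 4.0], [2.8, 3.4, 3.7])
```

    Two groups with identical values receive equal mean ranks and an H of zero, while separated groups push H upward.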

  14. Development and Validation of a Brief Version of the Dyadic Adjustment Scale With a Nonparametric Item Analysis Model

    ERIC Educational Resources Information Center

    Sabourin, Stephane; Valois, Pierre; Lussier, Yvan

    2005-01-01

    The main purpose of the current research was to develop an abbreviated form of the Dyadic Adjustment Scale (DAS) with nonparametric item response theory. The authors conducted 5 studies, with a total participation of 8,256 married or cohabiting individuals. Results showed that the item characteristic curves behaved in a monotonically increasing…

  15. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…

  16. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  17. Estimation of spline function in nonparametric path analysis based on penalized weighted least square (PWLS)

    NASA Astrophysics Data System (ADS)

    Fernandes, Adji Achmad Rinaldo; Solimun, Arisoesilaningsih, Endang

    2017-12-01

    The aim of this research is to estimate the spline in path analysis based on nonparametric regression using the Penalized Weighted Least Square (PWLS) approach. The approach used is a Reproducing Kernel Hilbert Space on a Sobolev space. The nonparametric path analysis model is $y_{1i} = f_{1.1}(x_{1i}) + \varepsilon_{1i}$; $y_{2i} = f_{1.2}(x_{1i}) + f_{2.2}(y_{1i}) + \varepsilon_{2i}$, $i = 1, 2, \ldots, n$. The nonparametric path analysis meeting the PWLS minimisation criterion $\min_{f_{w.k} \in W_2^m[a_{w.k}, b_{w.k}],\ k = 1, 2} \left\{ (2n)^{-1} (\tilde{y} - \tilde{f})^{T} \Sigma^{-1} (\tilde{y} - \tilde{f}) + \sum_{k=1}^{2} \sum_{w=1}^{2} \lambda_{w.k} \int_{a_{w.k}}^{b_{w.k}} \left[ f_{w.k}^{(m)}(x_i) \right]^2 dx_i \right\}$ is $\hat{\tilde{f}} = A\tilde{y}$ with $A = T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} + V_1 U_1^{-1} \Sigma^{-1} \left[ I - T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} \right] + T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} + V_2 U_2^{-1} \Sigma^{-1} \left[ I - T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} \right]$.

  18. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    NASA Astrophysics Data System (ADS)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting, and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting, with the bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) information criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
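    The functional kernel estimator described above generalises the scalar Nadaraya-Watson estimator, which is easy to sketch with a quadratic (Epanechnikov-type) kernel. The toy example below is a scalar stand-in, not the NPFDA functional estimator: the kernel is symmetric (the study uses an asymmetrical variant) and the bandwidth is fixed rather than cross-validated.

```python
# Nadaraya-Watson kernel regression with a quadratic kernel on toy data.
# A scalar stand-in for the functional kernel estimator in the study;
# bandwidth h is fixed here instead of chosen by cross-validation.
import numpy as np

def nw_estimate(x_train, y_train, x_new, h=1.0):
    """Kernel-weighted local average of y_train at each point of x_new."""
    k = lambda u: np.maximum(1 - u ** 2, 0.0)      # quadratic kernel
    w = k((x_new[:, None] - x_train[None, :]) / h)
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.1, size=200)    # noisy signal
grid = np.linspace(1, 9, 50)
pred = nw_estimate(x, y, grid)
rmse = float(np.sqrt(np.mean((pred - np.sin(grid)) ** 2)))
```

    The same idea carries over to curves by replacing the scalar distance `x_new - x_train` with a semi-metric between functional observations, as the abstract describes.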

  19. Determinants of Healthcare Expenditure in Economic Cooperation Organization (ECO) Countries: Evidence from Panel Cointegration Tests.

    PubMed

    Samadi, Alihussein; Homaie Rad, Enayatollah

    2013-06-01

    Over the last decade there has been an increase in healthcare expenditures while at the same time the inequity in the distribution of resources has grown. These two issues have urged researchers to review the determinants of healthcare expenditures. In this study, we surveyed the determinants of health expenditures in Economic Cooperation Organization (ECO) countries. We used panel data econometrics methods for the purpose of this research. For the long-term analysis, we used the Pesaran cross-sectional dependency test followed by panel unit root tests to show first whether the variables were stationary or not. After confirming that the variables were non-stationary, we used the Westerlund panel cointegration test to show whether long-term relationships exist between the variables. Finally, we estimated the model with the Continuous-Updated Fully Modified (CUP-FM) estimator. For the short-term analysis, we used the Fixed Effects (FE) estimator to estimate the model. A long-term relationship was found between health expenditures per capita and GDP per capita, the proportion of population below 15 and above 65 years old, number of physicians, and urbanisation. Besides, all the variables had short-term relationships with health expenditures, except for the proportion of population above 65 years old. The coefficient of GDP was below 1 in the model. Therefore, health counts as a necessary good in ECO countries and governments must pay due attention to the equal distribution of health services in all regions of the country.

  20. Determinants of Healthcare Expenditure in Economic Cooperation Organization (ECO) Countries: Evidence from Panel Cointegration Tests

    PubMed Central

    Samadi, Alihussein; Homaie Rad, Enayatollah

    2013-01-01

    Background: Over the last decade there has been an increase in healthcare expenditures while at the same time the inequity in the distribution of resources has grown. These two issues have urged researchers to review the determinants of healthcare expenditures. In this study, we surveyed the determinants of health expenditures in Economic Cooperation Organization (ECO) countries. Methods: We used panel data econometrics methods for the purpose of this research. For the long-term analysis, we used the Pesaran cross-sectional dependency test followed by panel unit root tests to show first whether the variables were stationary or not. After confirming that the variables were non-stationary, we used the Westerlund panel cointegration test to show whether long-term relationships exist between the variables. Finally, we estimated the model with the Continuous-Updated Fully Modified (CUP-FM) estimator. For the short-term analysis, we used the Fixed Effects (FE) estimator to estimate the model. Results: A long-term relationship was found between health expenditures per capita and GDP per capita, the proportion of population below 15 and above 65 years old, number of physicians, and urbanisation. Besides, all the variables had short-term relationships with health expenditures, except for the proportion of population above 65 years old. Conclusion: The coefficient of GDP was below 1 in the model. Therefore, health counts as a necessary good in ECO countries and governments must pay due attention to the equal distribution of health services in all regions of the country. PMID:24596838

  1. Role of the RS1 sequence of the cholera vibrio in amplification of the segment of plasmid DNA carrying the gene of resistance to tetracycline and the genes of cholera toxin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fil'kova, S.L.; Il'ina, T.S.; Gintsburg, A.L.

    1988-11-01

    The hybrid plasmid pCO107, representing cointegrate 14(2)-5(2) of two plasmids, an F-derivative (pOX38) and a pBR322-derivative (pCT105) with an RS1 sequence of the cholera vibrio cloned in its makeup, contains two copies of RS1 at the sites of union of the two plasmids. Using a tetracycline resistance marker (Tc(R)) of the plasmid pCT105, clones were isolated which have an elevated level of resistance to tetracycline (a 4- to 30-fold increase). Restriction analysis and Southern blot hybridization showed that the increase in the level of resistance to tetracycline is associated with the amplification of the pCT105 portion of the cointegrate, and that the process of amplification is governed by the presence of direct repeats of the RS1 sequence at its ends. The increase in the number of copies of the pCT105 segment, which contains the genes of cholera toxin (vct), is accompanied by an increase in toxin production.

  2. Nonparametric bootstrap analysis with applications to demographic effects in demand functions.

    PubMed

    Gozalo, P L

    1997-12-01

    "A new bootstrap proposal, labeled smooth conditional moment (SCM) bootstrap, is introduced for independent but not necessarily identically distributed data, where the classical bootstrap procedure fails.... A good example of the benefits of using nonparametric and bootstrap methods is the area of empirical demand analysis. In particular, we will be concerned with their application to the study of two important topics: what are the most relevant effects of household demographic variables on demand behavior, and to what extent present parametric specifications capture these effects." (excerpt)
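    As a minimal illustration of the kind of resampling the record above discusses, the sketch below builds a pairs (case) bootstrap confidence interval for a regression slope on hypothetical heteroscedastic demand data; it is not the SCM bootstrap itself, which smooths conditional moments rather than resampling pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical demand data: expenditure share vs. household size, with
# error variance growing with household size (heteroscedastic).
household = rng.uniform(1, 6, size=n)
share = 0.3 + 0.1 * household + 0.05 * household * rng.normal(size=n)

def ols_slope(x, y):
    """OLS slope via centered normal equations."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

# Pairs (case) bootstrap: resample (x, y) pairs together, which keeps
# the interval valid without assuming identically distributed errors.
boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols_slope(household[idx], share[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)
```

    The percentile interval requires no variance formula, which is why bootstrap methods pair naturally with nonparametric demand analysis.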

  3. The dynamic relationship between Bursa Malaysia composite index and macroeconomic variables

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Rose, Farid Zamani Che; Rahman, Rosmanjawati Abd.

    2017-08-01

    This study investigates the long run and short run relationships between the Bursa Malaysia Composite Index (KLCI) and nine macroeconomic variables in a VAR/VECM framework. After a preliminary regression analysis, seven of the nine macroeconomic variables are chosen for further analysis. The Johansen-Juselius cointegration and Vector Error Correction Model (VECM) techniques indicate that long run relationships exist between the seven macroeconomic variables and the KLCI. Meanwhile, the Granger causality test shows a bidirectional relationship between the KLCI and the oil price. Furthermore, after 12 months the shock on the KLCI is explained by innovations of the seven macroeconomic variables. This indicates the close relationship between macroeconomic variables and the KLCI.

  4. Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.

    PubMed

    Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D

    2016-10-01

    This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.

  5. Application of Semiparametric Spline Regression Model in Analyzing Factors that Influence Population Density in Central Java

    NASA Astrophysics Data System (ADS)

    Sumantari, Y. D.; Slamet, I.; Sugiyanto

    2017-06-01

    Semiparametric regression is a statistical analysis method that combines parametric and nonparametric regression. There are various approach techniques in nonparametric regression; one of them is the spline. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of parametric and nonparametric components. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The results show that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.

  6. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    PubMed

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data when studying their association with disease and health.
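    A rank correlation is a simple relative of the generalized correlation coefficient described above; the sketch below (simulated expression values, not the paper's statistic) shows how a rank-based measure captures a nonlinear but monotone time-course trend without a parametric model:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
timepoints = np.arange(8)
# Simulated expression profile: nonlinear but monotone response over time.
expression = np.log1p(timepoints) + rng.normal(scale=0.1, size=8)

# Spearman's rho correlates ranks, so any monotone trend scores highly
# regardless of its functional form.
rho, pvalue = spearmanr(timepoints, expression)
print(rho)
```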

  7. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, and increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. TRAN-STAT: statistics for environmental transuranic studies, July 1978, Number 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This issue is concerned with nonparametric procedures for (1) estimating the central tendency of a population, (2) describing data sets through estimating percentiles, (3) estimating confidence limits for the median and other percentiles, (4) estimating tolerance limits and associated numbers of samples, and (5) tests of significance and associated procedures for a variety of testing situations (counterparts to t-tests and analysis of variance). Some characteristics of several nonparametric tests are illustrated using the NAEG 241Am aliquot data presented and discussed in the April issue of TRAN-STAT. Some of the statistical terms used here are defined in a glossary. The reference list also includes short descriptions of nonparametric books. 31 references, 3 figures, 1 table.
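    Item (3) above — confidence limits for the median — admits a fully nonparametric construction from order statistics by inverting the sign test. A minimal sketch, on simulated skewed data rather than the NAEG measurements:

```python
import numpy as np
from scipy.stats import binom

def median_ci(x, conf=0.95):
    """Distribution-free CI for the median from order statistics,
    obtained by inverting the sign test (binomial counts)."""
    x = np.sort(np.asarray(x))
    n = len(x)
    # Smallest order-statistic index k with binomial tail mass >= (1-conf)/2.
    k = int(binom.ppf((1 - conf) / 2, n, 0.5))
    return x[k], x[n - 1 - k]

rng = np.random.default_rng(4)
sample = rng.lognormal(size=50)   # skewed data, as in aliquot measurements
lo, hi = median_ci(sample)
print(lo, hi)
```

    No distributional assumption enters: the interval's coverage rests only on the binomial behavior of counts above and below the median.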

  9. Cointegration and causal linkages in fertilizer markets across different regimes

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2017-04-01

    Cointegration and causal linkages among five different fertilizer markets are investigated during low and high market regimes. The database includes prices of rock phosphate (RP), triple super phosphate (TSP), diammonium phosphate (DAP), urea, and potassium chloride (PC). It is found that fertilizer markets are closely linked to each other during both low and high regimes, and particularly during the high regime (after the 2007 international financial crisis). In addition, there is no evidence of a bidirectional linear relationship between markets during the low and high regime time periods; all significant linkages are unidirectional. Moreover, some causality effects emerged during the high regime. Finally, the effect of an impulse during the high regime time period persists longer and is stronger than the effect of an impulse during the low regime time period (before the 2007 international financial crisis).

  10. Efficient and stable transformation of hop (Humulus lupulus L.) var. Eroica by particle bombardment.

    PubMed

    Batista, Dora; Fonseca, Sandra; Serrazina, Susana; Figueiredo, Andreia; Pais, Maria Salomé

    2008-07-01

    To the best of our knowledge, this is the first accurate and reliable protocol for hop (Humulus lupulus L.) genetic transformation using particle bombardment. Based on the highly productive regeneration system we previously developed for hop var. Eroica, two efficient transformation protocols were established using petioles and green organogenic nodular clusters (GONCs) bombarded with the gusA reporter and hpt selectable genes. A total of 36 hygromycin B-resistant (hyg(r)) plants obtained upon continuous selection were successfully transferred to the greenhouse, and a first generation group of transplanted plants was followed through a complete vegetative cycle. PCR analysis showed the presence of one or both transgenes in 25 plants, corresponding to an integration frequency of 69.4% and an overall transformation efficiency of 7.5%. Although all final transformants were GUS negative, the integration frequency of the gusA gene was higher than that of the hpt gene. Petiole-derived transgenic plants showed a higher co-integration rate of 76.9%. Real-time PCR analysis confirmed co-integration in 86% of the plants tested and its stability until the first generation, and identified positive plants amongst those previously assessed as hpt(+) only by conventional PCR. Our results suggest that the integration frequencies presented here, as well as those of others, may have been underestimated, and that PCR results should be interpreted with caution, not only for false positives but also for false negatives. The protocols described here could be very useful for the future introduction of metabolic or resistance traits into hop cultivars, even if slight modifications are needed for other genotypes.

  11. The impact of foreign direct investment on CO2 emissions in Turkey: new evidence from cointegration and bootstrap causality analysis.

    PubMed

    Koçak, Emrah; Şarkgüneşi, Aykut

    2018-01-01

    The pollution haven hypothesis (PHH), under which foreign direct investment raises the pollution level in the hosting country, has lately been a subject of discussion in the field of economics. Within the scope of this discussion, this study aims to examine the potential impact of foreign direct investments on CO2 emissions in Turkey over the 1974-2013 period using the environmental Kuznets curve (EKC) model. For this purpose, the Maki (Econ Model 29(5):2011-2015, 2012) structural break cointegration test, the Stock and Watson (Econometrica 61:783-820, 1993) dynamic ordinary least squares estimator (DOLS), and the Hacker and Hatemi-J (J Econ Stud 39(2):144-160, 2012) bootstrap causality test are used. The results indicate the existence of a long-term equilibrium relationship between FDI, economic growth, energy usage, and CO2 emissions. Within this relationship, for Turkey, (1) the impact of FDI on CO2 emissions is positive, which shows that the PHH is valid in Turkey; (2) the relationship is not one-way, as changes in CO2 emissions also affect FDI inflows; and (3) the results also provide evidence for the EKC hypothesis in Turkey. Within the frame of these findings, the study proposes several policies and presents various suggestions.

  12. Dynamic relationship between CO2 emissions, energy consumption and economic growth in three North African countries

    NASA Astrophysics Data System (ADS)

    Kais, Saidi; Ben Mbarek, Mounir

    2017-10-01

    This paper investigated the causal relationship between energy consumption (EC), carbon dioxide (CO2) emissions and economic growth for three selected North African countries, using a panel cointegration analysis on data for 1980-2012. Recently developed panel unit root and cointegration tests are applied, and a panel Vector Error Correction Model is used to test for Granger causality. The conservation hypothesis is supported: the short run panel results show a unidirectional relationship from economic growth to EC. In addition, there is unidirectional causality running from economic growth to CO2 emissions, and a unidirectional relationship from EC to CO2 emissions is detected. The findings show a strong interdependence between EC and economic growth in the long run: the level of economic activity and EC mutually influence each other, in that a high level of economic growth leads to a high level of EC and vice versa. This study opens up new insights for policy-makers to design comprehensive economic, energy and environmental policy to keep the economy green and the environment sustainable, implying that these three variables could play an important role in the adjustment process as the system changes from the long run equilibrium.

  13. Nonlinear joint dynamics between prices of crude oil and refined products

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Ma, Guofeng; Liu, Guangsheng

    2015-02-01

    In this paper, we investigate the relationships between crude oil and refined product prices. We find that nonlinear correlations are stronger in the long term than in the short term. Crude oil and product prices are cointegrated, and the financial crisis in 2007-2008 caused a structural break in the cointegrating relationship. Moreover, in contrast to the findings of most studies, we reveal that the relationships are almost symmetric based on a threshold error correction model; the so-called 'asymmetric relationships' are caused by outliers and the financial crisis. Most of the time, crude oil prices play the major role in the adjustment process towards the long-term equilibrium. However, refined product prices dominated crude oil prices during the period of the financial crisis. Important policy and risk management implications can be learned from these empirical findings.

  14. The influence of renewable and non-renewable energy consumption and real income on CO2 emissions in the USA: evidence from structural break tests.

    PubMed

    Dogan, Eyup; Ozturk, Ilhan

    2017-04-01

    The objective of this study is to explore the influence of real income (GDP), renewable energy consumption and non-renewable energy consumption on carbon dioxide (CO2) emissions for the United States of America (USA) in the environmental Kuznets curve (EKC) model for the period 1980-2014. The Zivot-Andrews unit root test with a structural break and the Clemente-Montanes-Reyes unit root test with a structural break report that the analyzed variables become stationary at first differences. The Gregory-Hansen cointegration test with a structural break and the bounds testing for cointegration in the presence of a structural break show that CO2 emissions, real income, quadratic real income, and renewable and non-renewable energy consumption are cointegrated. The long-run estimates obtained from the ARDL model indicate that increases in renewable energy consumption mitigate environmental degradation whereas increases in non-renewable energy consumption contribute to CO2 emissions. In addition, the EKC hypothesis is not valid for the USA. Since we use time-series econometric approaches that account for structural breaks in the data, the findings of this study are robust and reliable. The US government is advised to put more weight on renewable sources in the energy mix, to support and encourage the use and adoption of renewable energy and clean technologies, and to increase public awareness of renewable energy for lower levels of emissions.

  15. Linear and non-linear impact of Internet usage and financial deepening on electricity consumption for Turkey: empirical evidence from asymmetric causality.

    PubMed

    Faisal, Faisal; Tursoy, Turgut; Berk, Niyazi

    2018-04-01

    This study investigates the relationship between Internet usage, financial development, economic growth, capital and electricity consumption using quarterly data from 1993Q1 to 2014Q4. The integration order of the series is analysed using the structural break unit root test. The ARDL bounds test for cointegration, in addition to the Bayer-Hanck (2013) combined cointegration test, is applied to analyse the existence of cointegration among the variables. The study found strong evidence of a long-run relationship between the variables. The long-run results under the ARDL framework confirm the existence of an inverted U-shaped relationship between financial development and electricity consumption, not only in the long run, but also in the short run. The study also confirms the existence of a U-shaped relationship between Internet usage and electricity consumption; however, the effect is insignificant. Additionally, the influence of trade, capital and economic growth is examined in both the long run and short run (ARDL-ECM). Finally, the results of asymmetric causality suggest that a positive shock in electricity consumption has a positive causal impact on Internet usage. The authors recommend that the Turkish Government should direct financial institutions to moderate the investment in the ICT sector by advancing credits at lower cost for purchasing energy-efficient technologies. In doing so, the Turkish Government can increase productivity in order to achieve sustainable growth, while simultaneously reducing emissions to improve environmental quality.

  16. Modeling of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Ainuddeen, Nasihah Rasyiqah; Ismail, Hamizun; Umor, Mohd Rozi

    2013-04-01

    This study was conducted to identify the main factors that contribute to gold production and hence determine the factors that affect the development of the mining industry in Malaysia. An econometric approach was used: cointegration analysis among the factors to determine the existence of long term relationships between gold prices, the number of gold mines, the number of workers in gold mines and gold production. The study continued with Granger causality analysis to determine the relationships between the factors and gold production. The results show that there are long term relationships between price, gold production and the number of employees. Granger causality analysis shows only a one-way relationship between the number of employees and gold production in Malaysia, and between the number of gold mines and gold production.

  17. The Breslow estimator of the nonparametric baseline survivor function in Cox's regression model: some heuristics.

    PubMed

    Hanley, James A

    2008-01-01

    Most survival analysis textbooks explain how the hazard ratio parameters in Cox's life table regression model are estimated. Fewer explain how the components of the nonparametric baseline survivor function are derived. Those that do often relegate the explanation to an "advanced" section and merely present the components as algebraic or iterative solutions to estimating equations. None comment on the structure of these estimators. This note brings out a heuristic representation that may help to de-mystify the structure.
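    The Breslow estimator discussed above has a compact closed form once the Cox coefficients are fixed; the following numpy sketch (assuming no tied event times and a pre-estimated beta) makes its structure explicit:

```python
import numpy as np

def breslow_cumhaz(time, event, xb):
    """Breslow estimate of the cumulative baseline hazard in a Cox model,
    assuming no tied event times and a fixed, pre-estimated beta.

    time  : observed times
    event : 1 for an observed failure, 0 for censoring
    xb    : linear predictor x'beta per subject
    """
    order = np.argsort(time)
    time, event, risk = time[order], event[order], np.exp(xb[order])
    # Sum of exp(x'beta) over everyone still at risk at each time point.
    risk_set = np.cumsum(risk[::-1])[::-1]
    event_times = time[event == 1]
    increments = 1.0 / risk_set[event == 1]
    return event_times, np.cumsum(increments)

# With beta = 0 the Breslow estimator reduces to Nelson-Aalen:
# increments 1/4, 1/3, 1/1 at the three event times.
t = np.array([2.0, 3.0, 5.0, 7.0])
d = np.array([1, 1, 0, 1])
event_times, H0 = breslow_cumhaz(t, d, np.zeros(4))
print(H0)
```

    The baseline survivor function then follows as S0(t) = exp(-H0(t)), which is the structure the note above sets out to de-mystify.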

  18. A multi-instrument non-parametric reconstruction of the electron pressure profile in the galaxy cluster CLJ1226.9+3332

    NASA Astrophysics Data System (ADS)

    Romero, C.; McWilliam, M.; Macías-Pérez, J.-F.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; de Petris, M.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2018-04-01

    Context: In the past decade, sensitive, resolved Sunyaev-Zel'dovich (SZ) studies of galaxy clusters have become common. Whereas many previous SZ studies have parameterized the pressure profiles of galaxy clusters, non-parametric reconstructions will provide insights into the thermodynamic state of the intracluster medium. Aims: We seek to recover the non-parametric pressure profiles of the high redshift (z = 0.89) galaxy cluster CLJ 1226.9+3332 as inferred from SZ data from the MUSTANG, NIKA, Bolocam, and Planck instruments, which all probe different angular scales. Methods: Our non-parametric algorithm makes use of logarithmic interpolation, which under the assumption of ellipsoidal symmetry is analytically integrable. For MUSTANG, NIKA, and Bolocam we derive a non-parametric pressure profile independently and find good agreement among the instruments. In particular, we find that the non-parametric profiles are consistent with a fitted generalized Navarro-Frenk-White (gNFW) profile. Given the ability of Planck to constrain the total signal, we include a prior on the integrated Compton Y parameter as determined by Planck. Results: For a given instrument, constraints on the pressure profile diminish rapidly beyond the field of view. The overlap in spatial scales probed by these four datasets is therefore critical in checking for consistency between instruments. By using multiple instruments, our analysis of CLJ 1226.9+3332 covers a large radial range, from the central regions to the cluster outskirts: 0.05 R500 < r < 1.1 R500. This is a wider range of spatial scales than is typically recovered by SZ instruments. Similar analyses will be possible with the new generation of SZ instruments such as NIKA2 and MUSTANG2.

  19. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    PubMed

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.

  20. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification

    PubMed Central

    Feng, Yang; Jiang, Jiancheng; Tong, Xin

    2015-01-01

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing. PMID:27185970
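    The FANS transformation described in the two records above can be sketched in a few lines: estimate each feature's class-conditional densities, take log ratios, and feed the augmented features to a logistic regression. The full method adds sample splitting and penalization, which this brevity-first sketch omits; the data below are simulated, chosen so classes differ in scale rather than location:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 400
# Two classes that differ in scale, not location: a linear classifier on
# raw features does poorly, but marginal density ratios separate them.
X = np.vstack([rng.normal(0, 1, size=(n, 2)), rng.normal(0, 2, size=(n, 2))])
y = np.r_[np.zeros(n), np.ones(n)]

def density_ratio_features(X_train, y_train, X_eval):
    """Replace each feature by the log ratio of class-conditional
    kernel density estimates, in the spirit of the FANS transformation."""
    cols = []
    for j in range(X_train.shape[1]):
        f1 = gaussian_kde(X_train[y_train == 1, j])
        f0 = gaussian_kde(X_train[y_train == 0, j])
        cols.append(np.log(f1(X_eval[:, j]) + 1e-12)
                    - np.log(f0(X_eval[:, j]) + 1e-12))
    return np.column_stack(cols)

Z = density_ratio_features(X, y, X)
acc = LogisticRegression().fit(Z, y).score(Z, y)
print(acc)
```

    The augmented features carry the nonlinear class information, so the linear logistic model achieves well-above-chance accuracy on a problem where raw linear separation is impossible.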

  1. Population Explosions of Tiger Moth Lead to Lepidopterism Mimicking Infectious Fever Outbreaks

    PubMed Central

    Wills, Pallara Janardhanan; Anjana, Mohan; Nitin, Mohan; Varun, Raghuveeran; Sachidanandan, Parayil; Jacob, Tharaniyil Mani; Lilly, Madhavan; Thampan, Raghava Varman; Karthikeya Varma, Koyikkal

    2016-01-01

    Lepidopterism is a disease caused by the urticating scales and toxic fluids of adult moths, butterflies or their caterpillars. The resulting cutaneous eruptions and systemic problems progress to clinical complications, sometimes leading to death. A high incidence of fever epidemics was associated with massive outbreaks of adult populations of the tiger moth Asota caricae during the monsoon in Kerala, India. A significant number of monsoon related fevers characteristic of lepidopterism were erroneously treated as infectious fevers due to lookalike symptoms. To diagnose tiger moth lepidopterism, we conducted immunoblots for tiger moth specific IgE in fever patients' sera. We selected a cohort of patients (n = 155) with hallmark symptoms of infectious fevers who nevertheless tested negative for infectious fevers. In these cases, total IgE was elevated and tested positive (78.6%) for tiger moth specific IgE allergens. Chemical characterization of caterpillar and adult moth fluids was performed by HPLC and GC-MS analysis, and structural identification of moth scales was performed by SEM analysis. The body fluids and chitinous scales were found to be highly toxic and inflammatory in nature. To replicate the disease in an experimental model, Wistar rats were exposed to live tiger moths in a dose-dependent manner, and clinico-pathological complications similar to those reported during the fever epidemics were observed. Further, to link larval abundance and fever epidemics, we conducted a cointegration test for the period 2009 to 2012, and the physical presence of the tiger moths was found to be cointegrated with the fever epidemics. In conclusion, our experiments demonstrate that inhalation of aerosols containing tiger moth fluids, scales and hairs causes systemic reactions that can be fatal to humans. All this evidence points to the possible involvement of tiger moth disease as a major cause of the massive and fatal fever epidemics observed in Kerala. PMID:27073878

  2. Packaged integrated opto-fluidic solution for harmful fluid analysis

    NASA Astrophysics Data System (ADS)

    Allenet, T.; Bucci, D.; Geoffray, F.; Canto, F.; Couston, L.; Jardinier, E.; Broquin, J.-E.

    2016-02-01

    Advances in nuclear fuel reprocessing have led to a surging need for novel chemical analysis tools. In this paper, we present a packaged lab-on-chip solution with co-integration of optical and micro-fluidic functions on a glass substrate. A chip was built and packaged to obtain light/fluid interaction, so that the entire device can make spectral measurements based on the principle of absorption spectroscopy. The interaction between the analyte solution and light takes place at the boundary between a waveguide and a fluidic micro-channel, thanks to the evanescent part of the waveguide's guided mode that propagates into the fluid. The waveguide was obtained via ion exchange on a glass wafer, and the input and output of the waveguides were pigtailed with standard single mode optical fibers. The micro-scale fluid channel was fabricated with a lithography procedure and hydrofluoric acid wet etching, resulting in a 150+/-8 μm deep channel. The channel was designed with fluidic accesses, in order for the chip to be compatible with commercial fluidic interfaces/chip mounts. This allows analyte fluid in external capillaries to be pumped into the device through micro-pipes, resulting in a fully packaged chip. In order to produce this co-integrated structure, two substrates were bonded. A study of direct glass wafer-to-wafer molecular bonding was carried out to improve detector sturdiness and durability, and put forward a bonding protocol with a bonding surface energy of γ > 2.0 J.m-2. Detector viability was shown by obtaining optical mode measurements and detecting traces of 1.2 M neodymium (Nd) solute in 12+/-1 μL of 0.01 M, pH 2 nitric acid (HNO3) solvent, via an absorption peak specific to neodymium at 795 nm.

  3. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    NASA Astrophysics Data System (ADS)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

    Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely, for various reasons; such data are called censored. Moreover, several models for survival analysis require assumptions. One of the approaches in survival analysis is the nonparametric approach, which imposes more relaxed assumptions. In this research, the nonparametric approach employed is the Multivariate Adaptive Regression Spline (MARS). This study aims to measure the performance of private university lecturers. The survival time in this study is the duration needed by a lecturer to obtain their professional certificate. The results show that research activity is a significant factor, along with developing course materials, good publication in international or national journals, and participation in research collaboration.
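    MARS builds piecewise-linear models from hinge (truncated linear) basis functions. The sketch below fixes a single knot and fits by least squares, whereas real MARS software searches knot locations and basis products adaptively in forward/backward passes; the data are simulated, not the lecturer records:

```python
import numpy as np

def hinge(x, knot):
    """Truncated linear basis function max(0, x - knot)."""
    return np.maximum(0.0, x - knot)

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, size=200)
# Piecewise-linear truth with a kink at x = 4.
y = 2.0 + 1.5 * hinge(x, 4.0) + rng.normal(scale=0.3, size=200)

# Least squares on a fixed hinge basis; MARS would choose the knot
# and any interaction terms automatically.
B = np.column_stack([np.ones_like(x), hinge(x, 4.0)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(coef)
```

    The recovered coefficients approximate the intercept and post-knot slope, showing how a hinge basis captures a kink no single linear term can.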

  4. Is health care a luxury or a necessity or both? Evidence from Turkey.

    PubMed

    Yavuz, Nilgun Cil; Yilanci, Veli; Ozturk, Zehra Ayca

    2013-02-01

    This study investigates the effect of per capita income on per capita health expenditures in Turkey over the period 1975-2007 by using the ARDL bounds test approach to cointegration, considering both demand- and supply-side variables. Since we reject the null hypothesis that there is no cointegration among the series, we estimate long-run and short-run elasticities. The results show that while income has no effect on health expenditures in the long run, health care is a necessity good in the short run: a 1% increase in per capita income creates a 0.75% increase in per capita health expenditures. On the other hand, by examining the coefficients of the demand- and supply-side variables, we found that average length of stay and number of physicians have a negative effect, the percentage of older people has a positive effect, and the infant mortality rate has no effect on health expenditures in both the short and long run.
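A full ARDL bounds test relies on tabulated critical values, but the underlying idea - cointegrated series leave stationary regression residuals - can be sketched with a simplified Engle-Granger-style check on synthetic data (illustrative only, not the authors' procedure):

```python
import numpy as np

def df_tstat(e):
    """Dickey-Fuller t-statistic for the regression de_t = rho * e_{t-1} + u_t.
    Strongly negative values indicate mean-reverting (stationary) residuals."""
    de, lag = np.diff(e), e[:-1]
    rho = lag @ de / (lag @ lag)
    u = de - rho * lag
    se = np.sqrt((u @ u) / (len(u) - 1) / (lag @ lag))
    return rho / se

def residuals(y, x):
    """Residuals from OLS of y on x with intercept (Engle-Granger first step)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(42)
n = 500
x = np.cumsum(rng.normal(size=n))        # random walk, I(1)
y_coint = 2.0 * x + rng.normal(size=n)   # cointegrated with x
y_indep = np.cumsum(rng.normal(size=n))  # independent random walk

t_coint = df_tstat(residuals(y_coint, x))
t_indep = df_tstat(residuals(y_indep, x))
print(t_coint, t_indep)  # cointegrated residuals mean-revert; the independent pair's do not
```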

  5. Globalisation and its effect on pollution in Malaysia: the role of Trans-Pacific Partnership (TPP) agreement.

    PubMed

    Solarin, Sakiru Adebola; Al-Mulali, Usama; Sahu, Pritish Kumar

    2017-10-01

    The main objective of this study is to investigate the influence of globalisation (the Trans-Pacific Partnership (TPP) agreement in particular) on air pollution in Malaysia. To achieve this goal, the Autoregressive Distributed Lag (ARDL) model, the Johansen cointegration test and fully modified ordinary least squares (FMOLS) methods are utilised. CO2 emission is used as an indicator of pollution, while GDP per capita and urbanisation serve as its other determinants. In addition, this study uses Malaysia's total trade with 10 TPP members as an indicator of globalisation and analyses its effect on CO2 emission in Malaysia. The outcome of this research shows that the variables are cointegrated. Additionally, GDP per capita, urbanisation and trade between Malaysia and its 10 TPP partners have a positive impact on CO2 emissions in general. Based on the outcome of this research, important policy implications are provided for the investigated country.

  6. Identification and estimation of survivor average causal effects.

    PubMed

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  7. Identification and estimation of survivor average causal effects

    PubMed Central

    Tchetgen, Eric J Tchetgen

    2014-01-01

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

  8. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of the data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limit computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation or interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
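The K-M "flip" for left-censored data mentioned above can be sketched in Python (an illustrative numpy version, not the S-language routines the record describes): left-censored values are subtracted from a constant larger than the maximum, turning them into right-censored survival times, and the product-limit estimator is then applied.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimator: S(t) at each distinct event time.
    event=1 marks an observed value, event=0 a censored one."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    times, surv, s = [], [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# Left-censored concentrations: value known, or only "< detection limit"
conc = np.array([0.5, 1.0, 2.0, 3.0])
detected = np.array([1, 1, 0, 1])   # the 2.0 entry is "<2.0", i.e. left-censored
flip = conc.max() + 1.0             # K-M flip: t = M - concentration
t, S = kaplan_meier(flip - conc, detected)
print(t, S)
```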

  9. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.

  10. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    PubMed

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
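The core idea - recovering a nonparametric B(t) in dx/dt = A(t)x + B(t) from an observed trajectory - can be sketched for the scalar case. This illustration uses a Fourier basis with a ridge penalty in place of the paper's penalized splines, and a single noiseless synthetic trajectory rather than a mixed-effects population fit:

```python
import numpy as np

# Scalar version of dx/dt = A(t)x + B(t): recover the nonparametric B(t)
# from an observed trajectory via B(t) ~ dx/dt - A(t)x.
t = np.linspace(0.0, 6.0, 2001)
dt = t[1] - t[0]
A = -0.5                 # known linear part
B_true = np.sin(t)       # nonparametric forcing to be recovered

# Simulate x(t) by Euler integration
x = np.zeros_like(t)
for i in range(len(t) - 1):
    x[i + 1] = x[i] + dt * (A * x[i] + B_true[i])

# Finite-difference estimate of dx/dt, then target B = dx/dt - A*x
dxdt = np.gradient(x, dt)
target = dxdt - A * x

# Ridge-penalized least squares on a small Fourier basis
basis = np.column_stack([np.ones_like(t)] +
                        [f(k * t) for k in (1, 2, 3) for f in (np.sin, np.cos)])
lam = 1e-8
coef = np.linalg.solve(basis.T @ basis + lam * np.eye(basis.shape[1]),
                       basis.T @ target)
B_hat = basis @ coef
print(float(np.max(np.abs(B_hat - B_true))))  # small discretisation error
```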

  11. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity.

  12. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography

    PubMed Central

    Packham, B; Barnes, G; dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-01-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity. PMID:27203477

  13. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes are extracted from the multi-degree-of-freedom (MDOF) non-parametric system representation in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information about the model.

  14. Spectral analysis method for detecting an element

    DOEpatents

    Blackwood, Larry G [Idaho Falls, ID; Edwards, Andrew J [Idaho Falls, ID; Jewell, James K [Idaho Falls, ID; Reber, Edward L [Idaho Falls, ID; Seabury, Edward H [Idaho Falls, ID

    2008-02-12

    A method for detecting an element is described which includes the steps of providing a gamma-ray spectrum which has a region of interest corresponding with a small amount of an element to be detected; providing nonparametric assumptions about the shape of the gamma-ray spectrum in the region of interest which would indicate the presence of the element to be detected; and applying a statistical test to the shape of the gamma-ray spectrum, based upon the nonparametric assumptions, to detect the small amount of the element to be detected.

  15. Experimental investigation of static ice refrigeration air conditioning system driven by distributed photovoltaic energy system

    NASA Astrophysics Data System (ADS)

    Xu, Y. F.; Li, M.; Luo, X.; Wang, Y. F.; Yu, Q. F.; Hassanien, R. H. E.

    2016-08-01

    A static ice refrigeration air conditioning system (SIRACS) driven by a distributed photovoltaic energy system (DPES) was proposed, and a test experiment was investigated in this paper. Results revealed that the system energy utilization efficiency was low because energy losses were high in the ice-making process of the ice slide maker. Therefore, an immersed evaporator and a co-integrated exchanger were suggested in the system structure optimization analysis, and the system COP was improved by nearly 40%. At the same time, we investigated how the ice thickness and the ice super-cooled temperature changed with time, and the relationship between system COP and ice thickness was obtained.

  16. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information in the data, which may potentially characterize the data similarity better. Kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.
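The paper's regression-in-hyper-RKHS construction is more general, but a simpler, related out-of-sample device - the Nyström extension of a kernel eigen-embedding - conveys the idea: an embedding learned on training points is extended to new points through the kernel function, and extending a training point reproduces its in-sample embedding exactly.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))            # training points
K = rbf(X, X)

# In-sample embedding from the top eigenpairs (uncentred kernel PCA)
w, U = np.linalg.eigh(K)
w, U = w[::-1][:3], U[:, ::-1][:, :3]    # top 3 components
embed_train = U * np.sqrt(w)             # rows: embeddings of training points

def embed_new(Z):
    """Nystrom out-of-sample extension: k(z, X) U diag(1/sqrt(w))."""
    return rbf(Z, X) @ U / np.sqrt(w)

# Extending a training point reproduces its in-sample embedding
print(np.max(np.abs(embed_new(X[:5]) - embed_train[:5])))
```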

  17. Further evidence for the increased power of LOD scores compared with nonparametric methods.

    PubMed

    Durner, M; Vieland, V J; Greenberg, D A

    1999-01-01

    In genetic analysis of diseases in which the underlying model is unknown, "model free" methods - such as affected sib pair (ASP) tests - are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods will outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models, in fact, represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI); (b) ASP methods; and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the increase in type I error due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.

  18. Major strengths and weaknesses of the lod score method.

    PubMed

    Ott, J

    2001-01-01

    Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure for information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.

  19. [Detection of quadratic phase coupling between EEG signal components by nonparametric and parametric methods of bispectral analysis].

    PubMed

    Schmidt, K; Witte, H

    1999-11-01

    Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis, capable of detecting interrelations between individual signal components, has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as real physiological data. We employed standard algorithms available for MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and the discrimination from other peaks.

  20. A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.

    PubMed

    Elliott, Alan C; Hynan, Linda S

    2011-04-01

    The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data come from a suspected non-normal population. The KW omnibus procedure tests whether any differences exist between the groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
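The post hoc idea can be sketched outside SAS; the following numpy version computes the KW statistic and Dunn-style pairwise z statistics (assuming no ties; the macro additionally handles the alpha adjustment and up to 20 groups):

```python
import numpy as np
from itertools import combinations

def kruskal_wallis_dunn(groups):
    """Kruskal-Wallis H plus Dunn pairwise z statistics (no tie correction)."""
    data = np.concatenate(groups)
    n = len(data)
    ranks = np.empty(n)
    ranks[np.argsort(data)] = np.arange(1, n + 1)   # assumes no ties
    sizes = [len(g) for g in groups]
    splits = np.split(ranks, np.cumsum(sizes)[:-1])
    mean_ranks = [r.mean() for r in splits]
    H = 12.0 / (n * (n + 1)) * sum(r.sum() ** 2 / k
                                   for r, k in zip(splits, sizes)) - 3 * (n + 1)
    z = {}
    for i, j in combinations(range(len(groups)), 2):
        se = np.sqrt(n * (n + 1) / 12.0 * (1.0 / sizes[i] + 1.0 / sizes[j]))
        z[(i, j)] = (mean_ranks[i] - mean_ranks[j]) / se
    return H, z

H, z = kruskal_wallis_dunn([np.array([1.0, 2.0, 3.0]),
                            np.array([4.0, 5.0, 6.0]),
                            np.array([7.0, 8.0, 9.0])])
print(H, z[(0, 2)])   # H = 7.2 for this fully separated example
```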

  1. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict with its assumptions, whereas nonparametric regression does not require assumptions about the model form. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must first be determined. In time series regression, the response variable is the value at time t (yt), while the predictor variables are significant lags. One developing approach in nonparametric regression modeling is the Fourier series approach. An advantage of nonparametric regression using Fourier series is its ability to handle data with trigonometric (periodic) patterns. Modeling with Fourier series requires the number of basis terms K, which can be determined by the Generalized Cross-Validation (GCV) method. In inflation modeling for the transportation, communication and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-squared of 99%, whereas multiple linear regression yields an R-squared of 90%.
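GCV selection of the Fourier order K can be sketched as follows (synthetic data; the study's inflation series and its optimal K = 120 are not reproduced here):

```python
import numpy as np

def fourier_design(x, K):
    """Design matrix with an intercept and sin/cos harmonics up to order K."""
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols += [np.sin(k * x), np.cos(k * x)]
    return np.column_stack(cols)

def gcv_fourier(x, y, K_max=10):
    """Pick the harmonic order K by Generalized Cross-Validation:
    GCV(K) = n * RSS / (n - p)^2 with p = 2K + 1 parameters."""
    n, best = len(y), None
    for K in range(1, K_max + 1):
        X = fourier_design(x, K)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        gcv = n * rss / (n - X.shape[1]) ** 2
        if best is None or gcv < best[1]:
            best = (K, gcv)
    return best

rng = np.random.default_rng(7)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(2 * x) + 0.5 * np.cos(3 * x) + 0.1 * rng.normal(size=len(x))
K_best, gcv = gcv_fourier(x, y)
print(K_best)   # at least order 3 is needed to capture the cos(3x) term
```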

  2. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  3. [Linkage analysis of susceptibility loci in 2 target chromosomes in pedigrees with paranoid schizophrenia and undifferentiated schizophrenia].

    PubMed

    Zeng, Li-ping; Hu, Zheng-mao; Mu, Li-li; Mei, Gui-sen; Lu, Xiu-ling; Zheng, Yong-jun; Li, Pei-jian; Zhang, Ying-xue; Pan, Qian; Long, Zhi-gao; Dai, He-ping; Zhang, Zhuo-hua; Xia, Jia-hui; Zhao, Jing-ping; Xia, Kun

    2011-06-01

    To investigate the relationship between susceptibility loci in chromosomes 1q21-25 and 6p21-25 and schizophrenia subtypes in the Chinese population, a genomic scan and parametric and non-parametric analyses were performed on 242 individuals from 36 schizophrenia pedigrees, including 19 paranoid schizophrenia and 17 undifferentiated schizophrenia pedigrees, from Henan province of China, using 5 microsatellite markers in the chromosome region 1q21-25 and 8 microsatellite markers in the chromosome region 6p21-25, which were candidates from previous studies. All affected subjects were diagnosed and typed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revised (DSM-IV-TR; American Psychiatric Association, 2000). All subjects signed informed consent. In chromosome 1, parametric analysis under the dominant inheritance mode of all 36 pedigrees showed that the maximum multi-point heterogeneity LOD (HLOD) score was 1.33 (α = 0.38). The non-parametric analysis and the single-point and multi-point nonparametric linkage (NPL) scores suggested linkage at D1S484, D1S2878, and D1S196. In the 19 paranoid schizophrenia pedigrees, linkage was not observed for any of the 5 markers. In the 17 undifferentiated schizophrenia pedigrees, the multi-point NPL score was 1.60 (P = 0.0367) at D1S484. The single-point NPL score was 1.95 (P = 0.0145) and the multi-point NPL score was 2.39 (P = 0.0041) at D1S2878. Additionally, the multi-point NPL score was 1.74 (P = 0.0255) at D1S196. These same three loci showed suggestive linkage during the integrative analysis of all 36 pedigrees. In chromosome 6, in parametric linkage analysis under dominant and recessive inheritance and in non-parametric linkage analysis of all 36 pedigrees and of the 17 undifferentiated schizophrenia pedigrees, linkage was not observed for any of the 8 markers. In the 19 paranoid schizophrenia pedigrees, parametric analysis showed that under the recessive inheritance mode the maximum single-point HLOD score was 1.26 (α = 0.40) and the multi-point HLOD was 1.12 (α = 0.38) at D6S289 in chromosome 6p23. In non-parametric analysis, the single-point NPL score was 1.52 (P = 0.0402) and the multi-point NPL score was 1.92 (P = 0.0206) at D6S289. Susceptibility genes correlated with the undifferentiated schizophrenia pedigrees (at the D1S484, D1S2878 and D1S196 loci) and with the paranoid schizophrenia pedigrees (at the D6S289 locus) are likely present in chromosome regions 1q23.3 and 1q24.2 and in chromosome region 6p23, respectively.

  4. Mutations in Subunits of the Activating Signal Cointegrator 1 Complex Are Associated with Prenatal Spinal Muscular Atrophy and Congenital Bone Fractures

    PubMed Central

    Knierim, Ellen; Hirata, Hiromi; Wolf, Nicole I.; Morales-Gonzalez, Susanne; Schottmann, Gudrun; Tanaka, Yu; Rudnik-Schöneborn, Sabine; Orgeur, Mickael; Zerres, Klaus; Vogt, Stefanie; van Riesen, Anne; Gill, Esther; Seifert, Franziska; Zwirner, Angelika; Kirschner, Janbernd; Goebel, Hans Hilmar; Hübner, Christoph; Stricker, Sigmar; Meierhofer, David; Stenzel, Werner; Schuelke, Markus

    2016-01-01

    Transcriptional signal cointegrators associate with transcription factors or nuclear receptors and coregulate tissue-specific gene transcription. We report on recessive loss-of-function mutations in two genes (TRIP4 and ASCC1) that encode subunits of the nuclear activating signal cointegrator 1 (ASC-1) complex. We used autozygosity mapping and whole-exome sequencing to search for pathogenic mutations in four families. Affected individuals presented with prenatal-onset spinal muscular atrophy (SMA), multiple congenital contractures (arthrogryposis multiplex congenita), respiratory distress, and congenital bone fractures. We identified homozygous and compound-heterozygous nonsense and frameshift TRIP4 and ASCC1 mutations that led to a truncation or the entire absence of the respective proteins and cosegregated with the disease phenotype. Trip4 and Ascc1 have identical expression patterns in 17.5-day-old mouse embryos with high expression levels in the spinal cord, brain, paraspinal ganglia, thyroid, and submandibular glands. Antisense morpholino-mediated knockdown of either trip4 or ascc1 in zebrafish disrupted the highly patterned and coordinated process of α-motoneuron outgrowth and formation of myotomes and neuromuscular junctions and led to a swimming defect in the larvae. Immunoprecipitation of the ASC-1 complex consistently copurified cysteine and glycine rich protein 1 (CSRP1), a transcriptional cofactor, which is known to be involved in spinal cord regeneration upon injury in adult zebrafish. ASCC1 mutant fibroblasts downregulated genes associated with neurogenesis, neuronal migration, and pathfinding (SERPINF1, DAB1, SEMA3D, SEMA3A), as well as with bone development (TNFRSF11B, RASSF2, STC1). Our findings indicate that the dysfunction of a transcriptional coactivator complex can result in a clinical syndrome affecting the neuromuscular system. PMID:26924529

  5. Nonparametric evaluation of birth cohort trends in disease rates.

    PubMed

    Tarone, R E; Chu, K C

    2000-01-01

    Although interpretation of age-period-cohort analyses is complicated by the non-identifiability of maximum likelihood estimates, changes in the slope of the birth-cohort effect curve are identifiable and have potential aetiologic significance. A nonparametric test for a change in the slope of the birth-cohort trend has been developed. The test is a generalisation of the sign test and is based on permutational distributions. A method for identifying interactions between age and calendar-period effects is also presented. The nonparametric method is shown to be powerful in detecting changes in the slope of the birth-cohort trend, although its power can be reduced considerably by calendar-period patterns of risk. The method identifies a previously unrecognised decrease in the birth-cohort risk of lung-cancer mortality from 1912 to 1919, which appears to reflect a reduction in the initiation of smoking by young men at the beginning of the Great Depression (1930s). The method also detects an interaction between age and calendar period in leukemia mortality rates, reflecting the better response of children to chemotherapy. The proposed nonparametric method provides a data-analytic approach that is a useful adjunct to log-linear Poisson analysis of age-period-cohort models, either in the initial model-building stage or in the final interpretation stage.
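The authors' generalised sign test is not reproduced here, but a generic permutation test for a change in trend slope at a known candidate cohort conveys the flavour: under the null of no slope change, the first differences of the trend are exchangeable.

```python
import numpy as np

def slope_change_pvalue(y, c, n_perm=2000, seed=0):
    """Permutation test for a change in trend slope at index c.
    Statistic: difference in the mean first-difference after vs before c.
    Under no slope change, the first differences are exchangeable."""
    d = np.diff(y)
    stat = d[c:].mean() - d[:c].mean()
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d)
        if abs(p[c:].mean() - p[:c].mean()) >= abs(stat):
            count += 1
    return (count + 1) / (n_perm + 1)

# Synthetic cohort trend: slope 0 for the first 16 cohorts, then slope 2
rng = np.random.default_rng(3)
y = np.concatenate([np.zeros(16), 2.0 * np.arange(1, 16)])
y = y + 0.1 * rng.normal(size=len(y))
p = slope_change_pvalue(y, c=15)
print(p)   # very small: a clear kink in the trend
```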

  6. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data

    PubMed Central

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2012-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, there is often prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while the asymptotic variance remains the same as that of the unguided estimator. We examine the performance of our method via a simulation study and demonstrate it by applying it to a real data set on mergers and acquisitions. PMID:23645976
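The two-step guided fit can be sketched with a linear guide and a Nadaraya-Watson smoother standing in for the paper's generalized additive machinery (synthetic data, identity link):

```python
import numpy as np

def nw_smooth(x, y, x0, h=0.3):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-2, 2, 300))
y = 2.0 * x + 0.5 * np.sin(4.0 * x) + 0.1 * rng.normal(size=len(x))

# Step 1: parametric guide (here a simple linear trend)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
guide = X @ beta

# Step 2: nonparametric smooth of what the guide missed, then add it back
guided_fit = guide + nw_smooth(x, y - guide, x, h=0.15)

truth = 2.0 * x + 0.5 * np.sin(4.0 * x)
mse_guide = float(np.mean((guide - truth) ** 2))
mse_guided = float(np.mean((guided_fit - truth) ** 2))
print(mse_guide, mse_guided)   # the guided fit tracks the truth more closely
```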

  7. Nonparametric analysis of bivariate gap time with competing risks.

    PubMed

    Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng

    2016-09-01

    This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrent disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall's tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring of the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. © 2016, The International Biometric Society.
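
    The modified Kendall's tau proposed in this record additionally corrects for the induced dependent censoring of the second gap time; as a reference point, the classical complete-data tau statistic that it generalises can be sketched in a few lines (the function name is illustrative, not the authors' code):

```python
def kendall_tau(x, y):
    """Classical Kendall's tau-a: (concordant - discordant) / total number of pairs.

    A pair (i, j) is concordant when x and y move in the same direction,
    discordant when they move in opposite directions; ties contribute zero.
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

    Perfectly concordant samples give tau = 1 and perfectly discordant samples give tau = -1; the censored-data version in the record replaces the raw pair counts with inverse-probability-weighted ones.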

  8. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in research on radiation exposure effects. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric in the sense that they are consistent without imposing assumptions on the covariate or error distributions, and they are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  9. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.

    PubMed

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2013-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We evaluate the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions.

  10. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm is developed for optimization. A Monte Carlo simulation study is conducted to assess the performance of RNN. Comparisons with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrate the advantage of the newly proposed procedure.
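
    The RNN pairs a neural network mean structure with the quantile-regression check ("pinball") loss. The loss itself, and the fact that minimising it over a constant recovers the empirical quantile, can be shown with a small sketch (names are illustrative; this is not the authors' RNN implementation):

```python
def pinball_loss(residuals, tau):
    """Quantile check loss: tau * r for r >= 0, (tau - 1) * r otherwise, averaged."""
    return sum(tau * r if r >= 0 else (tau - 1) * r for r in residuals) / len(residuals)

def best_constant(y, tau):
    """Grid-search the sample point that minimises the pinball loss.

    The minimiser is the empirical tau-quantile, which is why plugging this
    loss into a neural network yields conditional quantile estimates.
    """
    return min(y, key=lambda c: pinball_loss([v - c for v in y], tau))
```

    For tau = 0.5 the check loss is half the absolute loss, so the best constant is the median; for tau = 0.9 it is the 90th percentile.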

  11. Bayesian Nonparametric Ordination for the Analysis of Microbial Communities.

    PubMed

    Ren, Boyu; Bacallado, Sergio; Favaro, Stefano; Holmes, Susan; Trippa, Lorenzo

    2017-01-01

    Human microbiome studies use sequencing technologies to measure the abundance of bacterial species or Operational Taxonomic Units (OTUs) in samples of biological material. Typically the data are organized in contingency tables with OTU counts across heterogeneous biological samples. In the microbial ecology community, ordination methods are frequently used to investigate latent factors or clusters that capture and describe variations of OTU counts across biological samples. It remains important to evaluate how uncertainty in estimates of each biological sample's microbial distribution propagates to ordination analyses, including visualization of clusters and projections of biological samples on low dimensional spaces. We propose a Bayesian analysis for dependent distributions to endow frequently used ordinations with estimates of uncertainty. A Bayesian nonparametric prior for dependent normalized random measures is constructed, which is marginally equivalent to the normalized generalized Gamma process, a well-known prior for nonparametric analyses. In our prior, the dependence and similarity between microbial distributions is represented by latent factors that concentrate in a low dimensional space. We use a shrinkage prior to tune the dimensionality of the latent factors. The resulting posterior samples of model parameters can be used to evaluate uncertainty in analyses routinely applied in microbiome studies. Specifically, by combining them with multivariate data analysis techniques we can visualize credible regions in ecological ordination plots. The characteristics of the proposed model are illustrated through a simulation study and applications in two microbiome datasets.
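
    The normalized generalized Gamma process prior used in this record is beyond a short sketch, but the flavour of Bayesian nonparametric priors over discrete distributions can be conveyed by the simpler Dirichlet-process stick-breaking construction (shown for illustration only; it is a special case, not the model in the record):

```python
import random

def stick_breaking(alpha, n_atoms, seed=0):
    """Sample the first n_atoms weights of a Dirichlet-process stick-breaking prior.

    Each step breaks off a Beta(1, alpha) fraction of the remaining stick;
    the weights are random but sum to (almost) one as n_atoms grows.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)   # stick fraction for this atom
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights
```

    Smaller alpha concentrates mass on a few atoms (few clusters); larger alpha spreads it over many, which is the knob such priors give ordination models.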

  12. Testing for purchasing power parity in the long-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity for a group of ASEAN-5 countries over the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that they are non-stationary in levels but stationary in first differences. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
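
    The pre-test in this record, non-stationary in levels but stationary in first differences, can be sketched with a minimal Dickey-Fuller-style regression (no constant, no lag augmentation; the Pedroni panel test itself is considerably more involved, and proper inference requires Dickey-Fuller critical values rather than normal ones):

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in  dy_t = rho * y_{t-1} + e_t.

    Strongly negative values point toward stationarity (rho near -1 means the
    level is mean-reverting); values near zero are consistent with a unit root.
    """
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(42)
random_walk = np.cumsum(rng.standard_normal(500))  # I(1): t-statistic near zero
first_diff = np.diff(random_walk)                  # I(0) after differencing: very negative
```

    Running `df_tstat` on the level and on the first difference reproduces the "non-stationary at levels, stationary at first difference" pattern reported in the record.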

  13. Confidence and self-attribution bias in an artificial stock market.

    PubMed

    Bertella, Mario A; Pires, Felipe R; Rego, Henio H A; Silva, Jonathas N; Vodenska, Irena; Stanley, H Eugene

    2017-01-01

    Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index-both generated by our model-are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant.
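
    A bivariate Granger test of the kind used here reduces to comparing restricted and unrestricted autoregressions with an F-statistic. A one-lag, pure-NumPy sketch on hypothetical simulated data (illustrative only, not the authors' code):

```python
import numpy as np

def granger_f(x, y):
    """F-statistic for H0: one lag of x does not help predict y.

    Compares the residual sum of squares of y_t on (1, y_{t-1}) against
    y_t on (1, y_{t-1}, x_{t-1}); a large F rejects non-causality.
    """
    y_t = y[1:]
    X_unres = np.column_stack([np.ones(len(y_t)), y[:-1], x[:-1]])
    X_res = X_unres[:, :2]  # intercept + lagged y only

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        return float(np.sum((y_t - X @ beta) ** 2))

    rss_r, rss_u = rss(X_res), rss(X_unres)
    dof = len(y_t) - X_unres.shape[1]
    return (rss_r - rss_u) / (rss_u / dof)

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.empty(300)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.1 * rng.standard_normal(299)  # x Granger-causes y by construction
```

    The asymmetric result the record reports (prices drive confidence but not vice versa) corresponds to a large F in one direction and a small one in the other.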

  14. Confidence and self-attribution bias in an artificial stock market

    PubMed Central

    Bertella, Mario A.; Pires, Felipe R.; Rego, Henio H. A.; Vodenska, Irena; Stanley, H. Eugene

    2017-01-01

    Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index—both generated by our model—are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant. PMID:28231255

  15. Urbanization, regime type and durability, and environmental degradation in Ghana.

    PubMed

    Adams, Samuel; Adom, Philip Kofi; Klobodu, Edem Kwame Mensah

    2016-12-01

    This study examines the effect of urbanization, income, trade openness, and institutional quality (i.e., regime type and durability) on environmental degradation in Ghana over the period 1965-2011. Using the bounds test approach to cointegration and the Fully Modified Phillips-Hansen (FMPH) technique, the findings show that urbanization, income, trade openness, and institutional quality share a long-run cointegrating relationship with environmental degradation. Further, the results show that income, trade openness, and institutional quality are negatively associated with environmental degradation, suggesting that they enhance environmental performance. Urbanization, however, is positively related to environmental degradation. Additionally, long-run estimates conditioned on institutional quality reveal that the extent to which trade openness and urbanization enhance environmental performance is largely due to the presence of quality (or democratic) institutions. Finally, controlling for structural breaks, we find that trade openness, urbanization, and regime type (i.e., democracy) improve environmental performance significantly after the 1970s, except for income.

  16. Efficiency Analysis of Public Universities in Thailand

    ERIC Educational Resources Information Center

    Kantabutra, Saranya; Tang, John C. S.

    2010-01-01

    This paper examines the performance of Thai public universities in terms of efficiency, using a non-parametric approach called data envelopment analysis. Two efficiency models, the teaching efficiency model and the research efficiency model, are developed and the analysis is conducted at the faculty level. Further statistical analyses are also…

  17. A Rational Analysis of the Acquisition of Multisensory Representations

    ERIC Educational Resources Information Center

    Yildirim, Ilker; Jacobs, Robert A.

    2012-01-01

    How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory…

  18. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    NASA Astrophysics Data System (ADS)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach has been proposed to reduce the computations involved in the ISAR imaging, which uses harmonic wavelet-(HW) based time-frequency representation (TFR). Since the HW-based TFR falls into a category of nonparametric time-frequency (T-F) analysis tool, it is computationally efficient compared to parametric T-F analysis tools such as adaptive joint time-frequency transform (AJTFT), adaptive wavelet transform (AWT), and evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with the ISAR imaging by other nonparametric T-F analysis tools such as short-time Fourier transform (STFT) and Choi-Williams distribution (CWD). In the ISAR imaging, the use of HW-based TFR provides similar/better results with significant (92%) computational advantage compared to that obtained by CWD. The ISAR images thus obtained are identified using a neural network-based classification scheme with feature set invariant to translation, rotation, and scaling.
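
    The STFT used as a nonparametric T-F baseline in this record is simple to sketch: frame the signal, apply a window, and take the FFT of each frame (a minimal illustration with a hypothetical test tone, not the paper's implementation):

```python
import numpy as np

def stft_mag(signal, frame_len, hop):
    """Magnitude short-time Fourier transform with a Hann window (minimal sketch)."""
    window = np.hanning(frame_len)
    starts = range(0, len(signal) - frame_len + 1, hop)
    frames = np.array([signal[s:s + frame_len] * window for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame

fs, f0 = 1024, 128                       # hypothetical sample rate and tone frequency (Hz)
t = np.arange(fs) / fs                   # one second of signal
S = stft_mag(np.sin(2 * np.pi * f0 * t), frame_len=256, hop=128)
# a stationary tone concentrates energy in bin f0 * frame_len / fs = 32 in every frame
```

    Wavelet-based TFRs such as the HW approach in the record trade this fixed time-frequency resolution for a multi-scale one, which is where their computational and representational advantages come from.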

  19. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised-demodulation method with singular-value decomposition, the parametric time-frequency analysis method with filtering, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.

  20. Parametric and non-parametric approach for sensory RATA (Rate-All-That-Apply) method of ledre profile attributes

    NASA Astrophysics Data System (ADS)

    Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.

    2018-03-01

    This current study is aimed at investigating the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stingy sensation, and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre varies more in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. This suggests that the parametric approach can be applicable to consumer studies with large numbers of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variance).

  1. Robust variable selection method for nonparametric differential equation models with application to nonlinear dynamic gene regulatory network analysis.

    PubMed

    Lu, Tao

    2016-01-01

    Gene regulatory network (GRN) analysis evaluates the interactions between genes and looks for models that describe gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for the nonlinear dynamic GRN. Specifically, we address the following questions simultaneously: (i) extract information from noisy time-course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of the experimental duration. We illustrate the usefulness of the model and the associated statistical methods through a simulation example and a real application example.
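
    The group SCAD approach in this record builds on the scalar SCAD penalty of Fan and Li (2001), which is linear up to lambda, quadratically smoothed between lambda and a*lambda, and constant beyond a*lambda, so large coefficients are not shrunk. A direct transcription of the scalar penalty (illustrative, not the authors' group version):

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001) evaluated at |theta|.

    Piecewise: lam*|t| for |t| <= lam; a smoothed quadratic on (lam, a*lam];
    the constant lam^2 * (a + 1) / 2 beyond a*lam (no shrinkage of large effects).
    """
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

    The flat tail is what gives SCAD its oracle property relative to the lasso, whose penalty keeps growing linearly; the group version applies the same shape to the norm of each gene's coefficient block.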

  2. Network structure exploration in networks with node attributes

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Wang, Xiaolong; Bu, Junzhao; Tang, Buzhou; Xiang, Xin

    2016-05-01

    Complex networks provide a powerful way to represent complex systems and have been widely studied during the past several years. One of the most important tasks of network analysis is to detect structures (also called structural regularities) embedded in networks by determining group number and group partition. Most network structure exploration models consider only network links. However, in real-world networks, nodes may have attributes that are useful for network structure exploration. In this paper, we propose a novel Bayesian nonparametric (BNP) model to explore structural regularities in networks with node attributes, called the Bayesian nonparametric attribute (BNPA) model. This model not only takes full advantage of both the links between nodes and the node attributes for group partition via shared hidden variables, but also determines the group number automatically via Bayesian nonparametric theory. Experiments conducted on a number of real and synthetic networks show that our BNPA model is able to automatically explore structural regularities in networks with node attributes and is competitive with other state-of-the-art models.

  3. Non-Parametric Collision Probability for Low-Velocity Encounters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2007-01-01

    An implicit, but not necessarily obvious, assumption in all current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise decorrelation of the uncertainty will increase the probability of collision over a given time interval; the paper gives examples that illustrate this point. The paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests techniques for addressing the issues just described. One proposal is the use of a non-parametric technique that is widely used in actuarial and medical studies; the other is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of the paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
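
    The record leaves the actuarial/medical technique unnamed; one widely used non-parametric estimator from that literature is the Kaplan-Meier product-limit estimator, sketched below purely to illustrate the genre (an assumption for illustration, not necessarily the method the paper adopts):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator of a survival curve.

    times: observation times; events: 1 = event observed, 0 = censored.
    Returns [(t, S(t))] at each distinct event time; censored observations
    only reduce the risk set, they never drop the curve.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = drops = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            drops += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= drops
    return curve
```

    The appeal for Monte Carlo conjunction analysis is the same as in medicine: it makes no distributional assumption about when the "event" (here, a miss-distance crossing) occurs.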

  4. Does agricultural ecosystem cause environmental pollution in Pakistan? Promise and menace.

    PubMed

    Ullah, Arif; Khan, Dilawar; Khan, Imran; Zheng, Shaofeng

    2018-05-01

    The increasing trend of atmospheric carbon dioxide (CO2) is the main cause of harmful anthropogenic greenhouse gas emissions, which may result in environmental pollution, global warming, and climate change. These issues are expected to adversely affect the agricultural ecosystem and the well-being of society. In order to minimize food insecurity and prevent hunger, timely adaptation is desirable to reduce potential losses and to seek alternatives for promoting a global knowledge system for agricultural sustainability. This paper examines the causal relationship between the agricultural ecosystem and CO2 emissions as an environmental pollution indicator in Pakistan over the period 1972 to 2014 by employing Johansen cointegration, an autoregressive distributed lag (ARDL) model, and the Granger causality approach. The Johansen cointegration results show that there is a significant long-run relationship between the agricultural ecosystem and CO2 emissions. The long-run estimates show that a 1% increase in biomass-burned crop residues, emissions of the CO2 equivalent of nitrous oxide (N2O) from synthetic fertilizers, stock of livestock, agricultural machinery, cereal production, and other crop production will increase CO2 emissions by 1.29, 0.05, 0.45, 0.05, 0.03, and 0.65%, respectively. Further, our findings detect a bidirectional causality of CO2 emissions with the area of rice paddy harvested, cereal production, and other crop production. The impulse response function analysis shows that biomass-burned crop residues, stock of livestock, agricultural machinery, cereal production, and other crop production contribute significantly to CO2 emissions in Pakistan.
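
    The Johansen procedure used in this record requires VAR eigenvalue machinery, but the underlying idea of cointegration testing can be conveyed by the simpler residual-based Engle-Granger two-step sketch below (illustrative, on hypothetical simulated data; Engle-Granger critical values differ from the plain Dickey-Fuller ones):

```python
import numpy as np

def engle_granger_tstat(y, x):
    """Two-step Engle-Granger sketch.

    Step 1: OLS of y on x. Step 2: Dickey-Fuller-style t-statistic on the
    residuals; strongly negative values suggest the residuals are stationary,
    i.e. that y and x are cointegrated. No lag augmentation is included.
    """
    X = np.column_stack([np.ones(len(x)), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    du, ulag = np.diff(resid), resid[:-1]
    rho = (ulag @ du) / (ulag @ ulag)
    e = du - rho * ulag
    se = np.sqrt((e @ e) / (len(du) - 1) / (ulag @ ulag))
    return rho / se

rng = np.random.default_rng(7)
trend = np.cumsum(rng.standard_normal(400))             # shared stochastic trend
x = trend + 0.2 * rng.standard_normal(400)
y = 2.0 + 1.5 * trend + 0.2 * rng.standard_normal(400)  # cointegrated with x
z = np.cumsum(rng.standard_normal(400))                 # independent random walk
```

    The cointegrated pair (y, x) produces a strongly negative statistic, while the spurious pair (y, z) does not, which is the distinction all of the cointegration tests in these records formalise.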

  5. CO2 emissions, real output, energy consumption, trade, urbanization and financial development: testing the EKC hypothesis for the USA.

    PubMed

    Dogan, Eyup; Turkekul, Berna

    2016-01-01

    This study aims to investigate the relationship between carbon dioxide (CO2) emissions, energy consumption, real output (GDP), the square of real output (GDP(2)), trade openness, urbanization, and financial development in the USA for the period 1960-2010. The bounds testing for cointegration indicates that the analyzed variables are cointegrated. In the long run, energy consumption and urbanization increase environmental degradation while financial development has no effect on it, and trade leads to environmental improvements. In addition, this study does not support the validity of the environmental Kuznets curve (EKC) hypothesis for the USA because real output leads to environmental improvements while GDP(2) increases the levels of gas emissions. The results from the Granger causality test show that there is bidirectional causality between CO2 and GDP, CO2 and energy consumption, CO2 and urbanization, GDP and urbanization, and GDP and trade openness while no causality is determined between CO2 and trade openness, and gas emissions and financial development. In addition, we have enough evidence to support one-way causality running from GDP to energy consumption, from financial development to output, and from urbanization to financial development. In light of the long-run estimates and the Granger causality analysis, the US government should take into account the importance of trade openness, urbanization, and financial development in controlling for the levels of GDP and pollution. Moreover, it should be noted that the development of efficient energy policies likely contributes to lower CO2 emissions without harming real output.

  6. Probit vs. semi-nonparametric estimation: examining the role of disability on institutional entry for older adults.

    PubMed

    Sharma, Andy

    2017-06-01

    The purpose of this study was to showcase an advanced methodological approach to modeling disability and institutional entry. Both are important areas to investigate given the ongoing aging of the United States population. By 2020, approximately 15% of the population will be 65 years and older. Many of these older adults will experience disability and require formal care. A probit analysis was employed to determine which disabilities were associated with admission into an institution (i.e. long-term care). Since this framework imposes strong distributional assumptions, misspecification leads to inconsistent estimators. To overcome such a shortcoming, this analysis extended the probit framework by employing an advanced semi-nonparametric maximum likelihood estimation utilizing Hermite polynomial expansions. Specification tests show semi-nonparametric estimation is preferred over probit. In terms of the estimates, semi-nonparametric ratios equal 42 for cognitive difficulty, 64 for independent living, and 111 for self-care disability, while probit yields much smaller estimates of 19, 30, and 44, respectively. Public health professionals can use these results to better understand why certain interventions have not shown promise. Equally important, healthcare workers can use this research to evaluate which types of treatment plans may delay institutionalization and improve the quality of life for older adults. Implications for rehabilitation: With ongoing global aging, understanding the association between disability and institutional entry is important in devising successful rehabilitation interventions. Semi-nonparametric estimation is preferred to probit and shows that ambulatory and cognitive impairments present a high risk of institutional entry (long-term care). Informal caregiving and home-based care require further examination as forms of rehabilitation/therapy for certain types of disabilities.

  7. Three essays in energy consumption: Time series analyses

    NASA Astrophysics Data System (ADS)

    Ahn, Hee Bai

    1997-10-01

    First, this dissertation investigates which demand specification, the conventional specification or the limited specification, is an appropriate model for long-run energy demand. In order to determine the components of a stable long-run demand for different sectors of the energy industry, I perform cointegration tests using the Johansen test procedure. First, I test the conventional demand specification, which includes prices and income as components. Second, I test a limited demand specification with only income as a component. These tests allow us to determine which of the two demand specifications is a good long-run predictor of energy consumption. Secondly, for the purpose of planning and forecasting energy demand in a cointegrated system, long-run elasticities are of particular interest. To retrieve the optimal level of energy demand in the case of a price shock, we need long-run rather than short-run elasticities. The energy demand study provides valuable information to energy policy makers who are concerned about the long-run impact of taxes and tariffs. The long-run price elasticity is a primary barometer of the substitution effect between energy and non-energy inputs, and the long-run income elasticity is an important factor since its magnitude indicates whether energy demand is growing more slowly or faster than in the past. One further problem in estimating total energy demand is the aggregation bias stemming from summing four different energy types into total aggregate prices and total aggregate energy consumption.
    In order to measure the aggregation bias between the Btu aggregation method and the Divisia index method, i.e., to determine which methodology has less aggregation bias in the long run, I compare the two sets of estimation results with results estimated on a disaggregated basis. Thus, we can confirm whether or not the theoretically superior methodology has less aggregation bias in empirical estimation. Thirdly, I investigate the causal relationships between energy use and GDP. In order to detect causal relationships both in the long run and in the short run, a VECM (vector error correction model) can be used if cointegration relationships exist among the variables. I detect the causal effects between energy use and GDP by estimating the VECM based on a multivariate production function including labor and capital variables.

  8. Estimation from PET data of transient changes in dopamine concentration induced by alcohol: support for a non-parametric signal estimation method

    NASA Astrophysics Data System (ADS)

    Constantinescu, C. C.; Yoder, K. K.; Kareken, D. A.; Bouman, C. A.; O'Connor, S. J.; Normandin, M. D.; Morris, E. D.

    2008-03-01

    We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest & activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (FDA(t)) and the change in binding potential (ΔBP). The veracity of the FDA(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between FDA(t) and the free raclopride (FRAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [11C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover FDA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the FDA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of FDA(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.

  9. A Bayesian nonparametric method for prediction in EST analysis

    PubMed Central

    Lijoi, Antonio; Mena, Ramsés H; Prünster, Igor

    2007-01-01

    Background Expressed sequence tag (EST) analyses are a fundamental tool for gene identification in organisms. Given a preliminary EST sample from a certain library, several statistical prediction problems arise. In particular, it is of interest to estimate how many new genes can be detected in a future EST sample of given size and also to determine the gene discovery rate: these estimates represent the basis for deciding whether to proceed with sequencing the library and, in case of a positive decision, a guideline for selecting the size of the new sample. Such information is also useful for establishing sequencing efficiency in experimental design and for measuring the degree of redundancy of an EST library. Results In this work we propose a Bayesian nonparametric approach for tackling statistical problems related to EST surveys. In particular, we provide estimates for: a) the coverage, defined as the proportion of unique genes in the library represented in the given sample of reads; b) the number of new unique genes to be observed in a future sample; c) the discovery rate of new genes as a function of the future sample size. The Bayesian nonparametric model we adopt conveys, in a statistically rigorous way, the available information into prediction. Our proposal has appealing properties over frequentist nonparametric methods, which become unstable when prediction is required for large future samples. EST libraries, previously studied with frequentist methods, are analyzed in detail. Conclusion The Bayesian nonparametric approach we undertake yields valuable tools for gene capture and prediction in EST libraries. The estimators we obtain do not feature the kind of drawbacks associated with frequentist estimators and are reliable for any size of the additional sample. PMID:17868445
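
For intuition about such predictions, consider the simplest Bayesian nonparametric prior, the Dirichlet process with total mass θ (the paper employs richer Gibbs-type priors; this special case is for illustration only). Under it, the expected number of new distinct genes in m further reads after n observed reads has the closed form Σ_{i=n}^{n+m-1} θ/(θ+i):

```python
def expected_new_genes(theta, n, m):
    """E[# new distinct genes in m additional reads | n reads seen],
    under a Dirichlet process prior with total mass theta.
    (Illustrative special case; the paper's estimators use more
    general Gibbs-type priors.)"""
    return sum(theta / (theta + i) for i in range(n, n + m))

# Diminishing returns: a deeply sampled (redundant) library yields
# far fewer new genes per additional read than a fresh one.
print(expected_new_genes(theta=50.0, n=0, m=1000))
print(expected_new_genes(theta=50.0, n=1000, m=1000))
```

The decay of this quantity with n is exactly what drives the decision of whether further sequencing of the library is worthwhile.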

  10. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  11. Mapping the Structure-Function Relationship in Glaucoma and Healthy Patients Measured with Spectralis OCT and Humphrey Perimetry

    PubMed Central

    Muñoz–Negrete, Francisco J.; Oblanca, Noelia; Rebolleda, Gema

    2018-01-01

    Purpose To study the structure-function relationship in glaucoma and healthy patients assessed with Spectralis OCT and Humphrey perimetry using new statistical approaches. Materials and Methods Eighty-five eyes were prospectively selected and divided into 2 groups: glaucoma (44) and healthy patients (41). Three different statistical approaches were carried out: (1) factor analysis of the threshold sensitivities (dB) (automated perimetry) and the macular thickness (μm) (Spectralis OCT), subsequently applying Pearson's correlation to the obtained regions, (2) nonparametric regression analysis relating the values in each pair of regions that showed significant correlation, and (3) nonparametric spatial regressions using three models designed for the purpose of this study. Results In the glaucoma group, a map that relates structural and functional damage was drawn. The strongest correlation with visual fields was observed in the peripheral nasal region of both superior and inferior hemigrids (r = 0.602 and r = 0.458, resp.). The estimated functions obtained with the nonparametric regressions provided the mean sensitivity that corresponds to each given macular thickness. These functions allowed for accurate characterization of the structure-function relationship. Conclusions Both maps and point-to-point functions obtained linking structure and function damage contribute to a better understanding of this relationship and may help in the future to improve glaucoma diagnosis. PMID:29850196

  12. The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression.

    PubMed

    Martina, R; Kay, R; van Maanen, R; Ridder, A

    2015-01-01

    Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
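
The motivation can be illustrated with a small simulation (hypothetical counts, not trial data): negative binomial counts show variance well above the mean, violating the plain Poisson assumption, while the treatment effect is naturally summarized as a rate ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly incontinence-episode counts for two arms.
# Negative binomial counts are overdispersed: variance > mean.
control = rng.negative_binomial(n=2, p=0.25, size=200)  # true mean 6
treated = rng.negative_binomial(n=2, p=0.40, size=200)  # true mean 3

for name, counts in [("control", control), ("treated", treated)]:
    # var/mean > 1 signals overdispersion, ruling out plain Poisson.
    print(f"{name}: mean={counts.mean():.2f}, var={counts.var():.2f}")

# Rate ratio (treated vs control): the natural treatment-effect
# summary from a Poisson/negative binomial model with a log link.
rate_ratio = treated.mean() / control.mean()
print(f"rate ratio ~ {rate_ratio:.2f}")
```

A rate ratio below 1 reads directly as a proportional reduction in episode rate, which is the clinically interpretable summary the abstract refers to.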

  13. NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES

    PubMed Central

    He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.

    2017-01-01

    Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
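
The MKW test builds on the classical rank-based Kruskal-Wallis statistic. As a reference point, here is a minimal numpy sketch of the univariate H statistic (no tie correction; the paper's actual contribution, the multivariate extension with missing-data handling, is not reproduced):

```python
import numpy as np

def kruskal_wallis_h(*groups):
    """Univariate Kruskal-Wallis H statistic, assuming no ties.
    H = 12 / (N(N+1)) * sum_i n_i * (mean_rank_i - (N+1)/2)^2."""
    pooled = np.concatenate(groups)
    ranks = pooled.argsort().argsort() + 1.0  # ranks 1..N
    n_total = len(pooled)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n_total + 1) / 2.0) ** 2
        start += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h

# Two clearly separated groups give the maximal H for n1 = n2 = 3:
print(kruskal_wallis_h(np.array([1., 2., 3.]), np.array([4., 5., 6.])))
```

Deleting partially observed cases before ranking, as complete-case MKW requires, discards exactly the rank information the proposed extension retains.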

  14. Theory and Application of DNA Histogram Analysis.

    ERIC Educational Resources Information Center

    Bagwell, Charles Bruce

    The underlying principles and assumptions associated with DNA histograms are discussed along with the characteristics of fluorescent probes. Information theory is described and used to calculate the information content of a DNA histogram. Two major types of DNA histogram analysis are proposed: parametric and nonparametric analysis. Three levels…

  15. HBCU Efficiency and Endowments: An Exploratory Analysis

    ERIC Educational Resources Information Center

    Coupet, Jason; Barnum, Darold

    2010-01-01

    Discussions of efficiency among Historically Black Colleges and Universities (HBCUs) are often missing in academic conversations. This article seeks to assess efficiency of individual HBCUs using Data Envelopment Analysis (DEA), a non-parametric technique that can synthesize multiple inputs and outputs to determine a single efficiency score for…

  16. Assessment of 48 Stock markets using adaptive multifractal approach

    NASA Astrophysics Data System (ADS)

    Ferreira, Paulo; Dionísio, Andreia; Movahed, S. M. S.

    2017-11-01

    In this paper, stock market comovements are examined using cointegration, Granger causality tests and nonlinear approaches based on mutual information and correlations. Since the underlying data sets are affected by non-stationarities and trends, we also apply Adaptive Multifractal Detrended Fluctuation Analysis (AMF-DFA) and Adaptive Multifractal Detrended Cross-Correlation Analysis (AMF-DXA). We find only 170 pairs of stock markets to be cointegrated, and according to the Granger causality and mutual information results, the strongest relations lie between emerging markets, and between emerging and frontier markets. According to the scaling exponent given by AMF-DFA, h(q = 2) > 1, all underlying data sets belong to non-stationary processes. According to the Efficient Market Hypothesis (EMH), only 8 markets are classified as uncorrelated processes at the 2σ confidence interval. 6 stock markets belong to the anti-correlated class, and the dominant part of markets shows memory in the corresponding daily index prices during January 1995 to February 2014. New Zealand with H = 0.457 ± 0.004 and Jordan with H = 0.602 ± 0.006 are far from EMH. The nature of the cross-correlation exponents based on AMF-DXA is almost multifractal for all pairs of stock markets. The empirical relation Hxy ≤ [Hxx + Hyy]/2 is confirmed. This relation is also satisfied for q > 0, while for q < 0 there is a deviation from it, confirming that the behavior of markets for small fluctuations is dominated by the contribution of the major pair. For larger fluctuations, the cross-correlation contains information from both local (internal) and global (external) conditions. The widths of the singularity spectrum for auto-correlation and cross-correlation are Δαxx ∈ [0.304, 0.905] and Δαxy ∈ [0.246, 1.178], respectively. The wide range of the singularity spectrum for cross-correlation confirms that the bilateral relation between stock markets is more complex. The value of σDCCA indicates that all pairs of stock markets studied in this time interval belong to cross-correlated processes.
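
The scaling exponent h(q = 2) reported above is the slope of the log fluctuation function against log scale. A basic (non-adaptive, monofractal) DFA-1 sketch in numpy illustrates the idea; AMF-DFA itself adds adaptive detrending and generalizes to arbitrary q:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Plain DFA-1 estimate of the scaling exponent h(2).
    ~0.5 for white noise, >0.5 persistent (long memory),
    <0.5 anti-persistent.  (Basic monofractal version only.)"""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        f2 = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # Slope of log F(s) versus log s estimates h(2).
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(1)
print(dfa_hurst(rng.standard_normal(4096)))  # close to 0.5 for white noise
```

The reported values H = 0.457 (anti-persistent) and H = 0.602 (persistent) mark the two markets whose exponents deviate most from the 0.5 benchmark implied by the EMH.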

  17. Analyzing Single-Molecule Time Series via Nonparametric Bayesian Inference

    PubMed Central

    Hines, Keegan E.; Bankston, John R.; Aldrich, Richard W.

    2015-01-01

    The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. PMID:25650922

  18. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    PubMed

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in these multiple hypotheses testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. 
Nonparametric methods, in contrast, estimated the null distribution from data containing those large clusters and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods to detect true findings compared with parametric methods, which detected most of the true findings that are essential for making valid biological inferences from MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  20. Potential linkage for schizophrenia on chromosome 22q12-q13: A replication study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwab, S.G.; Bondy, B.; Wildenauer, D.B.

    1995-10-09

    In an attempt to replicate a potential linkage on chromosome 22q12-q13.1 reported by Pulver et al., we have analyzed 4 microsatellite markers which span this chromosomal region, including the IL2RB locus, for linkage with schizophrenia in 30 families from Israel and Germany. Linkage analysis by pairwise lod score analysis as well as by multipoint analysis did not provide evidence for a single major gene locus. However, a lod score of Zmax = 0.612 was obtained for a dominant model of inheritance with the marker D22S304 at recombination fraction 0.2 by pairwise analysis. In addition, using a nonparametric method, sib-pair analysis, a P value of 0.068 corresponding to a lod score of 0.48 was obtained for this marker. This finding, together with those of Pulver et al., is suggestive of a genetic factor in this region, predisposing for schizophrenia in a subset of families. Further studies using nonparametric methods should be conducted in order to clarify this point. 32 refs., 1 fig., 4 tabs.

  1. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  2. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  3. Can Percentiles Replace Raw Scores in the Statistical Analysis of Test Data?

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.; Zumbo, Bruno D.

    2005-01-01

    Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…

  4. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  5. Exploring Rating Quality in Rater-Mediated Assessments Using Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.; Engelhard, George, Jr.

    2016-01-01

    Mokken scale analysis is a probabilistic nonparametric approach that offers statistical and graphical tools for evaluating the quality of social science measurement without placing potentially inappropriate restrictions on the structure of a data set. In particular, Mokken scaling provides a useful method for evaluating important measurement…

  6. Exploring Incomplete Rating Designs with Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.; Patil, Yogendra J.

    2018-01-01

    Recent research has explored the use of models adapted from Mokken scale analysis as a nonparametric approach to evaluating rating quality in educational performance assessments. A potential limiting factor to the widespread use of these techniques is the requirement for complete data, as practical constraints in operational assessment systems…

  7. Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea

    NASA Astrophysics Data System (ADS)

    Lorentzen, Torbjørn

    2014-02-01

    The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea ice in the Arctic area? Can the series be modeled consistently and applied in forecasting of the future sea temperature? The paper applies the following methods: Augmented Dickey-Fuller tests for unit roots and stationarity; ARIMA models for univariate modeling; cointegration and error-correction models for estimating the short- and long-term dynamics of non-stationary series; Granger-causality tests for analyzing the interaction pattern between the deep and upper layer temperatures; and simultaneous equation systems for forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can represent an important information carrier of the long-term development of the sea temperature in the geographical area. Descriptive statistics show that the temperature level has been on a positive trend since the beginning of the 1980s, which is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated, which means they share the same long-term stochastic trend and do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea ice in the Arctic region. The analysis shows that there is a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M.
The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, causality pattern between the variables and specification of model, respectively. The estimated models forecast that temperature at 150 m is expected to increase by 0.018 °C per year, while deep water temperature at 2000 m is expected to increase between 0.0022 and 0.0024 °C per year.
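
The error-correction modelling described above rests on a two-step cointegration check of the Engle-Granger type. A minimal numpy sketch on simulated series illustrates the mechanics (the paper's models and data are not reproduced here):

```python
import numpy as np

def engle_granger_stat(y, x):
    """Two-step Engle-Granger check: regress y on x by OLS, then run a
    Dickey-Fuller style regression on the residuals.  A strongly
    negative t-statistic suggests stationary residuals, i.e.
    cointegration.  (Proper inference requires Engle-Granger critical
    values; this sketch only computes the statistic.)"""
    # Step 1: long-run relation y_t = a + b*x_t + e_t
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    # Step 2: Dickey-Fuller regression  de_t = rho * e_{t-1} + u_t
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    resid = de - rho * lag
    se = np.sqrt(resid @ resid / (len(de) - 1) / (lag @ lag))
    return rho / se

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(500))        # random walk (e.g. 2000 m series)
y = 0.5 + 2.0 * x + rng.standard_normal(500)   # cointegrated with x
print(engle_granger_stat(y, x))                # strongly negative
```

Stationary residuals mean the two series share a long-run stochastic trend, which is exactly the property the abstract exploits for the error-correction forecasts.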

  8. Revisiting the Principle of Relative Constancy: Consumer Mass Media Expenditures in Belgium.

    ERIC Educational Resources Information Center

    Dupagne, Michel; Green, R. Jeffery

    1996-01-01

    Proposes two new econometric models for testing the principle of relative constancy (PRC). Reports on regression and cointegration analyses conducted with Belgian mass media expenditure data from 1953-91. Suggests that alternative mass media expenditure models should be developed because the PRC lacks economic foundation and sound empirical…

  9. On the international stability of health care expenditure functions: are government and private functions similar?

    PubMed

    Clemente, Jesús; Marcuello, Carmen; Montañés, Antonio; Pueyo, Fernando

    2004-05-01

    This paper studies the stability of health care expenditure functions in a sample of OECD countries. We adopt the cointegration approach and the results show that there is a long-term relationship between total health care expenditure (HCE) and gross domestic product (GDP). However, the existence of cointegration is only shown when we admit the presence of some changes in the elasticities of the model. Our results also provide evidence against the existence of a unique relationship between health and GDP for the sample. Thus, we can conclude that the differences in health systems may cause differences in the aggregate functions. Additionally, we examine aggregate health functions for government (GHCE) and private expenditures (PHCE), again finding evidence of different patterns of behaviour. Finally, we open a discussion on the character of health as a necessary or luxury good. In this context, we find differences between the government and the private function. In order to illustrate these findings, we propose a theoretical model as an example of the influence of political decisions on income elasticity. Copyright 2003 Elsevier B.V.

  10. Persistence of airline accidents.

    PubMed

    Barros, Carlos Pestana; Faria, Joao Ricardo; Gil-Alana, Luis Alberiko

    2010-10-01

    This paper expands on air travel accident research by examining the relationship between air travel accidents and airline traffic or volume in the period from 1927-2006. The theoretical model is based on a representative airline company that aims to maximise its profits, and it utilises a fractional integration approach in order to determine whether there is a persistent pattern over time with respect to air accidents and air traffic. Furthermore, the paper analyses how airline accidents are related to traffic using a fractional cointegration approach. It finds that airline accidents are persistent and that a (non-stationary) fractional cointegration relationship exists between total airline accidents and airline passengers, airline miles and airline revenues, with shocks that affect the long-run equilibrium disappearing in the very long term. Moreover, this relation is negative, which might be due to the fact that air travel is becoming safer and there is greater competition in the airline industry. Policy implications are derived for countering accident events, based on competition and regulation. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.

  11. Investigation of InP/InGaAs metamorphic co-integrated complementary doping-channel field-effect transistors for logic application

    NASA Astrophysics Data System (ADS)

    Tsai, Jung-Hui

    2014-01-01

    DC performance of InP/InGaAs metamorphic co-integrated complementary doping-channel field-effect transistors (DCFETs) grown on a low-cost GaAs substrate is demonstrated for the first time. In the complementary DCFETs, the n-channel device was fabricated on the InxGa1-xP metamorphic linearly graded buffer layer and the p-channel field-effect transistor was stacked on top of the n-channel device. In particular, the saturation voltage of the n-channel device is substantially reduced, which decreases the VOL and VIH values; this is attributed to the two-dimensional electron gas that is formed and can be modulated in the n-InGaAs channel. Experimentally, a maximum extrinsic transconductance of 215 (17) mS/mm and a maximum saturation current density of 43 (-27) mA/mm are obtained in the n-channel (p-channel) device. Furthermore, the noise margins NMH and NML reach 0.842 and 0.330 V at a supply voltage of 1.5 V in the complementary logic inverter application.

  12. Does financial development reduce environmental degradation? Evidence from a panel study of 129 countries.

    PubMed

    Al-Mulali, Usama; Tang, Chor Foon; Ozturk, Ilhan

    2015-10-01

    The purpose of this study is to explore the effect of financial development on CO2 emission in 129 countries classified by the income level. A panel CO2 emission model using urbanisation, GDP growth, trade openness, petroleum consumption and financial development variables that are major determinants of CO2 emission was constructed for the 1980-2011 period. The results revealed that the variables are cointegrated based on the Pedroni cointegration test. The dynamic ordinary least squares (OLS) and the Granger causality test results also show that financial development can improve environmental quality in the short run and long run due to its negative effect on CO2 emission. The rest of the determinants, especially petroleum consumption, are determined to be the major source of environmental damage in most of the income group countries. Based on the results obtained, the investigated countries should provide banking loans to projects and investments that can promote energy savings, energy efficiency and renewable energy to help these countries reduce environmental damage in both the short and long run.

  13. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy; rather, its main effect is the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.

  14. Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.

    PubMed

    Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R

    2018-06-01

    Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, few diagnostic statistics are available for them in comparison with their parametric counterparts. We propose a goodness-of-fit test for nonparametric regression models with a linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack of fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Statistical dependence is then assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
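
    The residual-dependence idea at the core of such a test can be sketched in a few lines: estimate HSIC between the model residuals and a covariate, then calibrate the statistic by resampling. The fragment below is a hypothetical illustration using a permutation null rather than the bootstrap of the paper, with an arbitrary Gaussian kernel bandwidth; none of the names or settings come from the authors' implementation.

```python
import numpy as np

def _gram(v, sigma):
    # Gaussian kernel Gram matrix for a 1-D sample
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(a, b, sigma=1.0):
    """Biased HSIC estimate between two 1-D samples (Gaussian kernels)."""
    n = len(a)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = _gram(a, sigma), _gram(b, sigma)
    return np.trace(K @ H @ L @ H) / n ** 2

def hsic_pvalue(resid, covariate, n_perm=200, seed=0):
    """Permutation p-value: shuffling residuals breaks any dependence."""
    rng = np.random.default_rng(seed)
    obs = hsic(resid, covariate)
    null = [hsic(rng.permutation(resid), covariate) for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 150)
resid_bad = np.sin(2 * x) + 0.1 * rng.normal(size=150)  # missed structure
print(hsic_pvalue(resid_bad, x))                        # small p-value
```

    A small p-value here flags residual structure in the covariate, i.e. lack of fit.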

  15. Sequence analysis of the lactococcal plasmid pNP40: a mobile replicon for coping with environmental hazards.

    PubMed

    O'Driscoll, Jonathan; Glynn, Frances; Fitzgerald, Gerald F; van Sinderen, Douwe

    2006-09-01

    The conjugative lactococcal plasmid pNP40, identified in Lactococcus lactis subsp. diacetylactis DRC3, possesses a potent complement of bacteriophage resistance systems, which has stimulated its application as a fitness-improving, food-grade genetic element for industrial starter cultures. The complete sequence of this plasmid allowed the mapping of previously known functions including replication, conjugation, bacteriocin resistance, heavy metal tolerance, and bacteriophage resistance. In addition, functions for cold shock adaptation and DNA damage repair were identified, further confirming pNP40's contribution to environmental stress protection. A plasmid cointegration event appears to have been part of the evolution of pNP40, resulting in a "stockpiling" of bacteriophage resistance systems.

  16. Comparison of the Effects of Public and Private Health Expenditures on the Health Status: A Panel Data Analysis in Eastern Mediterranean Countries

    PubMed Central

    Homaie Rad, Enayatollah; Vahedi, Sajad; Teimourizad, Abedin; Esmaeilzadeh, Firooz; Hadian, Mohamad; Torabi Pour, Amin

    2013-01-01

    Background: Health expenditures are divided into two parts: public and private. Public health expenditures comprise social security spending, taxes levied on the private and public sectors, and foreign resources such as loans and subventions. Private health expenditures, on the other hand, comprise out-of-pocket payments and private insurance. Each of these has a different effect on health status. The present study aims to compare the effects of these expenditures on health in the Eastern Mediterranean Region (EMR). Methods: In this study, the infant mortality rate was considered as an indicator of health status. We estimated the model using panel data for EMR countries between 1995 and 2010. First, we used the Pesaran CD test followed by Pesaran’s CADF unit root test. After confirming the presence of a unit root, we used the Westerlund panel cointegration test and found that the model was cointegrated; after applying the Hausman and Breusch-Pagan tests, we estimated the model using random effects. Results: The results showed that public health expenditures had a strong negative relationship with the infant mortality rate (IMR), whereas a positive relationship was found between private health expenditures and IMR. The relationship was significant for public health expenditures but not for private health expenditures. Conclusion: The study findings showed that public health expenditures in the EMR countries improved health outcomes, while private health expenditures had no significant relationship with health status; thus, increasing public health expenditures tends to reduce IMR. The relationship for private health expenditures may be insignificant because of contradictory effects on poor and wealthy people. PMID:24596857

  17. Comparison of the effects of public and private health expenditures on the health status: a panel data analysis in eastern mediterranean countries.

    PubMed

    Homaie Rad, Enayatollah; Vahedi, Sajad; Teimourizad, Abedin; Esmaeilzadeh, Firooz; Hadian, Mohamad; Torabi Pour, Amin

    2013-08-01

    Health expenditures are divided into two parts: public and private. Public health expenditures comprise social security spending, taxes levied on the private and public sectors, and foreign resources such as loans and subventions. Private health expenditures, on the other hand, comprise out-of-pocket payments and private insurance. Each of these has a different effect on health status. The present study aims to compare the effects of these expenditures on health in the Eastern Mediterranean Region (EMR). In this study, the infant mortality rate was considered as an indicator of health status. We estimated the model using panel data for EMR countries between 1995 and 2010. First, we used the Pesaran CD test followed by Pesaran's CADF unit root test. After confirming the presence of a unit root, we used the Westerlund panel cointegration test and found that the model was cointegrated; after applying the Hausman and Breusch-Pagan tests, we estimated the model using random effects. The results showed that public health expenditures had a strong negative relationship with the infant mortality rate (IMR), whereas a positive relationship was found between private health expenditures and IMR. The relationship was significant for public health expenditures but not for private health expenditures. The study findings showed that public health expenditures in the EMR countries improved health outcomes, while private health expenditures had no significant relationship with health status; thus, increasing public health expenditures tends to reduce IMR. The relationship for private health expenditures may be insignificant because of contradictory effects on poor and wealthy people.

  18. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or mask the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (Loess smoother) to adjust the QT interval for differences in heart rate, with improved fitness over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious, non-treated beagle dogs were used as the materials for investigation. The fitness of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Whereas the parametric models did not fit well, the nonparametric regression model improved the fit at both fast and slow heart rates. The nonparametric regression model is the more flexible method compared with the parametric methods. The mathematical fit of the linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
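
    For reference, the parametric corrections mentioned above have closed forms (Bazett: QTc = QT/RR^(1/2); Fridericia: QTc = QT/RR^(1/3), with RR in seconds), while a Loess-type smoother fits a locally weighted line around each query point. The sketch below is an illustrative, numpy-only local-linear smoother with a tricube kernel, not the authors' implementation; the simulated QT-RR data and the smoothing fraction are assumptions.

```python
import numpy as np

def bazett(qt, rr):      return qt / np.sqrt(rr)   # QTc = QT / RR^(1/2)
def fridericia(qt, rr):  return qt / np.cbrt(rr)   # QTc = QT / RR^(1/3)

def loess(x, y, x0, frac=0.4):
    """Minimal local-linear smoother with tricube weights (Loess sketch)."""
    k = max(2, int(frac * len(x)))
    preds = []
    for xq in np.atleast_1d(x0):
        d = np.abs(x - xq)
        idx = np.argsort(d)[:k]                    # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube kernel
        X = np.column_stack([np.ones(k), x[idx] - xq])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
        preds.append(beta[0])                      # local fit at xq
    return np.array(preds)

rng = np.random.default_rng(2)
rr = rng.uniform(0.4, 1.2, 240)                    # RR interval, seconds
qt = 0.25 * np.cbrt(rr) + 0.005 * rng.normal(size=240)  # toy QT-RR relation
fit = loess(rr, qt, np.array([0.5, 0.8, 1.1]))
```

    Because the smoother adapts to the local shape of the QT-RR cloud, it avoids the systematic over- or under-correction that a single global formula produces at extreme heart rates.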

  19. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.

  20. A Nonparametric Mean-Variance Smoothing Method to Assess Arabidopsis Cold Stress Transcriptional Regulator CBF2 Overexpression Microarray Data

    PubMed Central

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request. PMID:21611181

  1. Low-temperature crack-free Si3N4 nonlinear photonic circuits for CMOS-compatible optoelectronic co-integration

    NASA Astrophysics Data System (ADS)

    Casale, Marco; Kerdiles, Sebastien; Brianceau, Pierre; Hugues, Vincent; El Dirani, Houssein; Sciancalepore, Corrado

    2017-02-01

    In this communication, the authors report for the first time on the fabrication and testing of Si3N4 non-linear photonic circuits for CMOS-compatible monolithic co-integration with silicon-based optoelectronics. In particular, a novel process has been developed to fabricate low-loss, crack-free, 750-nm-thick Si3N4 films for Kerr-based nonlinear functions featuring full thermal-budget compatibility with existing silicon photonics and front-end Si optoelectronics. Briefly, differently from previous and state-of-the-art works, our nonlinear nitride-based platform has been realized without resorting to the commonly used high-temperature annealing (∼1200°C) of the film and its silica upper-cladding, used to break the N-H bonds that otherwise cause absorption in the C-band and destroy its nonlinear functionality. Furthermore, no complex and fabrication-intolerant Damascene process - as reported earlier this year - aimed at controlling cracks generated in thick tensile-strained Si3N4 films has been used. Instead, a tailored multiple-step Si3N4 film deposition in a 200-mm LPCVD reactor and subsequent low-temperature (400°C) PECVD oxide encapsulation have been used to fabricate the nonlinear micro-resonant circuits aimed at generating optical frequency combs via optical parametric oscillators (OPOs), thus allowing the monolithic co-integration of such nonlinear functions with existing CMOS-compatible optoelectronics, for both active and passive components such as silicon modulators and wavelength (de-)multiplexers. Experimental evidence based on wafer-level statistics shows nitride-based 112-μm-radius ring resonators using this low-temperature crack-free nitride film exhibiting quality factors exceeding Q > 3 × 10⁵, thus paving the way to low-threshold, power-efficient Kerr-based comb sources and dissipative temporal solitons in the C-band featuring full thermal-processing compatibility with Si photonic integrated circuits (Si-PICs).

  2. Nonparametric EROC analysis for observer performance evaluation on joint detection and estimation tasks

    NASA Astrophysics Data System (ADS)

    Wunderlich, Adam; Goossens, Bart

    2014-03-01

    The majority of the literature on task-based image quality assessment has focused on lesion detection tasks, using the receiver operating characteristic (ROC) curve, or related variants, to measure performance. However, since many clinical image evaluation tasks involve both detection and estimation (e.g., estimation of kidney stone composition, estimation of tumor size), there is a growing interest in performance evaluation for joint detection and estimation tasks. To evaluate observer performance on such tasks, Clarkson introduced the estimation ROC (EROC) curve, and the area under the EROC curve as a summary figure of merit. In the present work, we propose nonparametric estimators for practical EROC analysis from experimental data, including estimators for the area under the EROC curve and its variance. The estimators are illustrated with a practical example comparing MRI images reconstructed from different k-space sampling trajectories.
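
    A common nonparametric estimator of the area under the EROC curve is a utility-weighted Wilcoxon statistic: each signal-present/signal-absent pair contributes the utility of the estimate made on the present case whenever its rating wins the comparison (with ties counted as one half). The sketch below illustrates that form on assumed toy data; it is not a reproduction of the authors' estimators and omits their variance estimator.

```python
import numpy as np

def eroc_auc(present_scores, absent_scores, utilities):
    """Utility-weighted Wilcoxon-type estimate of the EROC area (sketch).

    `utilities[i]` is the utility of the parameter estimate made on the
    i-th signal-present case (1 = perfect estimate, 0 = worthless).
    Reduces to the usual ROC AUC when every utility equals 1.
    """
    t = np.asarray(present_scores, dtype=float)[:, None]
    s = np.asarray(absent_scores, dtype=float)[None, :]
    kernel = (t > s) + 0.5 * (t == s)        # tie-corrected comparison
    return float(np.mean(np.asarray(utilities)[:, None] * kernel))

# hypothetical ratings and estimation utilities
present = [2.1, 1.4, 3.0, 2.6]
absent = [0.9, 1.5, 0.7]
print(eroc_auc(present, absent, [1, 1, 1, 1]))     # plain ROC AUC
print(eroc_auc(present, absent, [1.0, 0.5, 0.9, 0.8]))
```

    With all utilities equal to one the statistic is the familiar ROC area; imperfect estimates shrink each comparison's contribution, so the EROC area penalizes poor parameter estimation as well as poor detection.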

  3. Does multi-functionality affect technical efficiency? A non-parametric analysis of the Scottish dairy industry.

    PubMed

    Barnes, A P

    2006-09-01

    Recent policy changes within the Common Agricultural Policy have led to a shift from a solely production-led agriculture towards the promotion of multi-functionality. Conversely, the removal of production-led supports would indicate that an increased concentration on production efficiencies would seem a critical strategy for a country's future competitiveness. This paper explores the relationship between the 'multi-functional' farming attitude desired by policy makers and its effect on technical efficiency within Scottish dairy farming. Technical efficiency scores are calculated by applying the non-parametric data envelopment analysis technique and then measured against causes of inefficiency. Amongst these explanatory factors is a constructed score of multi-functionality. This research finds that, amongst other factors, a multi-functional attitude has a significant positive effect on technical efficiency. Consequently, this seems to validate the promotion of a multi-functional approach to farming currently being championed by policy-makers.

  4. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.

  5. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare between the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ, and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217 × 217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at ∼1 to 2σ confidence level. Results of our analysis based on the new approach we propose in this work are in agreement with other analyses done using alternative methods.

  6. Cox regression analysis with missing covariates via nonparametric multiple imputation.

    PubMed

    Hsu, Chiu-Hsieh; Yu, Mandi

    2018-01-01

    We consider the situation of estimating a Cox regression in which some covariates are subject to missingness, and there exists additional information (including observed event time, censoring indicator and fully observed covariates) which may be predictive of the missing covariates. We propose to use two working regression models: one for predicting the missing covariates and the other for predicting the missing probabilities. For each missing covariate observation, these two working models are used to define a nearest neighbor imputing set. This set is then used to nonparametrically impute covariate values for the missing observation. Upon completion of imputation, Cox regression is performed on the multiply imputed datasets to estimate the regression coefficients. In a simulation study, we compare the nonparametric multiple imputation approach with the augmented inverse probability weighted (AIPW) method, which directly incorporates the two working models into the estimation of the Cox regression, and the predictive mean matching (PMM) imputation method. We show that all approaches can reduce bias due to a non-ignorable missing-data mechanism. The proposed nonparametric imputation method is robust to mis-specification of either one of the two working models and to mis-specification of their link functions. In contrast, the PMM method is sensitive to mis-specification of the covariates included in imputation, and the AIPW method is sensitive to the selection probability. We apply the approaches to a breast cancer dataset from the Surveillance, Epidemiology and End Results (SEER) Program.

  7. Parametric and non-parametric species delimitation methods result in the recognition of two new Neotropical woody bamboo species.

    PubMed

    Ruiz-Sanchez, Eduardo

    2015-12-01

    The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to date divergences in Otatea, placing the origin of the speciation events between the Late Miocene and the Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified that the two populations of O. acuminata from Chiapas and Hidalgo belong to two separate evolutionary lineages, and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Multilevel Latent Class Analysis: Parametric and Nonparametric Models

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2014-01-01

    Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…

  9. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  10. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...

  11. An Empirical Study of Eight Nonparametric Tests in Hierarchical Regression.

    ERIC Educational Resources Information Center

    Harwell, Michael; Serlin, Ronald C.

    When normality does not hold, nonparametric tests represent an important data-analytic alternative to parametric tests. However, the use of nonparametric tests in educational research has been limited by the absence of easily performed tests for complex experimental designs and analyses, such as factorial designs and multiple regression analyses,…

  12. Nonparametric Estimation of the Probability of Ruin.

    DTIC Science & Technology

    1985-02-01

    Nonparametric Estimation of the Probability of Ruin. Edward W. Frees, Mathematics Research Center, University of Wisconsin-Madison, February 1985 (MRC Technical Summary Report).

  13. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  14. Forecast and analysis of the ratio of electric energy to terminal energy consumption for global energy internet

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Zhong, Ming; Cheng, Ling; Jin, Lu; Shen, Si

    2018-02-01

    Against the background of building the global energy internet, forecasting and analysing the ratio of electric energy to terminal energy consumption has both theoretical and practical significance. This paper first analysed the influencing factors of the ratio of electric energy to terminal energy and then used a combination method to forecast and analyse the global proportion of electric energy. Next, a cointegration model for the proportion of electric energy was constructed using influencing factors such as the electricity price index, GDP, economic structure, energy use efficiency and total population. Finally, a prediction of the proportion of electric energy was obtained using a combination-forecasting model based on the multiple linear regression method, the trend analysis method, and the variance-covariance method. The forecast describes the development trend of the proportion of electric energy over 2017-2050, and the proportion in 2050 was analysed in detail using scenario analysis.
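
    Of the combining techniques mentioned, the variance-covariance method can be sketched compactly: weight each model's forecast by the inverse of its historical error variance, so that more accurate models dominate the combination. The sketch below ignores error covariances (a simplifying assumption of this version), and all forecasts and residuals are hypothetical numbers for illustration.

```python
import numpy as np

def combine_forecasts(preds, errors):
    """Inverse-error-variance forecast combination (simplified
    variance-covariance method; cross-model covariances ignored)."""
    var = np.var(errors, axis=1, ddof=1)      # historical error variance
    w = (1 / var) / np.sum(1 / var)           # normalized weights
    return w @ preds, w

# three hypothetical model forecasts of the electricity share (%)
preds = np.array([48.0, 52.0, 50.0])
errors = np.array([
    [1.0, -0.5, 0.8, -1.2],   # model 1 past residuals
    [0.2, -0.1, 0.3, -0.2],   # model 2 past residuals (most accurate)
    [2.0, -1.5, 1.0, -2.5],   # model 3 past residuals
])
combined, w = combine_forecasts(preds, errors)
print(combined, w)
```

    The weights sum to one, and the most accurate model (here the second) receives the largest weight, pulling the combined forecast toward its prediction.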

  15. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment

    PubMed Central

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq), performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection. PMID:27258018
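
    A minimal version of spectral tremor detection on accelerometer data can be sketched with a Welch periodogram: flag a segment when the peak power inside an assumed 4-12 Hz tremor band clearly dominates the out-of-band background. The band, the threshold ratio and the simulated signals below are illustrative assumptions, not the methods or parameters evaluated in the paper.

```python
import numpy as np
from scipy.signal import welch

def tremor_present(accel, fs, band=(4.0, 12.0), ratio=4.0):
    """Flag a segment as tremor if the peak PSD inside the tremor band
    exceeds `ratio` times the median PSD outside it (illustrative rule)."""
    f, psd = welch(accel, fs=fs, nperseg=min(len(accel), 256))
    in_band = (f >= band[0]) & (f <= band[1])
    return psd[in_band].max() > ratio * np.median(psd[~in_band])

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(3)
tremor = np.sin(2 * np.pi * 6.0 * t) + 0.3 * rng.normal(size=t.size)
rest = 0.3 * rng.normal(size=t.size)         # noise only, no tremor
print(tremor_present(tremor, fs), tremor_present(rest, fs))
```

    A 6 Hz oscillation produces a sharp in-band spectral peak and is flagged, while broadband noise is not; a real detector would of course need thresholds tuned against clinician consensus, as in the study.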

  16. Alaska softwood market price arbitrage.

    Treesearch

    James A. Stevens; David J. Brooks

    2003-01-01

    This study formally tests the hypothesis that markets for Alaska lumber and logs are integrated with those of similar products from the U.S. Pacific Northwest and Canada. The prices from these three supply regions are tested in a common demand market (Japan). Cointegration tests are run on paired log and lumber data. Our results support the conclusion that western...

  17. Does Expanding Higher Education Reduce Income Inequality in Emerging Economy? Evidence from Pakistan

    ERIC Educational Resources Information Center

    Qazi, Wasim; Raza, Syed Ali; Jawaid, Syed Tehseen; Karim, Mohd Zaini Abd

    2018-01-01

    This study investigates the impact of development in the higher education sector, on the Income Inequality in Pakistan, by using the annual time series data from 1973 to 2012. The autoregressive distributed lag bound testing co-integration approach confirms the existence of long-run relationship between higher education and income inequality.…

  18. Nanoelectronics and More-than-Moore at IMEC

    NASA Astrophysics Data System (ADS)

    Cartuyvels, Rudi; Biesemans, Serge; Vandervorst, Wilfried; De Boeck, Jo

    2011-11-01

    This paper presents an overview of imec's R&D addressing the challenges of CMOS scaling towards the 10 nm node and its outlook beyond. In addition to the relentless geometrical shrinks, opportunities to further increase nanoelectronic system functionality and performance by co-integration and chip stacking technologies combined with emerging MEMS and optoelectronic technologies will be presented.

  19. Mindfulness, Empathy, and Intercultural Sensitivity amongst Undergraduate Students

    ERIC Educational Resources Information Center

    Menardo, Dayne Arvin

    2017-01-01

    This study examined the relationships amongst mindfulness, empathy, and intercultural sensitivity. Non-parametric analyses were conducted through Spearman correlation and Hayes's PROCESS bootstrapping to examine the relationship between mindfulness and intercultural sensitivity, and whether empathy mediates the relationship between mindfulness and…
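    The non-parametric analysis mentioned above rests on Spearman rank correlation, which can be stated compactly (ties are ignored here; this is an illustrative sketch, not the study's analysis code):

```python
def spearman_rho(x, y):
    """Spearman rank correlation, 1 - 6*sum(d^2)/(n(n^2-1)); ties ignored."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

    A perfectly monotone increasing pair gives rho = 1, a monotone decreasing pair gives rho = -1, regardless of the (possibly nonlinear) shape of the relationship.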

  20. Nonparametric Trajectory Analysis of R2PIER Data

    EPA Science Inventory

    Strategies to isolate air pollution contributions from sources are of interest as voluntary or regulatory measures are undertaken to reduce air pollution. When different sources are located in close proximity to one another and have similar emissions, separating source emissions ...

  1. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    NASA Astrophysics Data System (ADS)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2011-05-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. 
Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
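    The identity this record relies on, gamma(h) = s^2 * (1 - rho(h)), turns a nonparametric correlogram plus the sample variance into a semivariogram estimate. A one-dimensional stdlib sketch (the 1-D simplification and function names are assumptions for illustration; the paper works with 2-D radar fields):

```python
def nonparametric_correlogram(field, max_lag):
    """Empirical lag correlations from a spatially complete 1-D field."""
    n = len(field)
    mean = sum(field) / n
    var = sum((v - mean) ** 2 for v in field) / n
    rho = {}
    for h in range(max_lag + 1):
        cov = sum((field[i] - mean) * (field[i + h] - mean)
                  for i in range(n - h)) / n
        rho[h] = cov / var
    return rho, var

def semivariogram_from_correlogram(rho, var):
    """gamma(h) = s^2 * (1 - rho(h)), unbiased at small lags per the abstract."""
    return {h: var * (1.0 - r) for h, r in rho.items()}
```

    By construction gamma(0) = 0, and gamma grows with lag as the spatial correlation decays, which is the behaviour a semivariogram model must capture before Kriging.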

  2. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    NASA Astrophysics Data System (ADS)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2010-09-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. 
Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.

  3. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method

    PubMed Central

    Zhang, Tingting; Kou, S. C.

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615
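    The core of the kernel method is smoothing the observed arrival times into an intensity estimate. A minimal Gaussian-kernel sketch (the paper's estimator and its regression-based bandwidth selection are more involved; names here are illustrative):

```python
import math

def kernel_intensity(event_times, grid, bandwidth):
    """Gaussian-kernel estimate of the intensity lambda(t) of a point process."""
    c = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return [c * sum(math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
                    for s in event_times)
            for t in grid]
```

    The estimate integrates to the total number of events, so it is high where arrivals cluster and near zero far from any event; the bandwidth controls the bias-variance trade-off that the paper's selection method addresses.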

  4. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    PubMed

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.

  5. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

    PubMed Central

    Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421

  6. Parametric Covariance Model for Horizon-Based Optical Navigation

    NASA Technical Reports Server (NTRS)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  7. The linear transformation model with frailties for the analysis of item response times.

    PubMed

    Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A

    2013-02-01

    The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after some non-parametric monotone transformation, become a linear model with latent speed as covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds; whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.

  8. Robust non-parametric one-sample tests for the analysis of recurrent events.

    PubMed

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
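    The basis for inference described above, a standardized distance between observed and expected event counts under a reference rate, can be sketched with a simple Poisson-variance standardization (a simplification of the paper's weighted, robust statistic; names are illustrative):

```python
import math

def one_sample_event_test(events, followup, rate0):
    """Standardized distance between observed and expected event counts under
    a reference rate, with a Poisson-style variance (a simplification of the
    paper's weighted marginal-mean statistic)."""
    observed = sum(events)            # total recurrent events across subjects
    expected = rate0 * sum(followup)  # expected count under the reference rate
    return (observed - expected) / math.sqrt(expected)
```

    Large positive values indicate more recurrences than the reference rate predicts; the paper's robust version replaces the Poisson variance with an empirical one to cope with frailty and non-Poisson event generation.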

  9. Granger causality revisited

    PubMed Central

    Friston, Karl J.; Bastos, André M.; Oswal, Ashwini; van Wijk, Bernadette; Richter, Craig; Litvak, Vladimir

    2014-01-01

    This technical paper offers a critical re-evaluation of (spectral) Granger causality measures in the analysis of biological timeseries. Using realistic (neural mass) models of coupled neuronal dynamics, we evaluate the robustness of parametric and nonparametric Granger causality. Starting from a broad class of generative (state-space) models of neuronal dynamics, we show how their Volterra kernels prescribe the second-order statistics of their response to random fluctuations; characterised in terms of cross-spectral density, cross-covariance, autoregressive coefficients and directed transfer functions. These quantities in turn specify Granger causality — providing a direct (analytic) link between the parameters of a generative model and the expected Granger causality. We use this link to show that Granger causality measures based upon autoregressive models can become unreliable when the underlying dynamics is dominated by slow (unstable) modes — as quantified by the principal Lyapunov exponent. However, nonparametric measures based on causal spectral factors are robust to dynamical instability. We then demonstrate how both parametric and nonparametric spectral causality measures can become unreliable in the presence of measurement noise. Finally, we show that this problem can be finessed by deriving spectral causality measures from Volterra kernels, estimated using dynamic causal modelling. PMID:25003817
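    The time-domain quantity under scrutiny can be illustrated with a lag-1 autoregressive comparison: the log ratio of residual variances with and without the putative cause. This is a textbook simplification for illustration (the paper works with state-space models, spectral factors and Volterra kernels), and all names are assumptions:

```python
import math

def granger_lag1(x, y):
    """Granger measure of y -> x at lag 1: ln(RSS restricted / RSS full),
    where the full model adds y_{t-1} to a regression of x_t on x_{t-1}."""
    def centre(v):
        m = sum(v) / len(v)
        return [u - m for u in v]
    xt, x1, y1 = centre(x[1:]), centre(x[:-1]), centre(y[:-1])
    sxx = sum(a * a for a in x1)
    syy = sum(b * b for b in y1)
    sxy = sum(a * b for a, b in zip(x1, y1))
    stx = sum(t * a for t, a in zip(xt, x1))
    sty = sum(t * b for t, b in zip(xt, y1))
    # restricted model: x_t ~ x_{t-1}
    b_r = stx / sxx
    rss_r = sum((t - b_r * a) ** 2 for t, a in zip(xt, x1))
    # full model: x_t ~ x_{t-1} + y_{t-1}, solved via 2x2 normal equations
    det = sxx * syy - sxy * sxy
    b1 = (stx * syy - sty * sxy) / det
    b2 = (sty * sxx - stx * sxy) / det
    rss_f = sum((t - b1 * a - b2 * c) ** 2
                for t, a, c in zip(xt, x1, y1))
    return math.log(rss_r / rss_f)
```

    Because the restricted model is nested in the full one, the measure is non-negative, and it is large when lagged y substantially improves the prediction of x.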

  10. Impact of Business Cycles on US Suicide Rates, 1928–2007

    PubMed Central

    Florence, Curtis S.; Quispe-Agnoli, Myriam; Ouyang, Lijing; Crosby, Alexander E.

    2011-01-01

    Objectives. We examined the associations of overall and age-specific suicide rates with business cycles from 1928 to 2007 in the United States. Methods. We conducted a graphical analysis of changes in suicide rates during business cycles, used nonparametric analyses to test associations between business cycles and suicide rates, and calculated correlations between the national unemployment rate and suicide rates. Results. Graphical analyses showed that the overall suicide rate generally rose during recessions and fell during expansions. Age-specific suicide rates responded differently to recessions and expansions. Nonparametric tests indicated that the overall suicide rate and the suicide rates of the groups aged 25 to 34 years, 35 to 44 years, 45 to 54 years, and 55 to 64 years rose during contractions and fell during expansions. Suicide rates of the groups aged 15 to 24 years, 65 to 74 years, and 75 years and older did not exhibit this behavior. Correlation results were concordant with all nonparametric results except for the group aged 65 to 74 years. Conclusions. Business cycles may affect suicide rates, although different age groups responded differently. Our findings suggest that public health responses are a necessary component of suicide prevention during recessions. PMID:21493938

  11. Nonparametric Analyses of Log-Periodic Precursors to Financial Crashes

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    We apply two nonparametric methods to further test the hypothesis that log-periodicity characterizes the detrended price trajectory of large financial indices prior to financial crashes or strong corrections. The term "parametric" refers here to the use of the log-periodic power law formula to fit the data; in contrast, "nonparametric" refers to the use of general tools such as Fourier transform, and in the present case the Hilbert transform and the so-called (H, q)-analysis. The analysis using the (H, q)-derivative is applied to seven time series ending with the October 1987 crash, the October 1997 correction and the April 2000 crash of the Dow Jones Industrial Average (DJIA), the Standard & Poor 500 and Nasdaq indices. The Hilbert transform is applied to two detrended price time series in terms of the ln(tc-t) variable, where tc is the time of the crash. Taking all results together, we find strong evidence for a universal fundamental log-frequency f=1.02±0.05 corresponding to the scaling ratio λ=2.67±0.12. These values are in very good agreement with those obtained in earlier works with different parametric techniques. This note is extracted from a long unpublished report with 58 figures available at , which extensively describes the evidence we have accumulated on these seven time series, in particular by presenting all relevant details so that the reader can judge for himself or herself the validity and robustness of the results.

  12. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  13. An investigation on the determinants of carbon emissions for OECD countries: empirical evidence from panel models robust to heterogeneity and cross-sectional dependence.

    PubMed

    Dogan, Eyup; Seker, Fahri

    2016-07-01

    This empirical study analyzes the impacts of real income, energy consumption, financial development and trade openness on CO2 emissions for the OECD countries in the Environmental Kuznets Curve (EKC) model by using panel econometric approaches that consider issues of heterogeneity and cross-sectional dependence. Results from the Pesaran CD test, the Pesaran-Yamagata's homogeneity test, the CADF and the CIPS unit root tests, the LM bootstrap cointegration test, the DSUR estimator, and the Emirmahmutoglu-Kose Granger causality test indicate that (i) the panel time-series data are heterogeneous and cross-sectionally dependent; (ii) CO2 emissions, real income, the quadratic income, energy consumption, financial development and openness are integrated of order one; (iii) the analyzed data are cointegrated; (iv) the EKC hypothesis is validated for the OECD countries; (v) increases in openness and financial development mitigate the level of emissions whereas energy consumption contributes to carbon emissions; (vi) a variety of Granger causal relationship is detected among the analyzed variables; and (vii) empirical results and policy recommendations are accurate and efficient since panel econometric models used in this study account for heterogeneity and cross-sectional dependence in their estimation procedures.

  14. Long-memory and the sea level-temperature relationship: a fractional cointegration approach.

    PubMed

    Ventosa-Santaulària, Daniel; Heres, David R; Martínez-Hernández, L Catalina

    2014-01-01

    Through thermal expansion of oceans and melting of land-based ice, global warming is very likely contributing to the sea level rise observed during the 20th century. The amount by which further increases in global average temperature could affect sea level is only known with large uncertainties due to the limited capacity of physics-based models to predict sea levels from global surface temperatures. Semi-empirical approaches have been implemented to estimate the statistical relationship between these two variables, providing an alternative basis for assessing potentially disruptive impacts on coastal communities and ecosystems. However, only a few of these semi-empirical applications have addressed the spurious inference that is likely to be drawn when one nonstationary process is regressed on another. Furthermore, it has been shown that spurious effects are not eliminated by stationary processes when these possess strong long memory. Our results indicate that both global temperature and sea level indeed present the characteristics of long memory processes. Nevertheless, we find that these variables are fractionally cointegrated when sea-ice extent is incorporated as an instrumental variable for temperature, which in our estimations has a statistically significant positive impact on global sea level.
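    Long memory of order d is conventionally handled through the binomial expansion of the fractional difference operator (1 - B)^d, whose weights obey w_0 = 1 and w_k = w_{k-1} * (k - 1 - d) / k. A stdlib sketch of the filter, truncated at the sample start (illustrative only; the paper's fractional-cointegration estimators build on this operator):

```python
def frac_diff_weights(d, n):
    """First n coefficients of (1 - B)^d via the standard recursion."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(series, d):
    """Apply the fractional difference filter, truncated at the sample start."""
    w = frac_diff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]
```

    For d = 1 the weights collapse to ordinary first differencing (1, -1, 0, ...), while fractional 0 < d < 1 gives slowly decaying weights, which is exactly the long-memory behaviour attributed to temperature and sea level above.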

  15. Bayesian dynamic mediation analysis.

    PubMed

    Huang, Jing; Yuan, Ying

    2017-12-01

    Most existing methods for mediation analysis assume that mediation is a stationary, time-invariant process, which overlooks the inherently dynamic nature of many human psychological processes and behavioral activities. In this article, we consider mediation as a dynamic process that continuously changes over time. We propose Bayesian multilevel time-varying coefficient models to describe and estimate such dynamic mediation effects. By taking the nonparametric penalized spline approach, the proposed method is flexible and able to accommodate any shape of the relationship between time and mediation effects. Simulation studies show that the proposed method works well and faithfully reflects the true nature of the mediation process. By modeling mediation effect nonparametrically as a continuous function of time, our method provides a valuable tool to help researchers obtain a more complete understanding of the dynamic nature of the mediation process underlying psychological and behavioral phenomena. We also briefly discuss an alternative approach of using dynamic autoregressive mediation model to estimate the dynamic mediation effect. The computer code is provided to implement the proposed Bayesian dynamic mediation analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…

  17. Randomization Procedures Applied to Analysis of Ballistic Data

    DTIC Science & Technology

    1991-06-01

    Report keywords: data analysis; computationally intensive statistics; randomization tests; permutation tests; nonparametric statistics. Excerpt: "Any reasonable statistical procedure would fail to support the notion of improvement of dynamic over standard indexing based on this data." (Technical Report BRL-TR-3245, AD-A238 389: Randomization Procedures Applied to Analysis of Ballistic Data, Malcolm S. Taylor and Barry A. Bodt, June 1991.)

  18. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices

    PubMed Central

    Ye, Xin; Pendyala, Ram M.; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152

  19. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.

    PubMed

    Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.

  20. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y/r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0/r).
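    In this framework the density estimator takes the familiar form D = n * f(0) / (2L), with f(0) estimated nonparametrically from the right angle distances. A deliberately crude sketch that estimates f(0) from the first histogram bin (the bin width and names are illustrative choices, not the authors' estimator):

```python
def line_transect_density(distances, line_length, bin_width):
    """D = n * f(0) / (2L), with f(0) estimated by the fraction of
    right angle distances falling in the first histogram bin."""
    n = len(distances)
    in_first = sum(1 for y in distances if y < bin_width)
    f0 = in_first / (n * bin_width)  # histogram density at zero distance
    return n * f0 / (2.0 * line_length)
```

    Any consistent nonparametric estimate of f(0), kernel, spline, or Fourier based, can be substituted for the first-bin histogram without changing the identity.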

  1. The Efficiency Change of Italian Public Universities in the New Millennium: A Non-Parametric Analysis

    ERIC Educational Resources Information Center

    Guccio, Calogero; Martorana, Marco Ferdinando; Mazza, Isidoro

    2017-01-01

    The paper assesses the evolution of efficiency of Italian public universities for the period 2000-2010. It aims at investigating whether their levels of efficiency showed signs of convergence, and if the well-known disparity between northern and southern regions decreased. For this purpose, we use a refinement of data envelopment analysis, namely…

  2. LSAT Dimensionality Analysis for the December 1991, June 1992, and October 1992 Administrations. Statistical Report. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Douglas, Jeff; Kim, Hae-Rim; Roussos, Louis; Stout, William; Zhang, Jinming

    An extensive nonparametric dimensionality analysis of latent structure was conducted on three forms of the Law School Admission Test (LSAT) (December 1991, June 1992, and October 1992) using the DIMTEST model in confirmatory analyses and using DIMTEST, FAC, DETECT, HCA, PROX, and a genetic algorithm in exploratory analyses. Results indicate that…

  3. Analysis of Parasite and Other Skewed Counts

    PubMed Central

    Alexander, Neal

    2012-01-01

    Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten-year period of Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in identifying measures of location, in particular the Williams and geometric mean. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing 1) non-parametric methods, while noting that they are not simply comparisons of medians, and 2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods, such as the bootstrap, with potential for greater use are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
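    The distinction the review draws between measures of location is easy to state in code: the Williams mean is the geometric mean of the counts plus one, minus one, so it remains defined when zero counts are present, and by the AM-GM inequality it never exceeds the arithmetic mean (a stdlib sketch, not the paper's code):

```python
import math

def williams_mean(counts):
    """Williams mean: geometric mean of (x + 1), minus 1; defined for zeros."""
    log_sum = sum(math.log(x + 1.0) for x in counts)
    return math.exp(log_sum / len(counts)) - 1.0

def arithmetic_mean(counts):
    return sum(counts) / len(counts)
```

    For all-zero counts the Williams mean is 0, where a plain geometric mean would be undefined; for skewed counts it sits below the arithmetic mean, which is one source of the confusion the review documents.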

  4. LANDSCAPE STRUCTURE AND ESTUARINE CONDITION IN THE MID-ATLANTIC REGION OF THE UNITED STATES: I. DEVELOPING QUANTITATIVE RELATIONSHIPS

    EPA Science Inventory

    In a previously published study, quantitative relationships were developed between landscape metrics and sediment contamination for 25 small estuarine systems within Chesapeake Bay. Nonparametric statistical analysis (rank transformation) was used to develop an empirical relation...

  5. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  6. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  7. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, in which the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple-imputation approaches, using either a uniform distribution or the nonparametric maximum likelihood estimate (NPMLE). Clin Cancer Res; 22(23); 5629-35. ©2016 American Association for Cancer Research (AACR).
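
    The three single-imputation rules of the sensitivity analysis can be stated compactly; a sketch (the SAS and R procedures of the paper are not reproduced here, and the function and argument names are illustrative):

```python
def impute_progression(last_negative, first_positive, rule="midpoint"):
    """Assign one progression time inside the censoring interval
    (last_negative, first_positive], per the sensitivity-analysis rules."""
    if rule == "midpoint":
        return (last_negative + first_positive) / 2.0
    if rule == "upper":   # standard analysis: first radiologic exam showing progression
        return first_positive
    if rule == "lower":
        return last_negative
    raise ValueError(f"unknown rule: {rule}")

# e.g. last progression-free exam at month 6, progression detected at month 9
print([impute_progression(6.0, 9.0, r) for r in ("lower", "midpoint", "upper")])
```

    Comparing survival curves built from the three imputed datasets indicates how sensitive the estimate is to where the progression time falls inside the interval.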

  8. Bioconductor Workflow for Microbiome Data Analysis: from raw reads to community analyses

    PubMed Central

    Callahan, Ben J.; Sankaran, Kris; Fukuyama, Julia A.; McMurdie, Paul J.; Holmes, Susan P.

    2016-01-01

    High-throughput sequencing of PCR-amplified taxonomic markers (like the 16S rRNA gene) has enabled a new level of analysis of complex bacterial communities known as microbiomes. Many tools exist to quantify and compare abundance levels or OTU composition of communities in different conditions. The sequencing reads have to be denoised and assigned to the closest taxa from a reference database. Common approaches use a notion of 97% similarity and normalize the data by subsampling to equalize library sizes. In this paper, we show that statistical models allow more accurate abundance estimates. By providing a complete workflow in R, we enable the user to do sophisticated downstream statistical analyses, whether parametric or nonparametric. We provide examples of using the R packages dada2, phyloseq, DESeq2, ggplot2 and vegan to filter, visualize and test microbiome data. We also provide examples of supervised analyses using random forests and nonparametric testing using community networks and the ggnetwork package. PMID:27508062

  9. Smoothing spline ANOVA frailty model for recurrent event data.

    PubMed

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data. © 2011, The International Biometric Society.

  10. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    PubMed

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.

  11. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    ERIC Educational Resources Information Center

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  12. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo-vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is taken to be true when this probability is maximal. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF), which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable.
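
    The Parzen-window estimator at the heart of the method is compact; a one-dimensional sketch with a Gaussian kernel (the paper's feature attributes are multidimensional, and the names here are illustrative):

```python
import math

def parzen_density(x, samples, h=0.5):
    """Parzen-window estimate of a PDF at x: average of Gaussian kernels of
    bandwidth h centred on the training samples."""
    norm = h * math.sqrt(2.0 * math.pi) * len(samples)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm

# density of a toy attribute-difference sample, evaluated at 0
print(parzen_density(0.0, [-0.2, -0.1, 0.0, 0.1, 0.3], h=0.25))
```

    In the matching setting, such a PDF estimated from attribute differences of known-correct pairs yields the matching probability for a candidate pair.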

  13. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep

    PubMed Central

    Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.

    2009-01-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925

  14. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    NASA Astrophysics Data System (ADS)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if rejected, suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  15. Adjacent-Categories Mokken Models for Rater-Mediated Assessments

    ERIC Educational Resources Information Center

    Wind, Stefanie A.

    2017-01-01

    Molenaar extended Mokken's original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken's original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are…

  16. Treatment of Selective Mutism: A Best-Evidence Synthesis.

    ERIC Educational Resources Information Center

    Stone, Beth Pionek; Kratochwill, Thomas R.; Sladezcek, Ingrid; Serlin, Ronald C.

    2002-01-01

    Presents systematic analysis of the major treatment approaches used for selective mutism. Based on nonparametric statistical tests of effect sizes, major findings include the following: treatment of selective mutism is more effective than no treatment; behaviorally oriented treatment approaches are more effective than no treatment; and no…

  17. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).

  18. pHTβ-promoted mobilization of non-conjugative resistance plasmids from Enterococcus faecium to Enterococcus faecalis.

    PubMed

    Di Sante, Laura; Morroni, Gianluca; Brenciani, Andrea; Vignaroli, Carla; Antonelli, Alberto; D'Andrea, Marco Maria; Di Cesare, Andrea; Giovanetti, Eleonora; Varaldo, Pietro E; Rossolini, Gian Maria; Biavasco, Francesca

    2017-09-01

    To analyse the recombination events associated with conjugal mobilization of two multiresistance plasmids, pRUM17i48 and pLAG (formerly named pDO1-like), from Enterococcus faecium 17i48 to Enterococcus faecalis JH2-2. The plasmids from two E. faecalis transconjugants (JH-4T, tetracycline resistant, and JH-8E, erythromycin resistant) and from the E. faecium donor (also carrying a pHTβ-like conjugative plasmid, named pHTβ17i48) were investigated by several methods, including PCR mapping and sequencing, S1-PFGE followed by Southern blotting and hybridization, and WGS. Two locations of repApHTβ were detected in both transconjugants, one on a ∼50 kb plasmid (as in the donor) and the other on plasmids of larger sizes. In JH-4T, WGS disclosed an 88.6 kb plasmid resulting from the recombination of pHTβ17i48 (∼50 kb) and a new plasmid, named pLAG (35.3 kb), carrying the tet(M), tet(L), lsa(E), lnu(B), spw and aadE resistance genes. In JH-8E, a 75 kb plasmid resulting from the recombination of pHTβ17i48 and pRUM17i48 was observed. In both cases, the cointegrates were apparently derived from replicative transposition of an IS1216 present in each of the multiresistance plasmids into pHTβ17i48. The cointegrates could resolve to yield the multiresistance plasmids and a pHTβ17i48 derivative carrying an IS1216 (unlike the pHTβ17i48 of the donor). Our results completed the characterization of the multiresistance plasmids carried by the E. faecium 17i48, confirming the role of pHT plasmids in the mobilization of non-conjugative antibiotic resistance elements among enterococci. Results also revealed that mobilization to E. faecalis was associated with the generation of cointegrate plasmids promoted by IS1216-mediated transposition. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data

    PubMed Central

    Jiang, Fei; Haneuse, Sebastien

    2016-01-01

    In the analysis of semi-competing risks data interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ2. When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators is derived and small-sample operating characteristics evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. 
Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147

  20. Nationwide disturbance attribution on NASA’s earth exchange: experiences in a high-end computing environment

    Treesearch

    J. Chris Toney; Karen G. Schleeweis; Jennifer Dungan; Andrew Michaelis; Todd Schroeder; Gretchen G. Moisen

    2015-01-01

    The North American Forest Dynamics (NAFD) project’s Attribution Team is completing nationwide processing of historic Landsat data to provide a comprehensive annual, wall-to-wall analysis of US disturbance history, with attribution, over the last 25+ years. Per-pixel time series analysis based on a new nonparametric curve fitting algorithm yields several metrics useful...

  1. Prevention of Substance Use among Adolescents through Social and Emotional Training in School: A Latent-Class Analysis of a Five-Year Intervention in Sweden

    ERIC Educational Resources Information Center

    Kimber, Birgitta; Sandell, Rolf

    2009-01-01

    The study considers the impact of a program for social and emotional learning in Swedish schools on use of drugs, volatile substances, alcohol and tobacco. The program was evaluated in an effectiveness study. Intervention students were compared longitudinally with non-intervention students using nonparametric latent class analysis to identify…

  2. The Importance of Practice in the Development of Statistics.

    DTIC Science & Technology

    1983-01-01

    NRC Technical Summary Report #2471: The Importance of Practice in the Development of Statistics. Topics covered include component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy.

  3. Economic hardship and suicide mortality in Finland, 1875-2010.

    PubMed

    Korhonen, Marko; Puhakka, Mikko; Viren, Matti

    2016-03-01

    We investigate the determinants of suicide in Finland using annual data for consumption and suicides from 1860 to 2010. Instead of using some ad hoc measures of cyclical movements of the economy, we build our analysis on a more solid economic theory. A key feature is the habit persistence in preferences, which provides a way to measure individual well-being and predict suicide. We estimate time series of habit levels and develop an indicator (the hardship index) to describe the economic hardship of consumers. The higher the level of the index, the worse off consumers are. As a rational response to such a bad situation, some consumers might commit suicide. We employ the autoregressive distributed lags cointegration method and find that our index works well in explaining the long-term behavior of people committing suicide in Finland.

  4. A Hybrid Index for Characterizing Drought Based on a Nonparametric Kernel Estimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shengzhi; Huang, Qiang; Leng, Guoyong

    This study develops a nonparametric multivariate drought index, namely, the Nonparametric Multivariate Standardized Drought Index (NMSDI), by considering the variations of both precipitation and streamflow. Building upon previous efforts in constructing nonparametric multivariate drought indices, we use the nonparametric kernel estimator to derive the joint distribution of precipitation and streamflow, thus providing additional insights in drought index development. The proposed NMSDI is applied in the Wei River Basin (WRB), based on which the drought evolution characteristics are investigated. Results indicate: (1) generally, NMSDI captures the drought onset similar to the Standardized Precipitation Index (SPI) and drought termination and persistence similar to the Standardized Streamflow Index (SSFI); the drought events identified by NMSDI match well with historical drought records in the WRB, and the performances are also consistent with those of an existing Multivariate Standardized Drought Index (MSDI) at various timescales, confirming the validity of the newly constructed NMSDI in drought detection; (2) an increasing risk of drought has been detected for the past decades, and will persist to a certain extent in the future in most areas of the WRB; (3) the identified change points of annual NMSDI are mainly concentrated in the early 1970s and middle 1990s, coincident with extensive water use and soil-reservation practices. This study highlights the nonparametric multivariate drought index, which can be used for drought detection and prediction efficiently and comprehensively.

  5. Why preferring parametric forecasting to nonparametric methods?

    PubMed

    Jabot, Franck

    2015-05-07

    A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise because of two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It will be argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
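
    The theta-logistic model used for illustration has a compact discrete-time form; a sketch under assumed parameter values (not those of Perretti and collaborators):

```python
import math

def theta_logistic_step(n, r=1.8, k=100.0, theta=1.0):
    """One deterministic step of the theta-logistic map:
    N_{t+1} = N_t * exp(r * (1 - (N_t / K)**theta))."""
    return n * math.exp(r * (1.0 - (n / k) ** theta))

# iterate a short trajectory from a small initial population
traj = [10.0]
for _ in range(20):
    traj.append(theta_logistic_step(traj[-1]))
print(traj[:5])
```

    Parametric forecasting fits (r, K, theta) to observed data; the nonparametric alternative forecasts directly from delay-embedded observations without naming a model.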

  6. DIF Trees: Using Classification Trees to Detect Differential Item Functioning

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qiu

    2010-01-01

    A nonparametric tree classification procedure is used to detect differential item functioning for items that are dichotomously scored. Classification trees are shown to be an alternative procedure to detect differential item functioning other than the use of traditional Mantel-Haenszel and logistic regression analysis. A nonparametric…

  7. Nonparametric Trajectory Analysis of CMAPS Data

    EPA Science Inventory

    As part of the Cleveland Multiple Air Pollutant Study (CMAPS), 30-minute average concentrations of the elemental composition of PM2.5 were made at two sites during the months of August 2009 and February 2010. The elements measured were: Al, As, Ba, Be, Ca, Cd, Ce, Co, Cr, Cs, Cu...

  8. On the Bias-Amplifying Effect of Near Instruments in Observational Studies

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Kim, Yongnam

    2014-01-01

    In contrast to randomized experiments, the estimation of unbiased treatment effects from observational data requires an analysis that conditions on all confounding covariates. Conditioning on covariates can be done via standard parametric regression techniques or nonparametric matching like propensity score (PS) matching. The regression or…

  9. Status and Pre-Employment Training Requirements of Welding Tradesmen, Technicians, and Technologists.

    ERIC Educational Resources Information Center

    Morgan, Daryle Whitney

    To assess the status of welding in various manufacturing industries and to ascertain the occupational preparation needed for welding tradesmen, technicians, and technologists, completed questionnaires were obtained from 138 selected industrial specialists. The hypotheses were tested by Freidman's two-way nonparametric analysis of variance and by…

  10. A nonparametric clustering technique which estimates the number of clusters

    NASA Technical Reports Server (NTRS)

    Ramey, D. B.

    1983-01-01

    In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.

  11. A statistical approach to bioclimatic trend detection in the airborne pollen records of Catalonia (NE Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-Llamazares, Álvaro; Belmonte, Jordina; Delgado, Rosario; De Linares, Concepción

    2014-04-01

    Airborne pollen records are a suitable indicator for the study of climate change. The present work focuses on the role of annual pollen indices for the detection of bioclimatic trends through the analysis of the aerobiological spectra of 11 taxa of great biogeographical relevance in Catalonia over an 18-year period (1994-2011), by means of different parametric and non-parametric statistical methods. Among others, two non-parametric rank-based statistical tests were performed for detecting monotonic trends in time series data of the selected airborne pollen types, and we have observed that they have similar power in detecting trends. Except for those cases in which the pollen data can be well modeled by a normal distribution, it is better to apply non-parametric statistical methods to aerobiological studies. Our results provide a reliable representation of the pollen trends in the region and suggest that greater pollen quantities are being liberated to the atmosphere in recent years, especially by Mediterranean taxa such as Pinus, Total Quercus and Evergreen Quercus, although the trends may differ geographically. Longer aerobiological monitoring periods are required to corroborate these results and survey the increasing levels of certain pollen types that could have an impact on public health.
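
    The abstract does not name the two rank-based trend tests; as one standard member of that family, the Mann-Kendall S statistic can be sketched in a few lines (the pollen index values below are invented):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: concordant minus discordant pairs over time.
    Large positive S suggests an increasing monotonic trend."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)   # sign of the pairwise difference
    return s

annual_pollen_index = [120, 135, 128, 150, 161, 158, 170]   # illustrative values
print(mann_kendall_s(annual_pollen_index))
```

    Because only signs of pairwise differences enter S, the test is insensitive to the skewness that makes parametric trend tests unreliable for pollen data.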

  12. A fully nonparametric estimator of the marginal survival function based on case–control clustered age-at-onset data

    PubMed Central

    Gorfine, Malka; Bordo, Nadia; Hsu, Li

    2017-01-01

    Summary Consider a popular case–control family study where individuals with a disease under study (case probands) and individuals who do not have the disease (control probands) are randomly sampled from a well-defined population. Possibly right-censored age at onset and disease status are observed for both probands and their relatives. For example, case probands are men diagnosed with prostate cancer, control probands are men free of prostate cancer, and the prostate cancer history of the fathers of the probands is also collected. Inherited genetic susceptibility, shared environment, and common behavior lead to correlation among the outcomes within a family. In this article, a novel nonparametric estimator of the marginal survival function is provided. The estimator is defined in the presence of intra-cluster dependence, and is based on consistent smoothed kernel estimators of conditional survival functions. By simulation, it is shown that the proposed estimator performs very well in terms of bias. The utility of the estimator is illustrated by the analysis of case–control family data of early onset prostate cancer. To our knowledge, this is the first article that provides a fully nonparametric marginal survival estimator based on case–control clustered age-at-onset data. PMID:27436674

  13. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
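
    The two SE estimators being compared can be sketched side by side; a minimal stdlib version (the cost data below are invented, not from the trial-based CEA):

```python
import math
import random
import statistics

def clt_se(values):
    """Central-limit-theorem standard error of the mean: s / sqrt(n)."""
    return statistics.stdev(values) / math.sqrt(len(values))

def bootstrap_se(values, n_boot=2000, seed=1):
    """Non-parametric bootstrap standard error of the mean: the SD of the
    means of resamples drawn with replacement from the data."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    ]
    return statistics.stdev(means)

costs = [120, 95, 180, 110, 4500, 130, 160, 105, 210, 99]   # highly skewed costs
print(clt_se(costs), bootstrap_se(costs))
```

    Consistent with the paper's simulations, the two SEs agree closely even for skewed data once the sample is moderate, which is why the simpler CLT route is recommended.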

  14. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
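
    Of the regressors listed, the Nadaraya-Watson is the simplest; a one-dimensional Gaussian-kernel sketch (the paper's QSAR descriptors are multivariate, and the data here are invented):

```python
import math

def nadaraya_watson(x, xs, ys, h=1.0):
    """Nadaraya-Watson estimate at x: kernel-weighted average of the responses."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# descriptor -> activity pairs (illustrative, not the dopamine beta-hydroxylase set)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 1.8, 3.2, 3.9]
print(nadaraya_watson(2.5, xs, ys, h=0.5))
```

    The bandwidth h plays the role the smoothing parameter plays for splines: small h tracks the training points, large h shrinks toward the global mean.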

  15. A nonparametric spatial scan statistic for continuous data.

    PubMed

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compared the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
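
    The core of a rank-based scan is the Wilcoxon rank-sum statistic for the observations inside a candidate window; a sketch assuming no tied values (the naming is mine):

```python
def rank_sum_w(inside, outside):
    """Wilcoxon rank-sum statistic W: sum of the (1-based) ranks of the
    'inside the scanning window' observations within the pooled sample.
    Assumes no tied values."""
    pooled = sorted(inside + outside)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    return sum(rank[v] for v in inside)

# a window whose values sit entirely above the background gets the maximal W
print(rank_sum_w([8.1, 9.4], [1.2, 2.0, 3.3]))   # ranks 4 + 5 = 9
```

    A scan statistic maximizes such a window score over candidate windows and calibrates significance by Monte Carlo permutation, which is what makes the rank-based version distribution-free.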

  16. Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study

    PubMed Central

    Marmarelis, Vasilis Z.; Berger, Theodore W.

    2009-01-01

    Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to synergistically use the two approaches, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609

  17. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.

  18. Estimating survival of radio-tagged birds

    USGS Publications Warehouse

    Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.

    1993-01-01

    Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
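    The Kaplan-Meier product-limit estimate itself is straightforward to compute. A minimal sketch (illustrative, not tied to any telemetry package), where a censored observation corresponds, for example, to loss of radio contact:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. events[i] is True for an observed
    death and False for a censored observation (e.g. radio contact lost).
    Censorings tied with deaths are treated as occurring just after them."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s = 1.0
    curve = []                         # (time, S(t)) at each death time
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            removed += 1
            if events[order[i]]:
                deaths += 1
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed
    return curve
```

    For example, five birds with failure/censoring times [2, 3, 3, 5, 8] and event flags [True, True, False, True, False] yield survival steps 0.8, 0.6 and 0.3 at times 2, 3 and 5.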

  19. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
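    Isotonic dose-response estimation rests on the pool-adjacent-violators algorithm (PAVA). The sketch below illustrates the general idea only, not the Bhattacharya-Kong estimator: PAVA produces a monotone fit to observed response rates, and a hypothetical helper then interpolates the dose at which the fitted risk exceeds the fitted background rate by the benchmark response (here taken as added risk, purely for illustration).

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares monotone
    (nondecreasing) fit to observed response proportions y."""
    if w is None:
        w = [1.0] * len(y)
    blocks = []                        # each block: [mean, weight, n_points]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()  # merge adjacent violating blocks
            m1, w1, c1 = blocks.pop()
            tw = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / tw, tw, c1 + c2])
    fit = []
    for m, _, c in blocks:             # expand blocks back to one value per dose
        fit.extend([m] * c)
    return fit

def bmd_interpolate(doses, fit, bmr):
    """Hypothetical helper: smallest dose where the monotone fit exceeds the
    fitted background rate by `bmr` (added risk), via linear interpolation."""
    target = fit[0] + bmr
    for i in range(1, len(fit)):
        if fit[i] >= target:
            d0, d1, f0, f1 = doses[i - 1], doses[i], fit[i - 1], fit[i]
            if f1 == f0:
                return d0
            return d0 + (target - f0) * (d1 - d0) / (f1 - f0)
    return None                        # BMR not reached within tested doses
```

    The bootstrap confidence limits described in the abstract would then resample the dose groups and repeat this calculation on each resample.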

  20. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine and to whose values the method is very sensitive. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. We therefore introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of parameters and the fact that it takes into account the stochastic structure of the time series. As shown with simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method and the Holt-Winters method.

  1. International comparisons of the technical efficiency of the hospital sector: panel data analysis of OECD countries using parametric and non-parametric approaches.

    PubMed

    Varabyova, Yauheniya; Schreyögg, Jonas

    2013-09-01

    There is a growing interest in the cross-country comparisons of the performance of national health care systems. The present work provides a comparison of the technical efficiency of the hospital sector using unbalanced panel data from OECD countries over the period 2000-2009. The estimation of the technical efficiency of the hospital sector is performed using nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Internal and external validity of findings is assessed by estimating the Spearman rank correlations between the results obtained in different model specifications. The panel-data analyses using two-step DEA and one-stage SFA show that countries with higher health care expenditure per capita tend to have a more technically efficient hospital sector. Whether the expenditure is financed through private or public sources is not related to the technical efficiency of the hospital sector. On the other hand, the hospital sector in countries with higher income inequality and longer average hospital length of stay is less technically efficient. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  2. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    PubMed

    Xu, Maoqi; Chen, Liang

    2018-01-01

    The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Factors Affecting Regional Per-Capita Carbon Emissions in China Based on an LMDI Factor Decomposition Model

    PubMed Central

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model–panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while by 2009 the pecking order had changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, while energy efficiency had a much greater effect than energy structure in restraining growth in per-capita carbon emissions. (3) Based on the decomposition, the panel co-integration test of the factors that affected per-capita carbon emissions showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions. In addition, Central China was ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity. PMID:24353753

  4. Factors affecting regional per-capita carbon emissions in China based on an LMDI factor decomposition model.

    PubMed

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model-panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while by 2009 the pecking order had changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, while energy efficiency had a much greater effect than energy structure in restraining growth in per-capita carbon emissions. (3) Based on the decomposition, the panel co-integration test of the factors that affected per-capita carbon emissions showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions. In addition, Central China was ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity.

  5. An 'instant gene bank' method for gene cloning by mutant complementation.

    PubMed

    Gems, D; Aleksenko, A; Belenky, L; Robertson, S; Ramsden, M; Vinetski, Y; Clutterbuck, A J

    1994-02-01

    We describe a new method of gene cloning by complementation of mutant alleles which obviates the need for construction of a gene library in a plasmid vector in vitro and its amplification in Escherichia coli. The method involves simultaneous transformation of mutant strains of the fungus Aspergillus nidulans with (i) fragmented chromosomal DNA from a donor species and (ii) DNA of a plasmid without a selectable marker gene, but with a fungal origin of DNA replication ('helper plasmid'). Transformant colonies appear as the result of the joining of chromosomal DNA fragments carrying the wild-type copies of the mutant allele with the helper plasmid. Joining may occur either by ligation (if the helper plasmid is in linear form) or recombination (if it is cccDNA). This event occurs with high efficiency in vivo, and generates an autonomously replicating plasmid cointegrate. Transformants containing Penicillium chrysogenum genomic DNA complementing A. nidulans niaD, nirA and argB mutations have been obtained. While some of these cointegrates were evidently rearranged or consisted only of unaltered replicating plasmid, in other cases plasmids could be recovered into E. coli and were subsequently shown to contain the selected gene. The utility of this "instant gene bank" technique is demonstrated here by the molecular cloning of the P. canescens trpC gene.

  6. Nonparametric methods in actigraphy: An update

    PubMed Central

    Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.

    2014-01-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), used to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by modifying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) for each time interval. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, whereas no difference was detected using the IV60 variable. Rhythmic synchronization of activity and rest was significantly higher in the young than in adults with Parkinson's when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep–wake cycle fragmentation and synchronization. PMID:26483921
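    The intradaily variability (IV) statistic has a standard closed form: the ratio of the mean square of successive differences to the overall variance of the activity series. A short sketch of that conventional definition (the IVm/IV60 variants discussed above then recompute it over different time intervals):

```python
def intradaily_variability(x):
    """Intradaily variability (IV): n * sum of squared successive differences
    divided by (n-1) * sum of squared deviations from the mean. Higher values
    indicate a more fragmented rest-activity rhythm."""
    n = len(x)
    mean = sum(x) / n
    num = n * sum((x[i] - x[i - 1]) ** 2 for i in range(1, n))
    den = (n - 1) * sum((xi - mean) ** 2 for xi in x)
    return num / den
```

    A rapidly alternating (fragmented) series yields a high IV, while a smooth monotone ramp yields a low one.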

  7. Supratentorial lesions contribute to trigeminal neuralgia in multiple sclerosis.

    PubMed

    Fröhlich, Kilian; Winder, Klemens; Linker, Ralf A; Engelhorn, Tobias; Dörfler, Arnd; Lee, De-Hyung; Hilz, Max J; Schwab, Stefan; Seifert, Frank

    2018-06-01

    Background: It has been proposed that multiple sclerosis lesions afflicting the pontine trigeminal afferents contribute to trigeminal neuralgia in multiple sclerosis. So far, there are no imaging studies that have evaluated interactions between supratentorial lesions and trigeminal neuralgia in multiple sclerosis patients. Methods: We conducted a retrospective study and sought multiple sclerosis patients with trigeminal neuralgia and controls in a local database. Multiple sclerosis lesions were manually outlined and transformed into stereotaxic space. We determined the lesion overlap and performed a voxel-wise subtraction analysis. Secondly, we conducted a voxel-wise non-parametric analysis using the Liebermeister test. Results: From 12,210 multiple sclerosis patient records screened, we identified 41 patients with trigeminal neuralgia. The voxel-wise subtraction analysis yielded associations between trigeminal neuralgia and multiple sclerosis lesions in the pontine trigeminal afferents, as well as larger supratentorial lesion clusters in the contralateral insula and hippocampus. The non-parametric statistical analysis using the Liebermeister test yielded similar areas to be associated with multiple sclerosis-related trigeminal neuralgia. Conclusions: Our study confirms previous data on associations between multiple sclerosis-related trigeminal neuralgia and pontine lesions, and shows for the first time an association with lesions in the insular region, a region involved in pain processing and endogenous pain modulation.

  8. Autosomal Dominant Nonsyndromic Cleft Lip and Palate: Significant Evidence of Linkage at 18q21.1

    PubMed Central

    Beiraghi, Soraya ; Nath, Swapan K. ; Gaines, Matthew ; Mandhyan, Desh D. ; Hutchings, David ; Ratnamala, Uppala ; McElreavey, Ken ; Bartoloni, Lucia ; Antonarakis, Gregory S. ; Antonarakis, Stylianos E. ; Radhakrishna, Uppala 

    2007-01-01

    Nonsyndromic cleft lip with or without cleft palate (NSCL/P) is one of the most common congenital facial defects, with an incidence of 1 in 700–1,000 live births among individuals of European descent. Several linkage and association studies of NSCL/P have suggested numerous candidate genes and genomic regions. A genomewide linkage analysis of a large multigenerational family (UR410) with NSCL/P was performed using a single-nucleotide–polymorphism array. Nonparametric linkage (NPL) analysis provided significant evidence of linkage for marker rs728683 on chromosome 18q21.1 (NPL=43.33 and P=.000061; nonparametric LOD=3.97 and P=.00001). Parametric linkage analysis with a dominant mode of inheritance and reduced penetrance resulted in a maximum LOD score of 3.61 at position 47.4 Mb on chromosome 18q21.1. Haplotype analysis with informative crossovers defined a 5.7-Mb genomic region spanned by proximal marker rs1824683 (42,403,918 bp) and distal marker rs768206 (48,132,862 bp). Thus, a novel genomic region on 18q21.1 was identified that most likely harbors a high-risk variant for NSCL/P in this family; we propose to name this locus “OFC11” (orofacial cleft 11). PMID:17564975

  9. A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package

    ERIC Educational Resources Information Center

    Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.

    2013-01-01

    DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…

  10. The Effect of Sample Size on Parametric and Nonparametric Factor Analytical Methods

    ERIC Educational Resources Information Center

    Kalkan, Ömür Kaya; Kelecioglu, Hülya

    2016-01-01

    Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by polychoric or tetrachoric correlations in place of the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…

  11. Efficiency, Technology and Productivity Change in Australian Universities, 1998-2003

    ERIC Educational Resources Information Center

    Worthington, Andrew C.; Lee, Boon L.

    2008-01-01

    In this study, productivity growth in 35 Australian universities is investigated using non-parametric frontier techniques over the period 1998-2003. The five inputs included in the analysis are full-time equivalent academic and non-academic staff, non-labor expenditure and undergraduate and postgraduate student load while the six outputs are…

  12. Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery

    Treesearch

    Demetrios Gatziolis

    2006-01-01

    A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...

  13. The Efficiency of Higher Education Institutions in England Revisited: Comparing Alternative Measures

    ERIC Educational Resources Information Center

    Johnes, Geraint; Tone, Kaoru

    2017-01-01

    Data envelopment analysis (DEA) has often been used to evaluate efficiency in the context of higher education institutions. Yet there are numerous alternative non-parametric measures of efficiency available. This paper compares efficiency scores obtained for institutions of higher education in England, 2013-2014, using three different methods: the…

  14. Conditional Covariance-Based Subtest Selection for DIMTEST

    ERIC Educational Resources Information Center

    Froelich, Amy G.; Habing, Brian

    2008-01-01

    DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…

  15. Applying Multivariate Adaptive Splines to Identify Genes With Expressions Varying After Diagnosis in Microarray Experiments.

    PubMed

    Duan, Fenghai; Xu, Ye

    2017-01-01

    The aim was to analyze a microarray experiment to identify genes with expressions varying after the diagnosis of breast cancer. A total of 44 928 probe sets in an Affymetrix microarray data set publicly available on Gene Expression Omnibus from 249 patients with breast cancer were analyzed by nonparametric multivariate adaptive splines. The identified genes with turning points were then grouped by K-means clustering, and their network relationships were subsequently analyzed by Ingenuity Pathway Analysis. In total, 1640 probe sets (genes) were reliably identified as having turning points along the age at diagnosis in their expression profiling, of which 927 expressed lower after the turning points and 713 expressed higher after the turning points. K-means clustering placed them into 3 groups with turning points centering at 54, 62.5, and 72, respectively. The pathway analysis showed that the identified genes were actively involved in various cancer-related functions and networks. In this article, we applied the nonparametric multivariate adaptive splines method to publicly available gene expression data and successfully identified genes with expressions varying before and after breast cancer diagnosis.

  16. Investigation of the dynamic stress–strain response of compressible polymeric foam using a non-parametric analysis

    DOE PAGES

    Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; ...

    2016-01-25

    The dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at the specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e. the delayed equilibrium conditions can be quantified.

  17. Gini estimation under infinite variance

    NASA Astrophysics Data System (ADS)

    Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale

    2018-07-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α ∈(1 , 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
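    The conventional nonparametric (plug-in) Gini estimator discussed here is based on the mean absolute difference between all pairs of observations. An illustrative sketch, using the standard O(n log n) rearrangement over order statistics (not the authors' code):

```python
def gini(x):
    """Plug-in nonparametric Gini estimator:
    G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n,
    where x_(i) are the sorted values and i is the 1-based rank.
    Algebraically equivalent to the pairwise mean-absolute-difference form."""
    x = sorted(x)
    n = len(x)
    s = sum(x)
    weighted = sum((i + 1) * xi for i, xi in enumerate(x))
    return 2 * weighted / (n * s) - (n + 1) / n
```

    For a perfectly equal sample the estimator returns 0; for a sample where one observation holds everything it approaches 1. It is exactly this plug-in form that, per the abstract, underestimates the true Gini when the data are fat-tailed with infinite variance.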

  18. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
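    Because an intercept-only quantile regression reduces to an empirical quantile, the nonparametric TDI at level p can be illustrated as the p-th empirical quantile of the absolute paired differences. A minimal sketch assuming a plain inverse-CDF quantile (the paper's quantile-regression formulation is more general, e.g. allowing covariates):

```python
import math

def tdi_nonparametric(x, y, p=0.9):
    """Nonparametric total deviation index: the p-th empirical quantile of
    the absolute paired differences |x_i - y_i| (inverse-CDF convention)."""
    d = sorted(abs(a - b) for a, b in zip(x, y))
    n = len(d)
    k = max(0, math.ceil(p * n) - 1)   # 0-based index of the ceil(p*n)-th order statistic
    return d[k]
```

    Read the result as: p of the paired measurements from the two devices differ by no more than the returned value.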

  19. Statistical analysis of the electric energy production from photovoltaic conversion using mobile and fixed constructions

    NASA Astrophysics Data System (ADS)

    Bugała, Artur; Bednarek, Karol; Kasprzyk, Leszek; Tomczewski, Andrzej

    2017-10-01

    The paper presents the most representative characteristics, drawn from a three-year measurement period, of daily and monthly electricity production from photovoltaic conversion using modules installed in fixed and 2-axis tracking constructions. Results are presented for selected summer, autumn, spring and winter days. The analyzed measuring stand is located on the roof of the Faculty of Electrical Engineering building at Poznan University of Technology. Basic parameters of the statistical analysis, such as the mean value, standard deviation, skewness, kurtosis, median, range, and coefficient of variation, were used. It was found that the asymmetry factor can be useful in the analysis of daily electricity production from photovoltaic conversion. In order to determine the repeatability of monthly electricity production between the summer months, and between the summer and winter months, a non-parametric Mann-Whitney U test was used. In order to analyze the repeatability of daily peak hours, describing the largest hourly electricity production, a non-parametric Kruskal-Wallis test was applied as an extension of the Mann-Whitney U test. Based on the analysis of the electric energy distribution from the prepared monitoring system, it was found that traditional methods of forecasting electricity production from photovoltaic conversion, such as multiple regression models, should not be the preferred methods of analysis.

  20. Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses only measurements of the control signals and the aircraft states. It does not require a priori information about the aircraft model parameters, training or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.

  1. Nonparametric estimation of the multivariate survivor function: the multivariate Kaplan-Meier estimator.

    PubMed

    Prentice, Ross L; Zhao, Shanshan

    2018-01-01

    The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.

  2. Phylogenetic relationships of the dwarf boas and a comparison of Bayesian and bootstrap measures of phylogenetic support.

    PubMed

    Wilcox, Thomas P; Zwickl, Derrick J; Heath, Tracy A; Hillis, David M

    2002-11-01

    Four New World genera of dwarf boas (Exiliboa, Trachyboa, Tropidophis, and Ungaliophis) have been placed by many systematists in a single group (traditionally called Tropidophiidae). However, the monophyly of this group has been questioned in several studies. Moreover, the overall relationships among basal snake lineages, including the placement of the dwarf boas, are poorly understood. We obtained mtDNA sequence data for 12S, 16S, and intervening tRNA-val genes from 23 species of snakes representing most major snake lineages, including all four genera of New World dwarf boas. We then examined the phylogenetic position of these species by estimating the phylogeny of the basal snakes. Our phylogenetic analysis suggests that New World dwarf boas are not monophyletic. Instead, we find Exiliboa and Ungaliophis to be most closely related to sand boas (Erycinae), boas (Boinae), and advanced snakes (Caenophidea), whereas Tropidophis and Trachyboa form an independent clade that separated relatively early in snake radiation. Our estimate of snake phylogeny differs significantly in other ways from some previous estimates of snake phylogeny. For instance, pythons do not cluster with boas and sand boas, but instead show a strong relationship with Loxocemus and Xenopeltis. Additionally, uropeltids cluster strongly with Cylindrophis, and together are embedded in what has previously been considered the macrostomatan radiation. These relationships are supported by both bootstrapping (parametric and nonparametric approaches) and Bayesian analysis, although Bayesian support values are consistently higher than those obtained from nonparametric bootstrapping. Simulations show that Bayesian support values represent much better estimates of phylogenetic accuracy than do nonparametric bootstrap support values, at least under the conditions of our study. Copyright 2002 Elsevier Science (USA)

  3. An appraisal of statistical procedures used in derivation of reference intervals.

    PubMed

    Ichihara, Kiyoshi; Boyd, James C

    2010-11-01

    When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. The Box-Cox power transformation has often been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles after sorting the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
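
    The recommended non-parametric computation (sort the data, take the 2.5th and 97.5th percentiles) can be sketched in a few lines; the log-normal sample below is illustrative, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative right-skewed analyte values (log-normal), n >= 400 as recommended.
values = rng.lognormal(mean=1.0, sigma=0.4, size=400)

def nonparametric_ri(x, lower=2.5, upper=97.5):
    """Nonparametric reference interval: central 95% of the sorted data."""
    return np.percentile(x, [lower, upper])

lo, hi = nonparametric_ri(values)
```

    With 400 subjects, roughly 95% of the sample falls inside the interval; confidence intervals for the limits themselves would require bootstrapping or rank-based formulas.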

  4. Waterproof stretchable optoelectronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, John A.; Kim, Rak-Hwan; Kim, Dae-Hyeong

    Described herein are flexible and stretchable LED arrays and methods utilizing flexible and stretchable LED arrays. Assembly of flexible LED arrays alongside flexible plasmonic crystals is useful for construction of fluid monitors, permitting sensitive detection of fluid refractive index and composition. Co-integration of flexible LED arrays with flexible photodetector arrays is useful for construction of flexible proximity sensors. Application of stretchable LED arrays onto flexible threads as light emitting sutures provides novel means for performing radiation therapy on wounds.

  5. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
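
    As a hedged illustration of the general idea (not the paper's neural-network, Gaussian-process, or SVR models, and using simulated rather than Bloomberg data), a simple Nadaraya-Watson kernel regressor can recover a nonlinear impact-cost curve without assuming a parametric form:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training data: impact cost vs. normalized order size, with a
# stylized square-root cost curve (an assumption, not the paper's dataset).
size = rng.uniform(0.0, 1.0, 500)
cost = 0.5 * np.sqrt(size) + rng.normal(0.0, 0.02, 500)

def nw_predict(x0, x, y, bandwidth=0.05):
    """Nadaraya-Watson estimator: kernel-weighted average of nearby outcomes."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

pred = nw_predict(0.25, size, cost)  # close to the true value 0.25
```

    The paper's models would replace `nw_predict`; the point is that a nonparametric estimator adapts to whatever cost curve the data exhibit.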

  6. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  7. Greenhouse Gas Emissions, Energy Consumption and Economic Growth: A Panel Cointegration Analysis for 16 Asian Countries.

    PubMed

    Lu, Wen-Cheng

    2017-11-22

    This research investigates the co-movement and causality relationships between greenhouse gas emissions, energy consumption and economic growth for 16 Asian countries over the period 1990-2012. The empirical findings suggest that in the long run, bidirectional Granger causality between energy consumption, GDP and greenhouse gas emissions and between GDP, greenhouse gas emissions and energy consumption is established. A non-linear, quadratic relationship is revealed between greenhouse gas emissions, energy consumption and economic growth, consistent with the environmental Kuznets curve for these 16 Asian countries and a subsample of the Asian new industrial economy. Short-run relationships are regionally specific across the Asian continent. From the viewpoint of energy policy in Asia, various governments support low-carbon or renewable energy use and are reducing fossil fuel combustion to sustain economic growth, but in some countries, evidence suggests that energy conservation might only be marginal.
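
    Panel machinery aside, cointegration means two nonstationary series share a stationary linear combination. A minimal numpy sketch of the first Engle-Granger step on a simulated pair of integrated series (an illustration, not the study's panel tests):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.cumsum(rng.normal(size=n))            # I(1) random walk (e.g. log energy use)
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # cointegrated partner (e.g. log GDP)

# Engle-Granger step 1: OLS of y on x estimates the cointegrating coefficient.
coef = np.polyfit(x, y, 1)
beta = coef[0]
resid = y - np.polyval(coef, x)

# Informal step 2: cointegration residuals should be stationary, i.e. show
# far less persistence than the random-walk inputs.
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation, near 0 here
```

    A formal step 2 would apply an ADF test to `resid` (e.g. via statsmodels); panel settings add cross-section pooling and, in this study, structural breaks.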

  8. Greenhouse Gas Emissions, Energy Consumption and Economic Growth: A Panel Cointegration Analysis for 16 Asian Countries

    PubMed Central

    2017-01-01

    This research investigates the co-movement and causality relationships between greenhouse gas emissions, energy consumption and economic growth for 16 Asian countries over the period 1990–2012. The empirical findings suggest that in the long run, bidirectional Granger causality between energy consumption, GDP and greenhouse gas emissions and between GDP, greenhouse gas emissions and energy consumption is established. A non-linear, quadratic relationship is revealed between greenhouse gas emissions, energy consumption and economic growth, consistent with the environmental Kuznets curve for these 16 Asian countries and a subsample of the Asian new industrial economy. Short-run relationships are regionally specific across the Asian continent. From the viewpoint of energy policy in Asia, various governments support low-carbon or renewable energy use and are reducing fossil fuel combustion to sustain economic growth, but in some countries, evidence suggests that energy conservation might only be marginal. PMID:29165399

  9. Effects of export concentration on CO2 emissions in developed countries: an empirical analysis.

    PubMed

    Apergis, Nicholas; Can, Muhlis; Gozgor, Giray; Lau, Chi Keung Marco

    2018-03-08

    This paper provides evidence on the short- and the long-run effects of the export product concentration on the level of CO2 emissions in 19 developed (high-income) economies, spanning the period 1962-2010. To this end, the paper makes use of the nonlinear panel unit root and cointegration tests with multiple endogenous structural breaks. It also considers the mean group estimations, the autoregressive distributed lag model, and the panel quantile regression estimations. The findings illustrate that the environmental Kuznets curve (EKC) hypothesis is valid in the panel dataset of 19 developed economies. In addition, it documents that a higher level of the product concentration of exports leads to lower CO2 emissions. The results from the panel quantile regressions also indicate that the effect of the export product concentration upon the per capita CO2 emissions is relatively high at the higher quantiles.

  10. The relationship between pollutant emissions, renewable energy, nuclear energy and GDP: empirical evidence from 18 developed and developing countries

    NASA Astrophysics Data System (ADS)

    Ben Mbarek, Mounir; Saidi, Kais; Amamri, Mounira

    2018-07-01

    This document investigates the causal relationship between nuclear energy (NE), pollutant emissions (CO2 emissions), gross domestic product (GDP) and renewable energy (RE) using dynamic panel data models for a global panel consisting of 18 countries (developed and developing) covering the 1990-2013 period. Our results indicate that there is a co-integration between variables. The unit root test suggests that all the variables are stationary in first differences. The paper further examines the link using the Granger causality analysis of vector error correction model, which indicates a unidirectional relationship running from GDP per capita to pollutant emissions for the developed and developing countries. However, there is a unidirectional causality from GDP per capita to RE in the short and long run. This finding confirms the conservation hypothesis. Similarly, there is no causality between NE and GDP per capita.

  11. [The effect of tobacco prices on consumption: a time series data analysis for Mexico].

    PubMed

    Olivera-Chávez, Rosa Itandehui; Cermeño-Bazán, Rodolfo; de Miera-Juárez, Belén Sáenz; Jiménez-Ruiz, Jorge Alberto; Reynales-Shigematsu, Luz Myriam

    2010-01-01

    To estimate the price elasticity of the demand for cigarettes in Mexico based on data sources and a methodology different from the ones used in previous studies on the topic. Quarterly time series of consumption, income and price for the time period 1994 to 2005 were used. A long-run demand model was estimated using Ordinary Least Squares (OLS) and the existence of a cointegration relationship was investigated. Also, a model using Dynamic Ordinary Least Squares (DOLS) was estimated to correct for potential endogeneity of independent variables and autocorrelation of the residuals. DOLS estimates showed that a 10% increase in cigarette prices could reduce consumption by 2.5% (p<0.05) and increase government revenue by 16.11%. The results confirmed the effectiveness of taxes as an instrument for tobacco control in Mexico. An increase in taxes can be used to increase cigarette prices and therefore to reduce consumption and increase government revenue.

  12. Causal relationship between CO₂ emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia.

    PubMed

    Farhani, Sahbi; Ozturk, Ilhan

    2015-10-01

    The aim of this paper is to examine the causal relationship between CO2 emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia over the period of 1971-2012. The long-run relationship is investigated by the auto-regressive distributed lag (ARDL) bounds testing approach to cointegration and error correction method (ECM). The results of the analysis reveal a positive sign for the coefficient of financial development, suggesting that the financial development in Tunisia has taken place at the expense of environmental pollution. The Tunisian case also shows a positive monotonic relationship between real GDP and CO2 emissions. This means that the results do not support the validity of environmental Kuznets curve (EKC) hypothesis. In addition, the paper explores causal relationship between the variables by using Granger causality models and it concludes that financial development plays a vital role in the Tunisian economy.

  13. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  14. Using exogenous variables in testing for monotonic trends in hydrologic time series

    USGS Publications Warehouse

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
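
    The two-stage procedure examined above can be sketched as follows, on simulated data with an exogenous variable uncorrelated with time (`scipy.stats.kendalltau` supplies the Kendall test; the adjusted variable Kendall test itself is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
years = np.arange(50.0)
precip = rng.normal(100.0, 10.0, 50)  # exogenous variable (e.g. precipitation)
# Hypothetical streamflow: driven by precipitation plus a monotonic trend in time.
flow = 0.8 * precip + 0.5 * years + rng.normal(0.0, 3.0, 50)

# Stage 1: regress the tested variable on the exogenous variable only.
slope, intercept = np.polyfit(precip, flow, 1)
resid = flow - (slope * precip + intercept)

# Stage 2: Kendall test for monotonic trend in the residuals.
tau, p = stats.kendalltau(years, resid)
```

    When the exogenous variable itself interacts with time, this two-stage test underestimates the trend, which is the abstract's point.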

  15. On an additive partial correlation operator and nonparametric estimation of graphical models.

    PubMed

    Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu

    2016-09-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.

  16. On an additive partial correlation operator and nonparametric estimation of graphical models

    PubMed Central

    Li, Bing; Zhao, Hongyu

    2016-01-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689

  17. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by the parametric hedging models based on the features of sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can affect the multi-scale hedge ratios and hedge efficiency.
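
    A minimal sketch of the minimum-LPM hedge criterion on simulated returns (the wavelet decomposition and the paper's parametric estimators are omitted; the grid search below is a generic stand-in):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
futures = rng.normal(0.0, 0.010, n)               # simulated futures returns
spot = 0.9 * futures + rng.normal(0.0, 0.004, n)  # spot returns co-moving with futures

def lpm(returns, target=0.0, order=2):
    """Lower partial moment: E[max(target - R, 0)^order], penalizing downside only."""
    shortfall = np.maximum(target - returns, 0.0)
    return np.mean(shortfall ** order)

# Choose the hedge ratio h minimizing the LPM of the hedged portfolio spot - h * futures.
grid = np.linspace(0.0, 2.0, 201)
h_star = min(grid, key=lambda h: lpm(spot - h * futures))
```

    With symmetric returns and target 0, the minimum-LPM ratio coincides with the minimum-variance ratio (0.9 here); asymmetric returns or nonzero targets pull the two apart, which is why the target return and risk aversion matter in the abstract.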

  18. Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.

    PubMed

    García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G

    2017-08-01

    The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economics and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
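
    As a rough illustration of nonparametric drift estimation from a densely observed series, the sketch below uses a simple binned (Kramers-Moyal-type) estimator on a simulated Ornstein-Uhlenbeck process; the paper's sparse Gaussian process machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(11)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck SDE: dX = -theta*X dt + sigma dW.
noise = sigma * np.sqrt(dt) * rng.normal(size=n - 1)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] * (1.0 - theta * dt) + noise[i]

# Nonparametric drift estimate: bin the state space, average the increments per bin.
bins = np.linspace(-1.0, 1.0, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
dx = np.diff(x)
idx = np.digitize(x[:-1], bins) - 1
drift = np.array([dx[idx == k].mean() / dt if np.any(idx == k) else np.nan
                  for k in range(len(centers))])
# The recovered drift is approximately linear in x with slope -theta.
```

    A Gaussian process prior replaces this histogram with a smooth function-space posterior, and the sparse approximation keeps inference tractable for long series.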

  19. A SAS macro for testing differences among three or more independent groups using Kruskal-Wallis and Nemenyi tests.

    PubMed

    Liu, Yuewei; Chen, Weihong

    2012-02-01

    As a nonparametric method, the Kruskal-Wallis test is widely used to compare three or more independent groups when an ordinal or interval level of data is available, especially when the assumptions of analysis of variance (ANOVA) are not met. If the Kruskal-Wallis statistic is statistically significant, the Nemenyi test is an alternative method for further pairwise multiple comparisons to locate the source of significance. Unfortunately, most popular statistical packages do not integrate the Nemenyi test, which is not easy to calculate by hand. We described the theory and applications of the Kruskal-Wallis and Nemenyi tests, and presented a flexible SAS macro to implement the two tests. The SAS macro was demonstrated by two examples from our cohort study in occupational epidemiology. It provides a useful tool for SAS users to test the differences among three or more independent groups using a nonparametric method.
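
    Outside SAS, the Kruskal-Wallis part is readily available in SciPy (the Nemenyi post hoc test is not in SciPy itself; third-party packages such as scikit-posthocs provide it). A minimal sketch on simulated groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Three illustrative exposure groups; the third is shifted upward.
g1 = rng.normal(0.0, 1.0, 30)
g2 = rng.normal(0.2, 1.0, 30)
g3 = rng.normal(1.5, 1.0, 30)

# Kruskal-Wallis rank test for a difference among the three groups.
h_stat, p_value = stats.kruskal(g1, g2, g3)
```

    A significant result here says only that some group differs; pairwise comparisons such as Nemenyi's are then needed to locate the source, which is the gap the macro fills.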

  20. Traffic flow forecasting using approximate nearest neighbor nonparametric regression

    DOT National Transportation Integrated Search

    2000-12-01

    The purpose of this research is to enhance nonparametric regression (NPR) for use in real-time systems by first reducing execution time using advanced data structures and imprecise computations and then developing a methodology for applying NPR. Due ...

  1. Radioactivity Registered With a Small Number of Events

    NASA Astrophysics Data System (ADS)

    Zlokazov, Victor; Utyonkov, Vladimir

    2018-02-01

    The synthesis of superheavy elements requires the analysis of low-statistics experimental data that presumably obey an unknown exponential distribution, and a decision on whether they originate from one source or contain admixtures. Here we analyze predictions following from non-parametric methods, employing only such fundamental sample properties as the sample mean, the median and the mode.

  2. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)

  3. Tallying Differences between Demographic Subgroups from Multiple Institutions: The Practical Utility of Nonparametric Analysis

    ERIC Educational Resources Information Center

    Yorke, Mantz

    2017-01-01

    When analysing course-level data by subgroups based upon some demographic characteristics, the numbers in analytical cells are often too small to allow inferences to be drawn that might help in the enhancement of practices. However, relatively simple analyses can provide useful pointers. This article draws upon a study involving a partnership with…

  4. Least Squares Distance Method of Cognitive Validation and Analysis for Binary Items Using Their Item Response Theory Parameters

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    2007-01-01

    The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with their…

  5. Evaluating kriging as a tool to improve moderate resolution maps of forest biomass

    Treesearch

    Elizabeth A. Freeman; Gretchen G. Moisen

    2007-01-01

    The USDA Forest Service, Forest Inventory and Analysis program (FIA) recently produced a nationwide map of forest biomass by modeling biomass collected on forest inventory plots as nonparametric functions of moderate resolution satellite data and other environmental variables using Cubist software. Efforts are underway to develop methods to enhance this initial map. We...

  6. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  7. Nonparametric Transfer Function Models

    PubMed Central

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584

  8. A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.

    PubMed

    Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique

    2018-01-01

    This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highway is a topic of interest in the area of transportation safety due to higher driving speeds and the resultant severity level. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate an unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can well mimic a large family of distributions, including normal distributions, log-gamma distributions, bimodal and trimodal distributions. Empirical estimation results show that such flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When estimated coefficients in empirical models are compared, SNP and NB models are found to have a substantially different coefficient for the dummy variable indicating the lane width. The SNP model with better statistical performance suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.

  9. Robust multivariate nonparametric tests for detection of two-sample location shift in clinical trials

    PubMed Central

    Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo

    2018-01-01

    This article presents and investigates the performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine the performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
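
    A permutation approach of the kind described can be sketched with a median-based statistic on simulated bivariate samples (illustrative only, not the article's scaled test statistics):

```python
import numpy as np

rng = np.random.default_rng(9)
n1 = n2 = 40
a = rng.normal(0.0, 1.0, size=(n1, 2))  # control group (bivariate outcomes)
b = rng.normal(1.0, 1.0, size=(n2, 2))  # intervention group, shifted in each coordinate

def median_shift_stat(x, y):
    """Norm of the difference of coordinatewise medians (a robust location statistic)."""
    return np.linalg.norm(np.median(x, axis=0) - np.median(y, axis=0))

observed = median_shift_stat(a, b)
pooled = np.vstack([a, b])
n_perm = 999
count = 0
for _ in range(n_perm):
    perm = rng.permutation(n1 + n2)
    if median_shift_stat(pooled[perm[:n1]], pooled[perm[n1:]]) >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)  # permutation p-value with add-one correction
```

    Relabeling under the null of no group difference gives an exact reference distribution for the statistic, which is why permutation controls Type I error so tightly.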

  10. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up for a sample size of 1,000 especially. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. 
In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies when the proportional hazards assumption appears appropriate.
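
    The "simpler method" combining baseline exposure prevalence with the Cox hazard ratio is in the spirit of Levin's formula for the population attributable risk; a small sketch with illustrative numbers (not estimates from the E3N cohort):

```python
# Levin's formula for the population attributable risk, given exposure
# prevalence p and a relative risk (here a Cox hazard ratio standing in
# for the risk ratio). Numbers below are illustrative only.
def attributable_risk(p, rr):
    return p * (rr - 1.0) / (1.0 + p * (rr - 1.0))

ar = attributable_risk(0.30, 1.5)  # ~0.13: about 13% of cases attributable to exposure
```

    The survival-function definitions compared in the paper generalize this static quantity to a function of follow-up time.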

  11. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set the thresholds for extreme precipitation in a large basin. Based on the long term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and the detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show the non-parametric absolute critical value method is easy to use, but unable to reflect the difference of spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution feature of precipitation, but the problem with this method is that the threshold value is sensitive to the size of the rainfall data series and is subject to the selection of a percentile, thus making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, selections of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and the parametric methods which are unable to provide information for EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method that is able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. 
The consistency between the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) for the daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
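As a minimal illustration of the percentile method discussed above (and of its sensitivity to the chosen percentile), the sketch below computes wet-day percentile thresholds for one station. All numbers are synthetic, not Pearl River Basin data; the 1 mm wet-day cutoff is an assumption.

```python
import numpy as np

def percentile_ept(daily_precip, pct=95.0, wet_day_mm=1.0):
    """Extreme-precipitation threshold as a percentile of wet-day amounts."""
    wet = daily_precip[daily_precip >= wet_day_mm]  # keep wet days only
    return np.percentile(wet, pct)

rng = np.random.default_rng(0)
# Synthetic 10-year daily record: ~30% wet days, gamma-distributed amounts (mm).
precip = np.where(rng.random(3650) < 0.3, rng.gamma(2.0, 8.0, 3650), 0.0)

for pct in (90, 95, 99):
    print(pct, round(percentile_ept(precip, pct), 1))
```

The spread between the 90th- and 99th-percentile thresholds shows how strongly the EPT depends on the percentile choice that the abstract criticizes.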

  12. A note on the correlation between circular and linear variables with an application to wind direction and air temperature data in a Mediterranean climate

    NASA Astrophysics Data System (ADS)

    Lototzis, M.; Papadopoulos, G. K.; Droulia, F.; Tseliou, A.; Tsiros, I. X.

    2018-04-01

There are several cases where a circular variable is associated with a linear one. A typical example is wind direction, which is often associated with linear quantities such as air temperature and air humidity. A statistical relationship of this kind can be tested with parametric and non-parametric methods, each of which has its own advantages and drawbacks. This work deals with correlation analysis using both the parametric and the non-parametric procedure on a small set of meteorological data of air temperature and wind direction during a summer period in a Mediterranean climate. Correlations were examined between hourly, daily and maximum-prevailing values, under typical and non-typical meteorological conditions. Both tests indicated a strong correlation between mean hourly wind direction and mean hourly air temperature, whereas mean daily wind direction and mean daily air temperature did not appear to be correlated. In some cases, however, the two procedures gave quite dissimilar levels of significance regarding rejection of the null hypothesis of no correlation. The simple statistical analysis presented in this study, appropriately extended to large sets of meteorological data, may be a useful tool for estimating the effects of wind in local climate studies.
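One standard parametric statistic for this setting is Mardia's circular-linear correlation, which correlates the linear variable with the sine and cosine of the angle. The sketch below uses synthetic wind-direction and temperature data (the cosine dependence and noise levels are invented, not the study's measurements):

```python
import numpy as np

def circular_linear_corr(theta, x):
    """Mardia's circular-linear correlation between angles theta (radians)
    and a linear variable x; returns R in [0, 1]."""
    c, s = np.cos(theta), np.sin(theta)
    rxc = np.corrcoef(x, c)[0, 1]       # x vs cos(theta)
    rxs = np.corrcoef(x, s)[0, 1]       # x vs sin(theta)
    rcs = np.corrcoef(c, s)[0, 1]       # cos(theta) vs sin(theta)
    r2 = (rxc**2 + rxs**2 - 2.0 * rxc * rxs * rcs) / (1.0 - rcs**2)
    return np.sqrt(max(r2, 0.0))

rng = np.random.default_rng(1)
wind_dir = rng.uniform(0.0, 2.0 * np.pi, 500)                    # hourly directions
temp = 25.0 + 3.0 * np.cos(wind_dir) + rng.normal(0, 0.5, 500)   # direction-driven
noise = rng.normal(0.0, 1.0, 500)                                # unrelated variable

print(round(circular_linear_corr(wind_dir, temp), 2))   # strong association
print(round(circular_linear_corr(wind_dir, noise), 2))  # weak association
```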

  13. A permutation-based non-parametric analysis of CRISPR screen data.

    PubMed

    Jia, Gaoxiang; Wang, Xinlei; Xiao, Guanghua

    2017-07-19

Clustered regularly-interspaced short palindromic repeats (CRISPR) screens are usually implemented in cultured cells to identify genes with critical functions. Although several methods have been developed or adapted to analyze CRISPR screening data, no single algorithm has gained broad popularity, so rigorous procedures are needed to overcome the shortcomings of existing algorithms. We developed a Permutation-Based Non-Parametric Analysis (PBNPA) algorithm, which computes p-values at the gene level by permuting sgRNA labels, thus avoiding restrictive distributional assumptions. Although PBNPA is designed to analyze CRISPR data, it can also be applied to genetic screens implemented with siRNAs or shRNAs and to drug screens. We compared the performance of PBNPA with competing methods on simulated as well as real data. PBNPA outperformed recent methods designed for CRISPR screen analysis, as well as methods used for analyzing other functional genomics screens, in terms of Receiver Operating Characteristic (ROC) curves and False Discovery Rate (FDR) control for simulated data under various settings. Remarkably, the PBNPA algorithm showed better consistency and FDR control on published real data as well. PBNPA yields more consistent and reliable results than its competitors, especially when the data quality is low. The PBNPA R package is available at https://cran.r-project.org/web/packages/PBNPA/.
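The core idea of a gene-level permutation p-value can be sketched in a few lines. This is a hedged illustration of the general technique, not the PBNPA implementation itself; the gene score (mean sgRNA log-fold change), sgRNA counts, and effect sizes are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_gene_pvalue(scores, gene_idx, n_perm=2000):
    """Gene-level p-value by permuting sgRNA labels: the observed mean score
    of a gene's sgRNAs is compared against means of randomly drawn sgRNA
    sets of the same size, with no distributional assumptions."""
    observed = scores[gene_idx].mean()
    k = len(gene_idx)
    null = np.array([rng.choice(scores, size=k, replace=False).mean()
                     for _ in range(n_perm)])
    # two-sided p-value with the usual +1 small-sample correction
    return (1 + np.sum(np.abs(null) >= abs(observed))) / (n_perm + 1)

# Synthetic sgRNA log-fold changes: background noise plus one depleted gene.
scores = rng.normal(0.0, 1.0, 1000)
hit = np.arange(5)            # 5 sgRNAs targeting a hypothetical hit gene
scores[hit] -= 3.0            # strong depletion
print(permutation_gene_pvalue(scores, hit))
```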

  14. New analysis methods to push the boundaries of diagnostic techniques in the environmental sciences

    NASA Astrophysics Data System (ADS)

    Lungaroni, M.; Murari, A.; Peluso, E.; Gelfusa, M.; Malizia, A.; Vega, J.; Talebzadeh, S.; Gaudio, P.

    2016-04-01

In recent years, new and more sophisticated measurements have underpinned major progress in various disciplines related to the environment, such as remote sensing and thermonuclear fusion. To maximize the effectiveness of these measurements, new data analysis techniques are required. Initial data-processing tasks, such as filtering and fitting, are of primary importance, since they can strongly influence the rest of the analysis. Although Support Vector Regression (SVR) was devised and refined at the end of the 1990s, a systematic comparison with more traditional non-parametric regression methods has never been reported. In this paper, a series of systematic tests is described, indicating that SVR is a very competitive method of non-parametric regression that can usefully complement and often outperform more established approaches. The performance of SVR as a filtering method is investigated first, comparing it with the most popular alternative techniques. SVR is then applied to the problem of non-parametric regression in the analysis of Lidar surveys for the environmental measurement of particulate matter due to wildfires. The proposed approach has given very positive results and provides new perspectives on the interpretation of the data.
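A classic example of the "traditional non-parametric regression" baseline that SVR is compared against is the Nadaraya-Watson kernel smoother. The sketch below filters a noisy synthetic signal with it; the signal, bandwidth, and noise level are invented for illustration, not taken from the paper:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.3):
    """Nadaraya-Watson kernel regression: a Gaussian-weighted local average,
    one of the traditional non-parametric filters SVR is compared against."""
    d = x_eval[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(x) + rng.normal(0.0, 0.3, 200)         # noisy signal to filter
grid = np.linspace(0.5, 2.0 * np.pi - 0.5, 50)    # avoid edge bias
smooth = nadaraya_watson(x, y, grid)
print(np.max(np.abs(smooth - np.sin(grid))))       # residual after smoothing
```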

  15. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have shown this to be one of the most effective approaches for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare the two algorithms. The results demonstrate that the PCA-based method outperforms the ANN-based one in both performance and simplicity. © The Author(s) 2016.
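The PCA-based idea can be sketched with numpy alone: build a basis from a learning matrix of baselines, then subtract each spectrum's projection onto that basis. This is a simplified sketch under invented assumptions (polynomial baselines, a single Gaussian peak), not the paper's algorithm or data:

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels = 300
x = np.linspace(0.0, 1.0, n_channels)

# Learning matrix of continuous baselines (random low-order polynomials).
B = np.array([c0 + c1 * x + c2 * x**2
              for c0, c1, c2 in rng.uniform(-1.0, 1.0, (100, 3))])

# PCA of the baseline library: leading right-singular vectors of the
# centered matrix span the baseline subspace.
Bc = B - B.mean(axis=0)
_, _, Vt = np.linalg.svd(Bc, full_matrices=False)
basis = Vt[:3]                        # three sampled basis vectors

def remove_baseline(spectrum):
    """Subtract the spectrum's projection onto the baseline subspace."""
    centered = spectrum - B.mean(axis=0)
    baseline = B.mean(axis=0) + basis.T @ (basis @ centered)
    return spectrum - baseline

# Composed spectrum: a narrow Gaussian peak riding on a sloping baseline.
peak = np.exp(-0.5 * ((x - 0.5) / 0.01) ** 2)
measured = peak + (0.2 + 0.8 * x)
corrected = remove_baseline(measured)
```

Because the peak is narrow, its own projection onto the smooth baseline subspace is small, so `corrected` recovers the peak with the slope removed.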

  16. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    NASA Astrophysics Data System (ADS)

    Kim, T.; Kim, Y. S.

    2017-12-01

Frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Such analysis usually assumes that the observations are statistically stationary, and a parametric method based on the parameters of a probability distribution is applied. A parametric method requires a sufficiently large sample of reliable data; in Korea, however, snowfall records need to be supplemented, because the number of snowfall observation days and the mean maximum daily snowfall depth are decreasing due to climate change. In this study, we conducted frequency analysis of snowfall using the bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly over Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth decreased at most stations, and that the rates of change at most stations were consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods make frequency analysis of snowfall depth feasible with an insufficient number of observed samples, and the approach can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
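A minimal sketch of the bootstrap idea for a short record follows. The 30-year gamma-distributed "snowfall" record and the 10-year return level are invented for illustration; the study's SIR algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def bootstrap_quantile(sample, q, n_boot=5000):
    """Non-parametric frequency analysis via the bootstrap: resample the
    observed record with replacement and collect the empirical quantile."""
    boots = np.array([np.quantile(rng.choice(sample, size=sample.size,
                                             replace=True), q)
                      for _ in range(n_boot)])
    return boots.mean(), np.percentile(boots, [2.5, 97.5])

# Hypothetical 30-year record of annual maximum daily snowfall depth (cm).
record = rng.gamma(shape=3.0, scale=8.0, size=30)
est, (lo, hi) = bootstrap_quantile(record, 0.9)   # roughly a 10-year event
print(round(est, 1), round(lo, 1), round(hi, 1))
```

The bootstrap spread (`lo`, `hi`) quantifies how uncertain a quantile estimate is when only 30 observations are available.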

  17. A Nonparametric Statistical Approach to the Validation of Computer Simulation Models

    DTIC Science & Technology

    1985-11-01

Ballistic Research Laboratory, the Experimental Design and Analysis Branch of the Systems Engineering and Concepts Analysis Division was funded to... Winter, E. M., Wisemiller, D. P., and Ujihara, J. K., "Verification and Validation of Engineering Simulations with Minimal Data," Proceedings of the 1976 Summer... used by numerous authors. Law has augmented their approach with specific suggestions for each of the three stages: 1. develop high face-validity

  18. Serum and Plasma Metabolomic Biomarkers for Lung Cancer.

    PubMed

    Kumar, Nishith; Shahjaman, Md; Mollah, Md Nurul Haque; Islam, S M Shahinul; Hoque, Md Aminul

    2017-01-01

Metabolomic biomarker detection is important for drug discovery and the early prediction of lung cancer, since mortality can be decreased if the cancer is detected at an early stage. Current diagnostic techniques for lung cancer are not prognostic. However, if we know which metabolites' intensity levels change considerably between cancer and control subjects, then early diagnosis of the disease, as well as drug discovery, becomes easier. Therefore, in this paper we identified the influential plasma and serum blood-sample metabolites for lung cancer, along with the biomarkers that will be helpful for early disease prediction and drug discovery. To identify the influential metabolites, we considered a parametric and a nonparametric test, namely Student's t-test and the Kruskal-Wallis test, respectively. We also categorized the up-regulated and down-regulated metabolites with a heatmap plot and identified the biomarkers by a support vector machine (SVM) classifier and pathway analysis. From our analysis, we obtained 27 influential (p-value<0.05) metabolites from the plasma sample and 13 influential (p-value<0.05) metabolites from the serum sample. According to the importance plot from the SVM classifier, pathway analysis and correlation network analysis, we declared 4 metabolites (taurine, aspartic acid, glutamine and pyruvic acid) as plasma biomarkers and 3 metabolites (aspartic acid, taurine and inosine) as serum biomarkers.
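The parametric/non-parametric pairing used above is readily available in SciPy. The sketch below runs both tests on synthetic "metabolite intensity" samples; the lognormal distributions, group sizes, and effect size are invented, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Hypothetical intensities of one metabolite in control vs cancer plasma.
control = rng.lognormal(mean=1.0, sigma=0.4, size=40)
cancer = rng.lognormal(mean=1.5, sigma=0.4, size=40)

t_stat, p_t = stats.ttest_ind(control, cancer)   # parametric: Student's t-test
h_stat, p_kw = stats.kruskal(control, cancer)    # non-parametric: Kruskal-Wallis
print(p_t, p_kw)
```

When the two tests agree (both p-values below 0.05 here), the shift is unlikely to be an artifact of the distributional assumptions behind the t-test.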

  19. Thirty Years of Nonparametric Item Response Theory.

    ERIC Educational Resources Information Center

    Molenaar, Ivo W.

    2001-01-01

    Discusses relationships between a mathematical measurement model and its real-world applications. Makes a distinction between large-scale data matrices commonly found in educational measurement and smaller matrices found in attitude and personality measurement. Also evaluates nonparametric methods for estimating item response functions and…

  20. Conditional Covariance-Based Nonparametric Multidimensionality Assessment.

    ERIC Educational Resources Information Center

    Stout, William; And Others

    1996-01-01

    Three nonparametric procedures that use estimates of covariances of item-pair responses conditioned on examinee trait level for assessing dimensionality of a test are described. The HCA/CCPROX, DIMTEST, and DETECT are applied to a dimensionality study of the Law School Admission Test. (SLD)

  1. Carbon dioxide emission and economic growth of China-the role of international trade.

    PubMed

    Boamah, Kofi Baah; Du, Jianguo; Bediako, Isaac Asare; Boamah, Angela Jacinta; Abdul-Rasheed, Alhassan Alolo; Owusu, Samuel Mensah

    2017-05-01

This study investigates the role of international trade in mitigating carbon dioxide emissions as a nation advances economically. International trade is disaggregated into total exports and total imports. A multivariate model framework was estimated on time series data for the period 1970-2014. Quantile regression detected essential relationships that traditional ordinary least squares could not capture. A cointegration relationship was confirmed using the Johansen cointegration model. The Granger causality findings revealed uni-directional Granger causality running from energy consumption to economic growth; from imports to economic growth; from imports to exports; and from urbanisation to economic growth, exports and imports. Our study established long-run relationships among carbon dioxide emissions, economic growth, energy consumption, imports, exports and urbanisation. A bootstrap method was further utilised to reassess the evidence of Granger causality, and the results affirmed Granger causality in the long run. This study confirmed a long-run N-shaped relationship between economic growth and carbon emissions under the estimated cubic environmental Kuznets curve framework, from the perspective of China. The recommendation, therefore, is that China, as an export leader, should transform its trade growth mode by reducing carbon dioxide emissions and strengthening international cooperation as it embraces more environmental protections.
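The study uses the Johansen test, but the underlying idea of cointegration can be sketched with the simpler Engle-Granger two-step logic: regress one series on the other and check that the residual is stationary even though the series themselves wander. Everything below is simulated (the series, the coefficient 2, and the informal AR(1) stationarity check stand in for a proper unit-root test):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
# Simulated cointegrated pair: x is a random walk (e.g. log GDP),
# y shares x's stochastic trend plus stationary noise (e.g. log emissions).
x = np.cumsum(rng.normal(0.0, 1.0, n))
y = 2.0 * x + rng.normal(0.0, 1.0, n)

# Engle-Granger step 1: OLS of y on x gives the long-run coefficient.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Step 2 (informal check): a stationary residual mean-reverts, so its lag-1
# autocorrelation sits well below 1, unlike the random-walk input.
def ar1(z):
    return np.corrcoef(z[:-1], z[1:])[0, 1]

print(round(beta[1], 2))      # close to the true long-run coefficient 2
print(round(ar1(resid), 2))   # near 0: residual is stationary
print(round(ar1(x), 3))       # near 1: x itself is non-stationary
```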

  2. An improved ternary vector system for Agrobacterium-mediated rapid maize transformation.

    PubMed

    Anand, Ajith; Bass, Steven H; Wu, Emily; Wang, Ning; McBride, Kevin E; Annaluru, Narayana; Miller, Michael; Hua, Mo; Jones, Todd J

    2018-05-01

    A simple and versatile ternary vector system that utilizes improved accessory plasmids for rapid maize transformation is described. This system facilitates high-throughput vector construction and plant transformation. The super binary plasmid pSB1 is a mainstay of maize transformation. However, the large size of the base vector makes it challenging to clone, the process of co-integration is cumbersome and inefficient, and some Agrobacterium strains are known to give rise to spontaneous mutants resistant to tetracycline. These limitations present substantial barriers to high throughput vector construction. Here we describe a smaller, simpler and versatile ternary vector system for maize transformation that utilizes improved accessory plasmids requiring no co-integration step. In addition, the newly described accessory plasmids have restored virulence genes found to be defective in pSB1, as well as added virulence genes. Testing of different configurations of the accessory plasmids in combination with T-DNA binary vector as ternary vectors nearly doubles both the raw transformation frequency and the number of transformation events of usable quality in difficult-to-transform maize inbreds. The newly described ternary vectors enabled the development of a rapid maize transformation method for elite inbreds. This vector system facilitated screening different origins of replication on the accessory plasmid and T-DNA vector, and four combinations were identified that have high (86-103%) raw transformation frequency in an elite maize inbred.

  3. Decomposing the trade-environment nexus for Malaysia: what do the technique, scale, composition, and comparative advantage effect indicate?

    PubMed

    Ling, Chong Hui; Ahmed, Khalid; Binti Muhamad, Rusnah; Shahbaz, Muhammad

    2015-12-01

This paper investigates the impact of trade openness on CO2 emissions using quarterly time series data over the period 1970QI-2011QIV for Malaysia. We decompose the trade effect into scale, technique, composition, and comparative advantage effects to check the environmental consequences of trade at four different transition points. To this end, we employ augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests to examine the stationarity properties of the variables. The long-run association among the variables is then examined by applying the autoregressive distributed lag (ARDL) bounds testing approach to cointegration. Our results confirm the presence of cointegration. Further, we find that the scale effect has a positive, and the technique effect a negative, impact on CO2 emissions after a threshold income level, forming an inverted U-shaped relationship and hence validating the environmental Kuznets curve hypothesis. Energy consumption adds to CO2 emissions. Trade openness and the composition effect improve environmental quality by lowering CO2 emissions. The comparative advantage effect increases CO2 emissions and impairs environmental quality. The results provide an innovative way to see the impact of trade openness in four sub-dimensions of trade liberalization. Hence, this study provides a more comprehensive policy tool for trade economists to better design environmentally sustainable trade rules and agreements.

  4. A New Hybrid Model FPA-SVM Considering Cointegration for Particular Matter Concentration Forecasting: A Case Study of Kunming and Yuxi, China.

    PubMed

    Li, Weide; Kong, Demeng; Wu, Jinran

    2017-01-01

Air pollution in China is becoming more serious, especially particulate matter (PM), because of rapid economic growth and fast urbanization. To address these growing environmental problems, daily PM2.5 and PM10 concentration data from January 1, 2015, to August 23, 2016, in Kunming and Yuxi (two important cities in Yunnan Province, China) are used to develop a new hybrid model, CI-FPA-SVM, for forecasting PM2.5 and PM10 concentrations. The proposed model involves two parts. First, because the support vector machine (SVM) by itself cannot assess the possible correlations between different variables, cointegration theory is introduced to obtain the input-output relationship; the resulting nonlinear dynamical system is then modelled with an SVM whose parameters c and g are optimized by the flower pollination algorithm (FPA). Six benchmark models, including FPA-SVM, CI-SVM, CI-GA-SVM, CI-PSO-SVM, CI-FPA-NN, and a multiple linear regression model, are considered to verify the superiority of the proposed hybrid model. The empirical results demonstrate that CI-FPA-SVM is remarkably superior to all the benchmark models in prediction accuracy, and its application to forecasting can support effective monitoring and management of air quality.

  5. A New Hybrid Model FPA-SVM Considering Cointegration for Particular Matter Concentration Forecasting: A Case Study of Kunming and Yuxi, China

    PubMed Central

    Wu, Jinran

    2017-01-01

Air pollution in China is becoming more serious, especially particulate matter (PM), because of rapid economic growth and fast urbanization. To address these growing environmental problems, daily PM2.5 and PM10 concentration data from January 1, 2015, to August 23, 2016, in Kunming and Yuxi (two important cities in Yunnan Province, China) are used to develop a new hybrid model, CI-FPA-SVM, for forecasting PM2.5 and PM10 concentrations. The proposed model involves two parts. First, because the support vector machine (SVM) by itself cannot assess the possible correlations between different variables, cointegration theory is introduced to obtain the input-output relationship; the resulting nonlinear dynamical system is then modelled with an SVM whose parameters c and g are optimized by the flower pollination algorithm (FPA). Six benchmark models, including FPA-SVM, CI-SVM, CI-GA-SVM, CI-PSO-SVM, CI-FPA-NN, and a multiple linear regression model, are considered to verify the superiority of the proposed hybrid model. The empirical results demonstrate that CI-FPA-SVM is remarkably superior to all the benchmark models in prediction accuracy, and its application to forecasting can support effective monitoring and management of air quality. PMID:28932237

  6. An integrated specification for the nexus of water pollution and economic growth in China: Panel cointegration, long-run causality and environmental Kuznets curve.

    PubMed

    Zhang, Chen; Wang, Yuan; Song, Xiaowei; Kubota, Jumpei; He, Yanmin; Tojo, Junji; Zhu, Xiaodong

    2017-12-31

This paper concentrates on the Chinese context and develops an integrated process to explicitly elucidate the relationship between economic growth and water pollutant discharge, namely chemical oxygen demand (COD) and ammonia nitrogen (NH3-N), using two unbalanced panel data sets covering the periods 1990-2014 and 2001-2014, respectively. In our study, panel unit root tests, cointegration tests, and Granger causality tests allowing for cross-sectional dependence, nonstationarity, and heterogeneity are conducted to examine the causal effects of economic growth on COD/NH3-N discharge. Further, we apply both semi-parametric and parametric fixed effects estimation to investigate the environmental Kuznets curve relationship for COD/NH3-N discharge. Our empirical results show a long-term bidirectional causality between economic growth and COD/NH3-N discharge in China. Within the Stochastic Impacts by Regression on Population, Affluence and Technology framework, we find evidence in support of an inverted U-shaped link between economic growth and COD/NH3-N discharge. To the best of our knowledge, the nexus of economic growth and water pollution has not previously been investigated in such an integrated manner, so this study takes a fresh look at the topic. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Characterization of a TOL-like plasmid from Alcaligenes eutrophus that controls expression of a chromosomally encoded p-cresol pathway.

    PubMed Central

    Hughes, E J; Bayly, R C; Skurray, R A

    1984-01-01

Alcaligenes eutrophus wild-type strain 345 metabolizes m- and p-toluate via a catechol meta-cleavage pathway. DNA analysis, curing studies, and transfer of this phenotype by conjugation and transformation showed that the degradative genes are encoded on a self-transmissible 85-kilobase plasmid, pRA1000. HindIII and XhoI restriction endonuclease analysis of pRA1000 showed it to be similar to the archetypal TOL plasmid, pWWO, differing in the case of HindIII only by the absence of fragments B and D present in pWWO. In strain 345, the presence of pRA1000 prevented the expression of chromosomally encoded enzymes required for the degradation of p-cresol, whereas these enzymes were expressed in strains cured of pRA1000. On the basis of studies with an R68.45-pRA1000 cointegrate plasmid, pRA1001, we conclude that the gene(s) responsible for the effect of p-cresol degradation resides within or near the m- and p-toluate degradative region on pRA1000. PMID:6325399

  8. Nonparametric Regression and the Parametric Bootstrap for Local Dependence Assessment.

    ERIC Educational Resources Information Center

    Habing, Brian

    2001-01-01

    Discusses ideas underlying nonparametric regression and the parametric bootstrap with an overview of their application to item response theory and the assessment of local dependence. Illustrates the use of the method in assessing local dependence that varies with examinee trait levels. (SLD)

  9. Non-parametric wall model and methods of identifying boundary conditions for moments in gas flow equations

    NASA Astrophysics Data System (ADS)

    Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent

    2018-03-01

In this paper, we use molecular dynamics simulation to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we adopt non-parametric statistical methods to construct the kernel and overcome this limitation. Unlike parametric kernels, non-parametric kernels require no parameters (i.e. accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations, from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.

  10. Nonparametric Simulation of Signal Transduction Networks with Semi-Synchronized Update

    PubMed Central

    Nassiri, Isar; Masoudi-Nejad, Ali; Jalili, Mahdi; Moeini, Ali

    2012-01-01

Simulating signal transduction in cellular signaling networks provides predictions of network dynamics by quantifying the changes in concentration and activity level of the individual proteins. Since numerical values of kinetic parameters might be difficult to obtain, it is imperative to develop non-parametric approaches that combine the connectivity of a network with the response of individual proteins to signals which travel through the network. The activity levels of signaling proteins computed through existing non-parametric modeling tools do not show significant correlations with the values observed in experiments. In this work we developed a non-parametric computational framework to describe the profile of the evolving process and the time course of the proportion of molecules in active form in signal transduction networks. The model is also capable of incorporating perturbations. It was validated on four signaling networks, showing that it can effectively uncover the activity levels and trends of response during the signal transduction process. PMID:22737250

  11. Modeling Predictors of Duties Not Including Flying Status.

    PubMed

    Tvaryanas, Anthony P; Griffith, Converse

    2018-01-01

    The purpose of this study was to reuse available datasets to conduct an analysis of potential predictors of U.S. Air Force aircrew nonavailability in terms of being in "duties not to include flying" (DNIF) status. This study was a retrospective cohort analysis of U.S. Air Force aircrew on active duty during the period from 2003-2012. Predictor variables included age, Air Force Specialty Code (AFSC), clinic location, diagnosis, gender, pay grade, and service component. The response variable was DNIF duration. Nonparametric methods were used for the exploratory analysis and parametric methods were used for model building and statistical inference. Out of a set of 783 potential predictor variables, 339 variables were identified from the nonparametric exploratory analysis for inclusion in the parametric analysis. Of these, 54 variables had significant associations with DNIF duration in the final model fitted to the validation data set. The predicted results of this model for DNIF duration had a correlation of 0.45 with the actual number of DNIF days. Predictor variables included age, 6 AFSCs, 7 clinic locations, and 40 primary diagnosis categories. Specific demographic (i.e., age), occupational (i.e., AFSC), and health (i.e., clinic location and primary diagnosis category) DNIF drivers were identified. Subsequent research should focus on the application of primary, secondary, and tertiary prevention measures to ameliorate the potential impact of these DNIF drivers where possible.Tvaryanas AP, Griffith C Jr. Modeling predictors of duties not including flying status. Aerosp Med Hum Perform. 2018; 89(1):52-57.

  12. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  13. Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA

    PubMed Central

    Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui

    2014-01-01

    Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative that avoids restrictive parametric assumptions. Variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792
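Quantile regression of any flavor rests on the check ("pinball") loss, whose minimizer over constants is the corresponding sample quantile. The sketch below verifies that fact numerically; it illustrates the loss only, not the SNQR procedure, and the data and grid are invented:

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Check (pinball) loss minimized in quantile regression."""
    u = y - q_hat
    return np.mean(np.where(u >= 0, tau * u, (tau - 1.0) * u))

rng = np.random.default_rng(8)
y = rng.normal(0.0, 1.0, 10000)

# The constant minimizing the pinball loss is the tau-th sample quantile.
tau = 0.9
grid = np.linspace(-3.0, 3.0, 1201)
best = grid[np.argmin([pinball_loss(y, c, tau) for c in grid])]
print(round(best, 2), round(np.quantile(y, tau), 2))  # the two agree
```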

  14. Non-parametric analysis of LANDSAT maps using neural nets and parallel computers

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda; Tilton, James

    1991-01-01

    Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by LANDSAT satellite. The performances are evaluated by comparing classifications of a scene in the vicinity of Washington DC. The problem of optimal selection of categories is addressed as a step in the classification process.

  15. To Math or Not to Math: The Algebra-Calculus Pipeline and Postsecondary Mathematics Remediation

    ERIC Educational Resources Information Center

    Showalter, Daniel A.

    2017-01-01

    This article reports on a study designed to estimate the effect of high school coursetaking in the algebra-calculus pipeline on the likelihood of placing out of postsecondary remedial mathematics. A nonparametric variant of propensity score analysis was used on a nationally representative data set to remove selection bias and test for an effect…

  16. FORTRAN implementation of Friedman's test for several related samples

    NASA Technical Reports Server (NTRS)

    Davidson, S. A.

    1982-01-01

The FRIEDMAN program is a FORTRAN-coded implementation of Friedman's nonparametric test for several related samples with one observation per treatment/block combination, or, as it is sometimes called, the two-way analysis of variance by ranks. The FRIEDMAN program is described, and a test data set and its results are presented to aid potential users of the program.
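The same test is available today in SciPy. A minimal Python analogue of the FRIEDMAN program's input, using synthetic treatments measured on the same blocks (all values invented), might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Three treatments measured on the same eight blocks, one observation per
# treatment/block combination, as Friedman's test requires.
block_effect = rng.normal(0.0, 2.0, 8)
t1 = block_effect + rng.normal(0.0, 0.5, 8)
t2 = block_effect + rng.normal(1.0, 0.5, 8)   # shifted treatment
t3 = block_effect + rng.normal(2.0, 0.5, 8)   # shifted further

chi2, p = stats.friedmanchisquare(t1, t2, t3)
print(round(chi2, 2), round(p, 4))
```

Ranking within blocks removes the large block-to-block variation, which is exactly why the test detects the treatment shifts here.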

  17. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. The approach is illustrated with a thunderstorm data analysis.
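Density-based classification of the kind described can be sketched with SciPy's Gaussian kernel density estimator: estimate one density per class, then assign each point to the class with the higher estimated density. The two-band feature space and class parameters below are invented, not the paper's remote-sensing data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Two spectral classes in a 2-D feature space (e.g. two hypothetical bands);
# gaussian_kde expects data with shape (dimensions, samples).
class_a = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], 500).T
class_b = rng.multivariate_normal([3.0, 3.0], [[1.0, -0.3], [-0.3, 1.0]], 500).T

kde_a = stats.gaussian_kde(class_a)   # nonparametric density estimate, class a
kde_b = stats.gaussian_kde(class_b)   # nonparametric density estimate, class b

def classify(points):
    """Assign each point (one per column) to the class of higher density."""
    return np.where(kde_a(points) > kde_b(points), "a", "b")

labels = classify(np.array([[0.1, 2.9],
                            [0.2, 3.1]]))   # points (0.1, 0.2) and (2.9, 3.1)
print(labels)
```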

  18. Efficiently Assessing Negative Cognition in Depression: An Item Response Theory Analysis of the Dysfunctional Attitude Scale

    ERIC Educational Resources Information Center

    Beevers, Christopher G.; Strong, David R.; Meyer, Bjorn; Pilkonis, Paul A.; Miller, Ivan R.

    2007-01-01

    Despite a central role for dysfunctional attitudes in cognitive theories of depression and the widespread use of the Dysfunctional Attitude Scale, form A (DAS-A; A. Weissman, 1979), the psychometric development of the DAS-A has been relatively limited. The authors used nonparametric item response theory methods to examine the DAS-A items and…

  19. Constructing a cosmological model-independent Hubble diagram of type Ia supernovae with cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.

    2016-02-01

    We apply two methods, i.e., the Gaussian processes and the nonparametric smoothing procedure, to reconstruct the Hubble parameter H (z ) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a certain redshift z , calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernova (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H (z ), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for the Gaussian processes, the reconstructed functions of Hubble parameter versus redshift, and thus the following analysis on SNe Ia calibrations and cosmological implications, are sensitive to prior mean functions. However, for the nonparametric smoothing method, the reconstructed functions are not dependent on initial guess models, and consistently require high values of H0, which are in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.

  20. Separating the air quality impact of a major highway and nearby sources by nonparametric trajectory analysis.

    PubMed

    Henry, Ronald C; Vette, Alan; Norris, Gary; Vedantham, Ram; Kimbrough, Sue; Shores, Richard C

    2011-12-15

    Nonparametric Trajectory Analysis (NTA), a receptor-oriented model, was used to assess the impact of local sources of air pollution at monitoring sites located adjacent to highway I-15 in Las Vegas, NV. Measurements of black carbon, carbon monoxide, nitrogen oxides, and sulfur dioxide concentrations were collected from December 2008 to December 2009. The purpose of the study was to determine the impact of the highway at three downwind monitoring stations, using an upwind station to measure background concentrations. NTA was used to precisely determine the contribution of the highway to the average concentrations measured at the monitoring stations, accounting for the spatially heterogeneous contributions of other local urban sources. NTA uses short-time-average concentrations (5 min in this case) and local back-trajectories constructed from similarly short-time-average wind speed and direction to locate and quantify contributions from local source regions. Averaged over an entire year, the decrease of concentrations with distance from the highway was found to be consistent with previous studies. For this study, the NTA model is shown to be a reliable approach to quantify the impact of the highway on local air quality in an urban area with other local sources.

  1. Regionalizing nonparametric models of precipitation amounts on different temporal scales

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András

    2017-05-01

    Parametric distribution functions are commonly used to model precipitation amounts corresponding to different durations. The precipitation amounts themselves are crucial for stochastic rainfall generators and weather generators. Nonparametric kernel density estimates (KDEs) offer a more flexible way to model precipitation amounts. As their name indicates, however, these models have no parameters that can easily be regionalized to run rainfall generators at ungauged as well as gauged locations. To overcome this deficiency, we present a new interpolation scheme for nonparametric models and evaluate it for different temporal resolutions ranging from hourly to monthly. During the evaluation, the nonparametric methods are compared to commonly used parametric models like the two-parameter gamma and the mixed-exponential distribution. As water volume is considered to be an essential parameter for applications like flood modeling, a Lorenz-curve-based criterion is also introduced. To add value to the estimation of data at sub-daily resolutions, we incorporated the plentiful daily measurements in the interpolation scheme, and this idea was evaluated. The study region is the federal state of Baden-Württemberg in the southwest of Germany, with more than 500 rain gauges. The validation results show that the newly proposed nonparametric interpolation scheme provides reasonable results and that the incorporation of daily values in the regionalization of sub-daily models is very beneficial.
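    A minimal sketch of the kind of kernel density estimate the record contrasts with parametric models, fitted to hypothetical wet-day precipitation amounts drawn from a gamma distribution (the data, shape, and scale are invented for illustration, not taken from the study):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Hypothetical daily precipitation amounts (mm) on wet days.
    amounts = rng.gamma(shape=0.8, scale=6.0, size=500)

    # KDE with the default bandwidth (Scott's rule); no distributional
    # parameters are assumed, which is the flexibility the paper exploits.
    kde = gaussian_kde(amounts)
    grid = np.linspace(0.0, amounts.max(), 400)
    density = kde(grid)

    # Riemann-sum check: the density should integrate to roughly 1 over
    # the positive support (some mass leaks below zero for skewed data).
    area = density.sum() * (grid[1] - grid[0])
    ```

    The leakage below zero for heavily skewed amounts is one reason precipitation KDEs often use boundary corrections or log transforms in practice.
    
    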

  2. A simple randomisation procedure for validating discriminant analysis: a methodological note.

    PubMed

    Wastell, D G

    1987-04-01

    Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
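    The randomisation idea in this note, reduced to its simplest form, can be sketched as follows: shuffle the group labels many times and recompute a discriminant statistic to build a null distribution. The data and the mean-difference statistic below are illustrative stand-ins, not the paper's stepwise-DA procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Two hypothetical groups with a genuine mean shift.
    x = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.5, 1.0, 30)])
    labels = np.array([0] * 30 + [1] * 30)

    def separation(x, labels):
        """Absolute difference of group means: a toy discriminant statistic."""
        return abs(x[labels == 0].mean() - x[labels == 1].mean())

    observed = separation(x, labels)

    # Null distribution: the statistic under random relabelling.
    null = np.array([separation(x, rng.permutation(labels))
                     for _ in range(2000)])

    # One-sided p-value with the usual +1 correction.
    p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
    ```

    Because the relabelled statistics carry the same selection bias as the observed one, the comparison remains valid even when the statistic itself is optimised, which is the note's point about stepwise selection.
    
    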

  3. Estimating the number of tree species in forest populations using current vegetation survey and forest inventory and analysis approximation plots and grid intensities

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    We estimate number of tree species in National Forest populations using the nonparametric estimator. Data from the Current Vegetation Survey (CVS) of Region 6 of the USDA Forest Service were used to estimate the number of tree species with a plot close in size to the Forest Inventory and Analysis (FIA) plot and the actual CVS plot for the 5.5 km FIA grid and the 2.7 km...

  4. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    PubMed

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.

  5. Nonparametric Method for Genomics-Based Prediction of Performance of Quantitative Traits Involving Epistasis in Plant Breeding

    PubMed Central

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H.

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression. PMID:23226325

  6. Establishment of Biological Reference Intervals and Reference Curve for Urea by Exploratory Parametric and Non-Parametric Quantile Regression Models.

    PubMed

    Sarkar, Rajarshi

    2013-07-01

    The validity of the entire renal function tests as a diagnostic tool depends substantially on the Biological Reference Interval (BRI) of urea. Establishment of BRI of urea is difficult partly because exclusion criteria for selection of reference data are quite rigid and partly due to the compartmentalization considerations regarding age and sex of the reference individuals. Moreover, construction of Biological Reference Curve (BRC) of urea is imperative to highlight the partitioning requirements. This a priori study examines the data collected by measuring serum urea of 3202 age and sex matched individuals, aged between 1 and 80 years, by a kinetic UV Urease/GLDH method on a Roche Cobas 6000 auto-analyzer. Mann-Whitney U test of the reference data confirmed the partitioning requirement by both age and sex. Further statistical analysis revealed the incompatibility of the data for a proposed parametric model. Hence the data was non-parametrically analysed. BRI was found to be identical for both sexes till the 2nd decade, and the BRI for males increased progressively from the 6th decade onwards. Four non-parametric models were postulated for construction of BRC: Gaussian kernel, double kernel, local mean and local constant, of which the last one generated the best-fitting curves. Clinical decision making should become easier and diagnostic implications of renal function tests should become more meaningful if this BRI is followed and the BRC is used as a desktop tool in conjunction with similar data for serum creatinine.
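    For a single age/sex partition, a nonparametric BRI of the kind described reduces to the central 95% of the reference data. A minimal sketch with hypothetical urea values (the lognormal shape and the values are assumptions for illustration, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical serum urea values (mg/dL) for one age/sex partition.
    urea = rng.lognormal(mean=3.0, sigma=0.25, size=400)

    # Nonparametric reference interval: the central 95% of the data,
    # i.e. the 2.5th and 97.5th empirical percentiles.
    lower, upper = np.percentile(urea, [2.5, 97.5])

    # Fraction of reference individuals falling inside the interval.
    coverage = np.mean((urea >= lower) & (urea <= upper))
    ```

    Partitioning by age and sex, as the study requires, simply means computing such an interval separately within each partition; the BRC then smooths the interval limits across age.
    
    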

  7. A Bayesian Nonparametric Approach to Test Equating

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  8. Reliability of Test Scores in Nonparametric Item Response Theory.

    ERIC Educational Resources Information Center

    Sijtsma, Klaas; Molenaar, Ivo W.

    1987-01-01

    Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four "classical" lower bounds to reliability. (Author/JAZ)

  9. A Simulation Comparison of Parametric and Nonparametric Dimensionality Detection Procedures

    ERIC Educational Resources Information Center

    Mroch, Andrew A.; Bolt, Daniel M.

    2006-01-01

    Recently, nonparametric methods have been proposed that provide a dimensionally based description of test structure for tests with dichotomous items. Because such methods are based on different notions of dimensionality than are assumed when using a psychometric model, it remains unclear whether these procedures might lead to a different…

  10. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  11. A comparative study between nonlinear regression and nonparametric approaches for modelling Phalaris paradoxa seedling emergence

    USDA-ARS?s Scientific Manuscript database

    Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...

  12. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  13. Order-Constrained Bayes Inference for Dichotomous Models of Unidimensional Nonparametric IRT

    ERIC Educational Resources Information Center

    Karabatsos, George; Sheu, Ching-Fan

    2004-01-01

    This study introduces an order-constrained Bayes inference framework useful for analyzing data containing dichotomous scored item responses, under the assumptions of either the monotone homogeneity model or the double monotonicity model of nonparametric item response theory (NIRT). The framework involves the implementation of Gibbs sampling to…

  14. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
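    The kernel scaling factor discussed in this record is what is now usually called the bandwidth. A common automatic choice, Silverman's rule of thumb (a later standard reference rule, not necessarily the interactive algorithm proposed in the paper), can be sketched as:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.normal(loc=0.0, scale=2.0, size=1000)

    # Silverman's rule of thumb for a Gaussian kernel:
    #   h = 0.9 * min(sample std, IQR / 1.34) * n**(-1/5)
    n = sample.size
    q75, q25 = np.percentile(sample, [75, 25])
    spread = min(sample.std(ddof=1), (q75 - q25) / 1.34)
    h = 0.9 * spread * n ** (-0.2)
    ```

    The min of the standard deviation and the rescaled interquartile range makes the rule robust to heavy tails; the n**(-1/5) rate is the one that balances bias and variance for the mean integrated squared error criterion the paper studies.
    
    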

  15. Illiquidity premium and expected stock returns in the UK: A new approach

    NASA Astrophysics Data System (ADS)

    Chen, Jiaqi; Sherif, Mohamed

    2016-09-01

    This study examines the relative importance of liquidity risk for the time-series and cross-section of stock returns in the UK. We propose a simple way to capture the multidimensionality of illiquidity. Our analysis indicates that existing illiquidity measures have considerable asset-specific components, which justifies our new approach. Further, we use an alternative test of the Amihud (2002) measure and parametric and non-parametric methods to investigate whether liquidity risk is priced in the UK. We find that the inclusion of the illiquidity factor in the capital asset pricing model plays a significant role in explaining the cross-sectional variation in stock returns, in particular with the Fama-French three-factor model. Further, using Hansen-Jagannathan non-parametric bounds, we find that the illiquidity-augmented capital asset pricing models yield a small distance error, whereas other non-liquidity-based models fail to yield economically plausible distance values. Our findings have important implications for managing the liquidity risk of equity portfolios.

  16. Gender Wage Disparities among the Highly Educated.

    PubMed

    Black, Dan A; Haviland, Amelia; Sanders, Seth G; Taylor, Lowell J

    2008-01-01

    In the U.S. college-educated women earn approximately 30 percent less than their non-Hispanic white male counterparts. We conduct an empirical examination of this wage disparity for four groups of women-non-Hispanic white, black, Hispanic, and Asian-using the National Survey of College Graduates, a large data set that provides unusually detailed information on higher-level education. Nonparametric matching analysis indicates that among men and women who speak English at home, between 44 and 73 percent of the gender wage gaps are accounted for by such pre-market factors as highest degree and major. When we restrict attention further to women who have "high labor force attachment" (i.e., work experience that is similar to male comparables) we account for 54 to 99 percent of gender wage gaps. Our nonparametric approach differs from familiar regression-based decompositions, so for the sake of comparison we conduct parametric analyses as well. Inferences drawn from these latter decompositions can be quite misleading.

  17. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Power calculation for comparing diagnostic accuracies in a multi-reader, multi-test design.

    PubMed

    Kim, Eunhee; Zhang, Zheng; Wang, Youdan; Zeng, Donglin

    2014-12-01

    Receiver operating characteristic (ROC) analysis is widely used to evaluate the performance of diagnostic tests with continuous or ordinal responses. A popular study design for assessing the accuracy of diagnostic tests involves multiple readers interpreting multiple diagnostic test results, called the multi-reader, multi-test design. Although several different approaches to analyzing data from this design exist, few methods have discussed the sample size and power issues. In this article, we develop a power formula to compare the correlated areas under the ROC curves (AUC) in a multi-reader, multi-test design. We present a nonparametric approach to estimate and compare the correlated AUCs by extending DeLong et al.'s (1988, Biometrics 44, 837-845) approach. A power formula is derived based on the asymptotic distribution of the nonparametric AUCs. Simulation studies are conducted to demonstrate the performance of the proposed power formula and an example is provided to illustrate the proposed procedure. © 2014, The International Biometric Society.
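    The nonparametric AUC underlying DeLong-style comparisons is the Mann-Whitney probability that a diseased case scores above a non-diseased one. A minimal sketch of that estimator (toy scores, not the article's power formula or its reader/test correlation structure):

    ```python
    import numpy as np

    def auc_mann_whitney(pos, neg):
        """Nonparametric AUC: P(pos score > neg score) + 0.5 * P(tie)."""
        pos = np.asarray(pos, dtype=float)[:, None]
        neg = np.asarray(neg, dtype=float)[None, :]
        return (pos > neg).mean() + 0.5 * (pos == neg).mean()

    # Perfect separation gives AUC = 1.
    assert auc_mann_whitney([0.8, 0.9], [0.1, 0.2]) == 1.0

    # One discordant pair out of nine gives AUC = 8/9.
    auc = auc_mann_whitney([0.9, 0.6, 0.8], [0.3, 0.7, 0.2])
    ```

    DeLong et al.'s variance estimate, which the power formula builds on, comes from the structural components of exactly this pairwise-comparison statistic.
    
    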

  19. Correlations between Lymphocytes, Mid-Cell Fractions and Granulocytes with Human Blood Characteristics Using Low Power He-Ne Laser Radiation

    NASA Astrophysics Data System (ADS)

    Houssein, Hend A. A.; Jaafar, M. S.; Ramli, R. M.; Ismail, N. E.; Ahmad, A. L.; Bermakai, M. Y.

    2010-07-01

    In this study, the subpopulations of human blood parameters including lymphocytes, the mid-cell fractions (eosinophils, basophils, and monocytes), and granulocytes were determined by electronic sizing in the Health Centre of Universiti Sains Malaysia. These parameters were correlated with human blood characteristics such as age, gender, ethnicity, and blood type, before and after irradiation with a 0.95 mW He-Ne laser (λ = 632.8 nm). The correlations were obtained using pattern finding, paired non-parametric tests, and independent non-parametric tests in SPSS version 11.5, together with centroid and peak positions and flux variations. The findings show that the centroid and peak positions, flux peak, and total flux were strongly correlated and can serve as a significant indicator for blood analyses. Furthermore, the encircled flux analysis demonstrated good future prospects in blood research, thus leading the way as a vibrant diagnostic tool to clarify diseases associated with blood.

  20. Gender Wage Disparities among the Highly Educated

    PubMed Central

    Black, Dan A.; Haviland, Amelia; Sanders, Seth G.; Taylor, Lowell J.

    2015-01-01

    In the U.S. college-educated women earn approximately 30 percent less than their non-Hispanic white male counterparts. We conduct an empirical examination of this wage disparity for four groups of women—non-Hispanic white, black, Hispanic, and Asian—using the National Survey of College Graduates, a large data set that provides unusually detailed information on higher-level education. Nonparametric matching analysis indicates that among men and women who speak English at home, between 44 and 73 percent of the gender wage gaps are accounted for by such pre-market factors as highest degree and major. When we restrict attention further to women who have “high labor force attachment” (i.e., work experience that is similar to male comparables) we account for 54 to 99 percent of gender wage gaps. Our nonparametric approach differs from familiar regression-based decompositions, so for the sake of comparison we conduct parametric analyses as well. Inferences drawn from these latter decompositions can be quite misleading. PMID:26097255

  1. Spectral decompositions of multiple time series: a Bayesian non-parametric approach.

    PubMed

    Macaro, Christian; Prado, Raquel

    2014-01-01

    We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.

  2. Monitoring the Level of Students' GPAs over Time

    ERIC Educational Resources Information Center

    Bakir, Saad T.; McNeal, Bob

    2010-01-01

    A nonparametric (or distribution-free) statistical quality control chart is used to monitor the cumulative grade point averages (GPAs) of students over time. The chart is designed to detect any statistically significant positive or negative shifts in student GPAs from a desired target level. This nonparametric control chart is based on the…

  3. An Examination of Parametric and Nonparametric Dimensionality Assessment Methods with Exploratory and Confirmatory Mode

    ERIC Educational Resources Information Center

    Kogar, Hakan

    2018-01-01

    The aim of the present research study was to compare the findings from the nonparametric MSA, DIMTEST and DETECT and the parametric dimensionality determining methods in various simulation conditions by utilizing exploratory and confirmatory methods. For this purpose, various simulation conditions were established based on number of dimensions,…

  4. Three Classes of Nonparametric Differential Step Functioning Effect Estimators

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2008-01-01

    The examination of measurement invariance in polytomous items is complicated by the possibility that the magnitude and sign of lack of invariance may vary across the steps underlying the set of polytomous response options, a concept referred to as differential step functioning (DSF). This article describes three classes of nonparametric DSF effect…

  5. A Nonparametric Framework for Comparing Trends and Gaps across Tests

    ERIC Educational Resources Information Center

    Ho, Andrew Dean

    2009-01-01

    Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then…

  6. A Nonparametric K-Sample Test for Equality of Slopes.

    ERIC Educational Resources Information Center

    Penfield, Douglas A.; Koffler, Stephen L.

    1986-01-01

    The development of a nonparametric K-sample test for equality of slopes using Puri's generalized L statistic is presented. The test is recommended when the assumptions underlying the parametric model are violated. This procedure replaces original data with either ranks (for data with heavy tails) or normal scores (for data with light tails).…

  7. A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Colp, S. Mitchell

    2018-01-01

    Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…

  8. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  9. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    ERIC Educational Resources Information Center

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…

  10. LOCATING NEARBY SOURCES OF AIR POLLUTION BY NONPARAMETRIC REGRESSION OF ATMOSPHERIC CONCENTRATIONS ON WIND DIRECTION. (R826238)

    EPA Science Inventory

    The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
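    Nonparametric regression of concentration on wind direction, as described, can be sketched with a Nadaraya-Watson estimator using a Gaussian kernel on the minimal angular difference. The synthetic source peaking near 90 degrees is invented for illustration; the EPA study's data and bandwidth choice are not reproduced here:

    ```python
    import numpy as np

    def kernel_regression(theta_grid, theta_obs, conc, bandwidth=15.0):
        """Nadaraya-Watson estimate of mean concentration vs wind direction
        (degrees), with a Gaussian kernel on the wrapped angular difference."""
        diff = np.abs(theta_grid[:, None] - theta_obs[None, :])
        diff = np.minimum(diff, 360.0 - diff)      # wrap around the circle
        w = np.exp(-0.5 * (diff / bandwidth) ** 2)
        return (w * conc[None, :]).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(3)
    theta_obs = rng.uniform(0.0, 360.0, 500)
    # Hypothetical nearby source raising concentrations around 90 degrees.
    conc = (5.0
            + 20.0 * np.exp(-0.5 * ((theta_obs - 90.0) / 20.0) ** 2)
            + rng.normal(0.0, 1.0, 500))

    grid = np.arange(0.0, 360.0, 5.0)
    estimate = kernel_regression(grid, theta_obs, conc)
    peak_direction = grid[np.argmax(estimate)]
    ```

    The smooth curve's maximum points toward the source, which is the locating principle the abstract describes; error bars would come from the kernel-weighted residual variance.
    
    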

  11. A Unifying Framework for Teaching Nonparametric Statistical Tests

    ERIC Educational Resources Information Center

    Bargagliotti, Anna E.; Orrison, Michael E.

    2014-01-01

    Increased importance is being placed on statistics at both the K-12 and undergraduate level. Research divulging effective methods to teach specific statistical concepts is still widely sought after. In this paper, we focus on best practices for teaching topics in nonparametric statistics at the undergraduate level. To motivate the work, we…

  12. Teaching Nonparametric Statistics Using Student Instrumental Values.

    ERIC Educational Resources Information Center

    Anderson, Jonathan W.; Diddams, Margaret

    Nonparametric statistics are often difficult to teach in introduction to statistics courses because of the lack of real-world examples. This study demonstrated how teachers can use differences in the rankings and ratings of undergraduate and graduate values to discuss: (1) ipsative and normative scaling; (2) uses of the Mann-Whitney U-test; and…
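    The Mann-Whitney U comparison this record teaches with value rankings can be sketched on hypothetical ratings from two student groups (the numbers are invented classroom-style data, not the study's):

    ```python
    from scipy.stats import mannwhitneyu

    # Hypothetical importance ratings (1-9) of one instrumental value.
    undergrad = [7, 5, 6, 4, 6, 5, 7, 3]
    graduate = [8, 9, 7, 9, 8, 6, 9, 8]

    # Two-sided Mann-Whitney U test: do the two groups' rating
    # distributions differ in location?
    u_stat, p = mannwhitneyu(undergrad, graduate, alternative="two-sided")
    ```

    Because the test operates on ranks, it suits exactly the ordinal rating data the teaching example relies on.
    
    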

  13. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The model comprises several equations, each with two components: parametric and nonparametric. The model used here has a linear function as the parametric component and a polynomial truncated spline as the nonparametric component, so it can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification, and the percentage of economic growth. Based on identification of the relationships between the response and predictor variables, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by the high coefficient of determination of 90 percent.

  14. Potency control of modified live viral vaccines for veterinary use.

    PubMed

    Terpstra, C; Kroese, A H

    1996-04-01

    This paper reviews various aspects of efficacy, and methods for assaying the potency of modified live viral vaccines. The pros and cons of parametric versus non-parametric methods for analysis of potency assays are discussed and critical levels of protection, as determined by the target(s) of vaccination, are exemplified. Recommendations are presented for designing potency assays on master virus seeds and vaccine batches.

  16. The Impact of ICT on Educational Performance and its Efficiency in Selected EU and OECD Countries: A Non-Parametric Analysis

    ERIC Educational Resources Information Center

    Aristovnik, Aleksander

    2012-01-01

    The purpose of the paper is to review some previous research examining ICT efficiency and the impact of ICT on educational output/outcome, as well as different conceptual and methodological issues related to performance measurement. Moreover, a definition, measurements and the empirical application of a model measuring the efficiency of ICT use…

  17. SigEMD: A powerful method for differential gene expression analysis in single-cell RNA sequencing data.

    PubMed

    Wang, Tianyu; Nabavi, Sheida

    2018-04-24

    Differential gene expression analysis is one of the significant efforts in single cell RNA sequencing (scRNAseq) analysis to discover the specific changes in expression levels of individual cell types. Since scRNAseq exhibits multimodality, large amounts of zero counts, and sparsity, it is different from the traditional bulk RNA sequencing (RNAseq) data. The new challenges of scRNAseq data promote the development of new methods for identifying differentially expressed (DE) genes. In this study, we proposed a new method, SigEMD, that combines a data imputation approach, a logistic regression model and a nonparametric method based on the Earth Mover's Distance, to precisely and efficiently identify DE genes in scRNAseq data. The regression model and data imputation are used to reduce the impact of large amounts of zero counts, and the nonparametric method is used to improve the sensitivity of detecting DE genes from multimodal scRNAseq data. By additionally employing gene interaction network information to adjust the final states of DE genes, we further reduce the false positives of calling DE genes. We used simulated datasets and real datasets to evaluate the detection accuracy of the proposed method and to compare its performance with those of other differential expression analysis methods. Results indicate that the proposed method has an overall powerful performance in terms of precision in detection, sensitivity, and specificity. Copyright © 2018 Elsevier Inc. All rights reserved.
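
    For two one-dimensional samples of equal size, the Earth Mover's (1-Wasserstein) distance underlying SigEMD reduces to the mean absolute difference between sorted samples. A minimal sketch (not the authors' implementation, and the expression values are invented):

```python
def emd_1d(x, y):
    """1-D Earth Mover's distance between equal-size samples:
    average transport cost after optimally pairing sorted values."""
    assert len(x) == len(y), "equal sample sizes assumed in this sketch"
    xs, ys = sorted(x), sorted(y)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# A shifted expression distribution moves every unit of mass by 1.
d = emd_1d([0, 1, 2], [1, 2, 3])  # 1.0
```

    Unlike a difference in means, this distance also responds to changes in shape, which is why it suits multimodal single-cell expression distributions.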

  18. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver. 3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
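
    The bootstrap side of this comparison can be sketched without SAS. A minimal percentile-CI example in pure Python (the subject data are invented, and the simple percentile method is shown rather than BCa):

```python
import random

random.seed(0)

# Hypothetical log(AUC) formulation effects for 12 subjects (invented data)
effects = [0.05, -0.02, 0.11, 0.03, -0.07, 0.08,
           0.01, 0.04, -0.01, 0.09, 0.02, 0.06]

def percentile_ci(data, n_boot=2000, alpha=0.10):
    """Nonparametric bootstrap percentile CI for the mean effect."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]  # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = percentile_ci(effects)
point = sum(effects) / len(effects)  # observed mean formulation effect
```

    BCa intervals additionally correct the percentile endpoints for bias and skewness, which is why they can shift outside the parametric CIs when the data are non-normal.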

  19. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver. 3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699

  20. Experiment Design for Nonparametric Models Based On Minimizing Bayes Risk: Application to Voriconazole1

    PubMed Central

    Bayard, David S.; Neely, Michael

    2016-01-01

    An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a nonparametric model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher Information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the nonparametric model. Specifically, the problem of identifying an individual from a nonparametric prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient’s behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (Multiple-Model Optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications. PMID:27909942
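
    The Bayes-risk idea can be illustrated in the simplest case: a nonparametric prior with two support points observed through Gaussian noise. A minimal sketch (the support points, weights and noise level are invented for illustration, not taken from the voriconazole study):

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bayes_risk_two_points(m1, m2, w1, sigma):
    """Bayes risk (misclassification probability) for deciding between two
    support points m1 < m2 with prior weights (w1, 1-w1), observed through
    N(m, sigma^2) noise. The Bayes threshold is where the weighted
    densities cross; for equal weights it is the midpoint."""
    w2 = 1.0 - w1
    t = (m1 + m2) / 2 + sigma**2 * math.log(w1 / w2) / (m2 - m1)
    # P(error) = w1 * P(X > t | m1) + w2 * P(X < t | m2)
    return (w1 * (1 - normal_cdf((t - m1) / sigma))
            + w2 * normal_cdf((t - m2) / sigma))

# Equal weights, means -1 and +1, unit noise: risk = Phi(-1) ~ 0.1587
risk = bayes_risk_two_points(-1.0, 1.0, 0.5, 1.0)
```

    A design that separates the support points further (or reduces sigma) lowers this risk, which is the quantity an MMopt-style criterion seeks to minimize across candidate sampling schedules.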

  1. Modeling SF-6D Hong Kong standard gamble health state preference data using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer A; Brazier, John E; McGhee, Sarah

    2013-01-01

    This article reports on the findings from applying a recently described approach to modeling health state valuation data and the impact of respondent characteristics on health state valuations. The approach applies a nonparametric Bayesian model to estimate a six-dimensional health state short form (derived from the short-form 36 health survey, SF-6D) health state valuation algorithm. A sample of 197 states defined by the SF-6D has been valued by a representative sample of the Hong Kong general population using standard gamble. The article reports the application of the nonparametric model and compares it to the original model estimated using a conventional parametric random effects model. The two models are compared theoretically and in terms of empirical performance. Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics than observed in the survey sample and that it allows the impact of respondent characteristics to vary by health state (while ensuring that full health passes through unity). The results suggest an important age effect, with sex having some effect but the remaining covariates having no discernible effect. The nonparametric Bayesian model is argued to be more theoretically appropriate than previously used parametric models. Furthermore, it is more flexible in taking into account the impact of covariates. Copyright © 2013, International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc.

  2. Total recognition discriminability in Huntington's and Alzheimer's disease.

    PubMed

    Graves, Lisa V; Holden, Heather M; Delano-Wood, Lisa; Bondi, Mark W; Woods, Steven Paul; Corey-Bloom, Jody; Salmon, David P; Delis, Dean C; Gilbert, Paul E

    2017-03-01

    Both the original and second editions of the California Verbal Learning Test (CVLT) provide an index of total recognition discriminability (TRD) but respectively utilize nonparametric and parametric formulas to compute the index. However, the degree to which population differences in TRD may vary across applications of these nonparametric and parametric formulas has not been explored. We evaluated individuals with Huntington's disease (HD), individuals with Alzheimer's disease (AD), healthy middle-aged adults, and healthy older adults who were administered the CVLT-II. Yes/no recognition memory indices were generated, including raw nonparametric TRD scores (as used in CVLT-I) and raw and standardized parametric TRD scores (as used in CVLT-II), as well as false positive (FP) rates. Overall, the patient groups had significantly lower TRD scores than their comparison groups. The application of nonparametric and parametric formulas resulted in comparable effect sizes for all group comparisons on raw TRD scores. Relative to the HD group, the AD group showed comparable standardized parametric TRD scores (despite lower raw nonparametric and parametric TRD scores), whereas the previous CVLT literature has shown that standardized TRD scores are lower in AD than in HD. Possible explanations for the similarity in standardized parametric TRD scores in the HD and AD groups in the present study are discussed, with an emphasis on the importance of evaluating TRD scores in the context of other indices such as FP rates in an effort to fully capture recognition memory function using the CVLT-II.

  3. New approaches to the analysis of population trends in land birds: Comment

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1997-01-01

    James et al. (1996, Ecology 77:13-27) used data from the North American Breeding Bird Survey (BBS) to examine geographic variability in patterns of population change for 26 species of wood warblers. They emphasized the importance of evaluating nonlinear patterns of change in bird populations, proposed LOESS-based non-parametric and semi-parametric analyses of BBS data, and contrasted their results with other analyses, including those of Robbins et al. (1989, Proceedings of the National Academy of Sciences 86:7658-7662) and Peterjohn et al. (1995, Pages 3-39 in T. E. Martin and D. M. Finch, eds. Ecology and management of Neotropical migratory birds: a synthesis and review of critical issues. Oxford University Press, New York). In this note, we briefly comment on some of the issues that arose from their analysis of BBS data, suggest a few aspects of the survey that should inspire caution in analysts, and review the differences between the LOESS-based procedures and other procedures (e.g., Link and Sauer 1994). We strongly discourage the use of James et al.'s completely non-parametric procedure, which fails to account for observer effects. Our comparisons of estimators add to the evidence already present in the literature of the bias associated with omitting observer information in analyses of BBS data. Bias resulting from changes in observer abilities should be a consideration in any analysis of BBS data.

  4. How relevant is environmental quality to per capita health expenditures? Empirical evidence from panel of developing countries.

    PubMed

    Yahaya, Adamu; Nor, Norashidah Mohamed; Habibullah, Muzafar Shah; Ghani, Judhiana Abd; Noor, Zaleha Mohd

    2016-01-01

    Developing countries have witnessed economic growth as their GDP keeps increasing steadily over the years. This growth has led to higher energy consumption, which in turn increases the air pollution that poses a danger to human health. People's healthcare demand also increases owing to changes in socioeconomic life and improvements in health technology. This study investigates the impact of environmental quality on per capita health expenditure in 125 developing countries within a panel cointegration framework over 1995 to 2012. We found that a long-run relationship exists between per capita health expenditure and all explanatory variables, as they were panel cointegrated. The explanatory variables were statistically significant in explaining per capita health expenditure, and CO2 had the highest explanatory power. The impact of the explanatory variables is greater in the long run than in the short run. Based on these results, we conclude that environmental quality is a powerful determinant of health expenditure in developing countries. Therefore, developing countries should, as a matter of health care policy, give the provision of healthy air priority via effective implementation of environmental management and control measures to lessen the pressure on health care expenditure. Moreover, more environmental proxies and alternative methods should be considered in future research.
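
    The logic of a cointegration test can be sketched in a few lines: regress one nonstationary series on another and check whether the residuals are much less persistent than the series themselves (the Engle-Granger idea, shown here on simulated data rather than the panel estimator used in the paper):

```python
import random

random.seed(42)

# Simulate a random walk x_t and a cointegrated partner y_t = 2*x_t + noise
n = 500
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))
y = [2.0 * xi + random.gauss(0, 1) for xi in x]

def ols_fit(x, y):
    """Closed-form OLS of y on x with intercept; returns (alpha, beta)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return my - beta * mx, beta

def acf1(z):
    """Lag-1 autocorrelation, used here as a rough persistence diagnostic
    (a proper cointegration test would use ADF critical values)."""
    m = sum(z) / len(z)
    return (sum((z[i] - m) * (z[i - 1] - m) for i in range(1, len(z)))
            / sum((v - m) ** 2 for v in z))

alpha, beta = ols_fit(x, y)
resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
# Cointegration shows up as residuals far less persistent than x itself
```

    The panel tests used in the paper apply the same residual-stationarity logic jointly across the 125 countries.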

  5. Economic growth and CO2 emissions: an investigation with smooth transition autoregressive distributed lag models for the 1800-2014 period in the USA.

    PubMed

    Bildirici, Melike; Ersin, Özgür Ömer

    2018-01-01

    The study aims to combine the autoregressive distributed lag (ARDL) cointegration framework with smooth transition autoregressive (STAR)-type nonlinear econometric models for causal inference. The proposed STAR distributed lag (STARDL) models offer new insights for modeling nonlinearity in the long- and short-run relations between the analyzed variables: the STARDL method allows modeling and testing nonlinearity in the short-run parameters, the long-run parameters, or both. To this aim, the relation between CO2 emissions and economic growth rates in the USA is investigated for the 1800-2014 period, one of the largest data sets available. The proposed hybrid models are the logistic, exponential, and second-order logistic smooth transition autoregressive distributed lag (LSTARDL, ESTARDL, and LSTAR2DL) models, which combine the STAR framework with nonlinear ARDL-type cointegration to augment the linear ARDL approach with smooth transitional nonlinearity. The proposed models provide a new approach to the relevant econometrics and environmental economics literature. Our results indicate the presence of asymmetric long-run and short-run relations running from GDP towards CO2 emissions. For the estimated LSTAR2DL and LSTARDL models, the results point to important differences in the response of CO2 emissions between regimes 1 and 2.

  6. The analysis of professional competencies of a lecturer in adult education.

    PubMed

    Žeravíková, Iveta; Tirpáková, Anna; Markechová, Dagmar

    2015-01-01

    In this article, we present an andragogical research project and evaluate its results using nonparametric statistical methods and the semantic differential method. The presented research was carried out in the years 2012-2013 in the dissertation of I. Žeravíková: Analysis of professional competencies of lecturer and creating his competence profile (Žeravíková 2013). Its purpose was, based on an analysis of the work activities of a lecturer, to identify the most important professional competencies and to create a suggested competence profile of a lecturer in adult education.

  7. Testing the Hypothesis of a Homoscedastic Error Term in Simple, Nonparametric Regression

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    2006-01-01

    Consider the nonparametric regression model Y = m(X) + τ(X)ε, where X and ε are independent random variables, ε has a median of zero and variance σ², τ is some unknown function used to model heteroscedasticity, and m(X) is an unknown function reflecting some conditional measure of location associated…
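
    A minimal simulation of this model (with invented m and τ, since the abstract leaves them unspecified) makes the heteroscedasticity concrete:

```python
import random

random.seed(1)

def m(x):
    return x ** 2              # hypothetical location function m(X)

def tau(x):
    return 1.0 + 2.0 * abs(x)  # hypothetical scale function tau(X)

# Generate Y = m(X) + tau(X) * eps with X and eps independent
n = 4000
data = []
for _ in range(n):
    x = random.uniform(-1, 1)
    eps = random.gauss(0, 1)   # median zero, variance sigma^2 = 1
    data.append((x, m(x) + tau(x) * eps))

def resid_spread(pairs):
    """Sample standard deviation of Y - m(X) over the given pairs."""
    r = [y - m(x) for x, y in pairs]
    mu = sum(r) / len(r)
    return (sum((v - mu) ** 2 for v in r) / (len(r) - 1)) ** 0.5

inner = [(x, y) for x, y in data if abs(x) < 0.25]  # tau near 1
outer = [(x, y) for x, y in data if abs(x) > 0.75]  # tau near 2.75
# The error spread grows with |x|, so homoscedasticity fails here
```

    A test of the homoscedasticity hypothesis asks whether such a difference in spread is larger than chance would allow when τ is constant.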

  8. Examination of Polytomous Items' Psychometric Properties According to Nonparametric Item Response Theory Models in Different Test Conditions

    ERIC Educational Resources Information Center

    Sengul Avsar, Asiye; Tavsancil, Ezel

    2017-01-01

    This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three sample sizes (100, 250 and 500)--were generated by conducting 20…

  9. Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory

    ERIC Educational Resources Information Center

    Wells, Craig S.; Bolt, Daniel M.

    2008-01-01

    Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…

  10. Joint Entropy Minimization for Learning in Nonparametric Framework

    DTIC Science & Technology

    2006-06-09


  11. How to Compare Parametric and Nonparametric Person-Fit Statistics Using Real Data

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2017-01-01

    Person-fit assessment (PFA) is concerned with uncovering atypical test performance as reflected in the pattern of scores on individual items on a test. Existing person-fit statistics (PFSs) include both parametric and nonparametric statistics. Comparison of PFSs has been a popular research topic in PFA, but almost all comparisons have employed…

  12. A Comparative Study of Test Data Dimensionality Assessment Procedures Under Nonparametric IRT Models

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; van der Ark, L. Andries; Sijtsma, Klaas

    2004-01-01

    In this article, an overview of nonparametric item response theory methods for determining the dimensionality of item response data is provided. Four methods were considered: MSP, DETECT, HCA/CCPROX, and DIMTEST. First, the methods were compared theoretically. Second, a simulation study was done to compare the effectiveness of MSP, DETECT, and…

  13. An entropy-based nonparametric test for the validation of surrogate endpoints.

    PubMed

    Miao, Xiaopeng; Wang, Yong-Cheng; Gangopadhyay, Ashis

    2012-06-30

    We present a nonparametric test to validate surrogate endpoints based on measure of divergence and random permutation. This test is a proposal to directly verify the Prentice statistical definition of surrogacy. The test does not impose distributional assumptions on the endpoints, and it is robust to model misspecification. Our simulation study shows that the proposed nonparametric test outperforms the practical test of the Prentice criterion in terms of both robustness of size and power. We also evaluate the performance of three leading methods that attempt to quantify the effect of surrogate endpoints. The proposed method is applied to validate magnetic resonance imaging lesions as the surrogate endpoint for clinical relapses in a multiple sclerosis trial. Copyright © 2012 John Wiley & Sons, Ltd.
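
    The random-permutation component of such a test is easy to sketch. For tiny samples one can even enumerate every reassignment exactly (the data below are illustrative, not the surrogate-endpoint statistic itself):

```python
from itertools import combinations

def exact_permutation_p(a, b):
    """Two-sided exact permutation test for a difference in means:
    enumerate every way to split the pooled data into groups of the
    original sizes and count splits at least as extreme as observed."""
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    count = total = 0
    for idx in combinations(range(len(pooled)), len(a)):
        ga = [pooled[i] for i in idx]
        gb = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
        if diff >= observed - 1e-12:
            count += 1
        total += 1
    return count / total

p = exact_permutation_p([1, 2, 3], [10, 11, 12])  # 2/20 = 0.1
```

    For larger samples the full enumeration is replaced by a random sample of permutations; the divergence-based statistic of the paper slots in where the mean difference appears here.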

  14. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
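
    One classical estimator of this type, shown for illustration (a maximum-likelihood form for a homogeneous Poisson pattern; the paper's order-statistic estimator and its truncation adjustments differ), is λ̂ = n / (π Σ dᵢ²):

```python
import math

def poisson_density_estimate(distances):
    """Estimate plant density from point-to-nearest-plant distances d_i,
    assuming a homogeneous Poisson pattern: lambda_hat = n / (pi * sum d_i^2).
    (ML-type form for illustration; order-statistic refinements exist.)"""
    n = len(distances)
    return n / (math.pi * sum(d * d for d in distances))

# If every nearest plant sits at distance 1/sqrt(pi), the searched discs
# each have unit area, and the estimated density is 1 plant per unit area.
d = [1.0 / math.sqrt(math.pi)] * 5
lam = poisson_density_estimate(d)  # 1.0
```

    Doubling every distance quarters the estimate, reflecting that density scales inversely with the area searched before a neighbor is found.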

  15. Mechanics analysis and design of fractal interconnects for stretchable batteries

    NASA Astrophysics Data System (ADS)

    Huang, Yonggang

    2014-03-01

    An important trend in electronics involves the development of materials, mechanical designs and manufacturing strategies that enable the use of unconventional substrates, such as polymer films, metal foils, paper sheets or rubber slabs. The last possibility is particularly challenging because the systems must accommodate not only bending but also stretching. Although several approaches are available for the electronics, a persistent difficulty is in power supplies that have similar mechanical properties, to allow their co-integration with the electronics. Here we introduce a set of materials and design concepts for a rechargeable lithium ion battery technology that exploits thin, low modulus silicone elastomers as substrates, with a segmented design in the active materials, and unusual "self-similar" interconnect structures between them. The result enables reversible levels of stretchability up to 300%, while maintaining capacity densities of ~1.1 mAh cm⁻². Stretchable wireless power transmission systems provide the means to charge these types of batteries, without direct physical contact.

  16. [Public spending on health and population health in Algeria: an econometric analysis].

    PubMed

    Messaili, Moussa; Kaïd Tlilane, Nouara

    2017-07-10

    Objective: The objective of this study was to estimate the impact of public spending on health, among other determinants of health, on the health of the population in Algeria, using life expectancy (men and women) and the infant mortality rate as indicators of health status. Methods: We conducted a longitudinal study over the period from 1974 to 2010 using the ARDL (Autoregressive Distributed Lag) approach to cointegration to estimate the short-term and long-term relationships. Results: Public spending on health has a positive but not statistically significant impact, in both the long and short term, on life expectancy (men and women). However, public spending significantly reduces the infant mortality rate. The long-term impact of the number of hospital beds is significant for the life expectancy of men, but not for women or for infant mortality; it is significant for all indicators in the short-term relationship. The most important variables in improving the health of the population are real GDP per capita and the fertility rate.

  17. Rapid Economic Growth and Natural Gas Consumption Nexus: Looking forward from Perspective of 11th Malaysian Plan

    NASA Astrophysics Data System (ADS)

    Bekhet, H. A.; Yasmin, T.

    2016-03-01

    The present study investigates the relationship between economic growth and energy consumption by incorporating CO2 emissions, natural gas consumption and population in Malaysia. Annual data were used, and the F-bounds test and Granger causality tests were applied to test for the existence of a long-run relationship between the series. The results show that the variables are cointegrated in a long-run relationship. The results also indicate that natural gas consumption is an important contributor to energy demand, and hence economic growth, in the case of Malaysia. The causality analysis highlights that the feedback hypothesis holds between economic growth and energy consumption, while the conservation hypothesis is validated between natural gas consumption and economic growth, implying that economic growth will drive natural gas consumption policies in the future. This study opens up a new direction for policy makers to formulate a comprehensive natural gas policy to sustain the environment over a long span of time and to achieve the 11th MP targets.

  18. Export product diversification and the environmental Kuznets curve: evidence from Turkey.

    PubMed

    Gozgor, Giray; Can, Muhlis

    2016-11-01

    Countries try to stabilize the demand for energy on one hand and sustain economic growth on the other, but worsening global warming and climate change problems have put pressure on them. This paper estimates the environmental Kuznets curve over the period 1971-2010 in Turkey, both in the short and the long run. For this purpose, a unit root test with structural breaks and a cointegration analysis with multiple endogenous structural breaks are used. The effects of energy consumption and export product diversification on CO2 emissions are also controlled for in the dynamic empirical models. It is observed that the environmental Kuznets curve hypothesis is valid in Turkey in both the short run and the long run. A positive effect of energy consumption on CO2 emissions is also obtained in the long run. In addition, it is found that greater product diversification of exports yields higher CO2 emissions in the long run. Inferences and policy implications are also discussed.

  19. A bias-corrected estimator in multiple imputation for missing data.

    PubMed

    Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki

    2018-05-29

    Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is rather appealing in practice since it is possible to use ordinary statistical methods for a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI to obtain a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after the imputation by using the density ratio between the imputation model and the true conditional density function for the missing variable as a weight. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method by using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Researches of fruit quality prediction model based on near infrared spectrum

    NASA Astrophysics Data System (ADS)

    Shen, Yulin; Li, Lian

    2018-04-01

    With improving standards for food quality and safety, people pay more attention to the internal quality of fruit, so its measurement is increasingly imperative. In general, nondestructive soluble solids content (SSC) and total acid content (TAC) analysis is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model based on SSC and TAC for near-infrared spectra. First, prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS_SVM classifiers are designed and implemented. Then, in the NSCT domain, a median filter and a Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Third, we obtain the optimal models by comparing 15 kinds of prediction model under a multi-classifier competition mechanism; nonparametric estimation is introduced to measure the effectiveness of each model, with the reliability and variance of the nonparametric evaluation used to assess each prediction result and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this approach can achieve an optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.

  1. Statistical Models and Inference Procedures for Structural and Materials Reliability

    DTIC Science & Technology

    1990-12-01

    Some general stress-strength models were also developed and applied to the failure of systems subject to cyclic loading. Involved in the failure of...process control ideas and sequential design and analysis methods. Finally, smooth nonparametric quantile function estimators were studied. All of

  2. The use of generalized linear models and generalized estimating equations in bioarchaeological studies.

    PubMed

    Nikita, Efthymia

    2014-03-01

    The current article explores whether generalized linear models (GLM) and generalized estimating equations (GEE) can be used in place of conventional statistical analyses in the study of ordinal data that code an underlying continuous variable, like entheseal changes. The analysis of artificial data and of ordinal data expressing entheseal changes in archaeological North African populations gave the following results. Parametric and nonparametric tests give convergent results, particularly for P values <0.1, irrespective of whether the underlying variable is normally distributed, on the condition that the samples involved in the tests are of approximately equal size. If this prerequisite holds and the samples have equal variances, analysis of covariance may be adopted. GLM are not subject to these constraints and give results that converge with those of all the nonparametric tests. They can therefore be used instead of traditional tests, since they provide the same information with the added advantages of allowing the study of the simultaneous impact of multiple predictors and their interactions and the modeling of the experimental data. However, GLM should be replaced by GEE for the study of bilateral asymmetry and, in general, whenever paired samples are tested, because GEE are appropriate for correlated data. Copyright © 2013 Wiley Periodicals, Inc.

  3. Introducing Hurst exponent in pair trading

    NASA Astrophysics Data System (ADS)

    Ramos-Requena, J. P.; Trinidad-Segovia, J. E.; Sánchez-Granero, M. A.

    2017-12-01

    In this paper we introduce a new methodology for pair trading, based on the calculation of the Hurst exponent of a pair. Our approach is inspired by the classical concepts of cointegration and mean reversion, but joins them under a single strategy. We show that the Hurst approach gives better results than the classical Distance Method and Correlation strategies in different scenarios. The results obtained show that this new methodology is consistent and suitable, reducing the drawdown of trading relative to the classical strategies and thereby achieving better performance.

  4. Understanding price discovery in interconnected markets: Generalized Langevin process approach and simulation

    NASA Astrophysics Data System (ADS)

    Schenck, Natalya A.; Horvath, Philip A.; Sinha, Amit K.

    2018-02-01

    While the literature on the price discovery process and information flow between dominant and satellite markets is extensive, most studies have applied an approach that can be traced back to Hasbrouck (1995) or Gonzalo and Granger (1995). In this paper, however, we propose a Generalized Langevin process with an asymmetric double-well potential function, with cointegrated time series and interconnected diffusion processes, to model the information flow and price discovery process in two interconnected markets, a dominant one and a satellite. A simulated illustration of the model is also provided.

  5. Nonparametric Statistics Test Software Package.

    DTIC Science & Technology

    1983-09-01

    statistics because of their acceptance in the academic world, the availability of computer support, and flexibility in model building. Nonparametric...

  6. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…

  7. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  8. The Impact of Ignoring the Level of Nesting Structure in Nonparametric Multilevel Latent Class Models

    ERIC Educational Resources Information Center

    Park, Jungkyu; Yu, Hsiu-Ting

    2016-01-01

    The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) that is used to analyze data with a nested structure. The nonparametric version of an MLCM assumes a discrete latent variable at the higher level of the nesting structure to account for the dependency among observations nested within a higher-level unit. In…

  9. Nonparametric projections of forest and rangeland condition indicators: A technical document supporting the 2005 USDA Forest Service RPA Assessment Update

    Treesearch

    John Hof; Curtis Flather; Tony Baltic; Rudy King

    2006-01-01

    The 2005 Forest and Rangeland Condition Indicator Model is a set of classification trees for forest and rangeland condition indicators at the national scale. This report documents the development of the database and the nonparametric statistical estimation for this analytical structure, with emphasis on three special characteristics of condition indicator production...

  10. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  11. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.

  12. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
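    The general recipe can be sketched as a residual bootstrap around a kernel smoother. The smoother (Nadaraya-Watson), bandwidth, and percentile construction below are illustrative choices, not the paper's exact procedure; an observation would be flagged as anomalous if it falls outside the interval for its input.

```python
import numpy as np

def nw_smooth(x_train, y_train, x_eval, bw):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bw) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def bootstrap_pi(x, y, x_eval, bw=0.1, n_boot=200, alpha=0.1, seed=0):
    """Percentile bootstrap prediction intervals for a nonparametric regression."""
    rng = np.random.default_rng(seed)
    fit = nw_smooth(x, y, x, bw)
    resid = y - fit
    preds = np.empty((n_boot, len(x_eval)))
    for b in range(n_boot):
        y_b = fit + rng.choice(resid, size=len(y), replace=True)   # resample residuals
        # refit on the bootstrap sample, then add noise for a *prediction* interval
        preds[b] = nw_smooth(x, y_b, x_eval, bw) + rng.choice(resid, size=len(x_eval), replace=True)
    return np.quantile(preds, alpha / 2, axis=0), np.quantile(preds, 1 - alpha / 2, axis=0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 120))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 120)
xe = np.linspace(0.1, 0.9, 30)
lo, hi = bootstrap_pi(x, y, xe, bw=0.1)
```

No distributional assumption on the noise is needed, which is the appeal of the bootstrap approach described in the abstract.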

  13. A Monte Carlo Study of the Effect of Item Characteristic Curve Estimation on the Accuracy of Three Person-Fit Statistics

    ERIC Educational Resources Information Center

    St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane

    2009-01-01

    To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…

  14. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…
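    The essential trick with cluster-randomized data is to resample whole clusters rather than individual students, so that within-cluster dependence is preserved in each bootstrap replicate. A minimal sketch with simulated "classroom" data (the data and cluster sizes are invented, not the study's):

```python
import numpy as np

def cluster_bootstrap_r_ci(x, y, cluster, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Pearson's r, resampling clusters with replacement."""
    rng = np.random.default_rng(seed)
    ids = np.unique(cluster)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        pick = rng.choice(ids, size=len(ids), replace=True)     # resample whole clusters
        xb = np.concatenate([x[cluster == c] for c in pick])
        yb = np.concatenate([y[cluster == c] for c in pick])
        rs[b] = np.corrcoef(xb, yb)[0, 1]
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])

# Illustrative clustered data: 20 clusters of 15 observations with a shared cluster effect.
rng = np.random.default_rng(2)
cluster = np.repeat(np.arange(20), 15)
u = np.repeat(rng.standard_normal(20), 15)
x = u + rng.standard_normal(300)
y = 0.5 * x + u + rng.standard_normal(300)
lo_r, hi_r = cluster_bootstrap_r_ci(x, y, cluster)
```

If the interval excludes zero, the null hypothesis of no association is rejected at the corresponding level, without any normality assumption on the data.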

  15. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  16. Learning Circulant Sensing Kernels

    DTIC Science & Technology

    2014-03-01

    Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance. We...scale...matrices, Tropp et al. [28] describes a random filter for acquiring a signal x̄; Haupt et al. [12] describes a channel estimation problem to identify a

  17. Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis

    PubMed Central

    2014-01-01

    When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. Electronic Supplementary Material The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material. PMID:24742008

  18. Diabetes ongoing sustainable care and treatment (DOST): A strategy for informational deliverance through visual dynamic modules sustained by near peer mentoring.

    PubMed

    Joshi, Ankur; Arutagi, Vishwanath; Nahar, Nitin; Tiwari, Sharad; Singh, Daneshwar; Sethia, Soumitra

    2016-01-01

    Informational continuity for a diabetic patient is of paramount importance. This pilot study explores the process utility of structured educational modular sessions grounded in the principle of near-peer mentoring. Visual modules were prepared for diabetic patients and administered to 25 diabetic patients in a logical sequence. In the next phase, 4 of these 25 patients were designated as diabetes ongoing sustainable care and treatment (DOST) mentors. Each diabetic-DOST was paired with two patients for modular sessions and informational deliverance during the next 7 days. Process analysis was performed with "proxy indicators," namely monthly glycemic status, knowledge assessment scores, and quality of life. Data were analyzed by interval estimates and through nonparametric analysis. Nonparametric analysis indicated a significant improvement in glycemic status in terms of fasting blood sugar (W = 78, z = 3.04, P = 0.002), 2-h postprandial blood sugar (W = 54, z = 2.01, P = 0.035), and knowledge score (χ2 = 19.53, df = 3, P = 0.0002). The quality of life score showed significant improvement in 2 out of 7 domains, namely satisfaction with treatment (difference in mean score = 1.40 [1.94-0.85]) and symptom botherness (difference in mean score = 0.98 [1.3-0.65]). Because of inherent methodological limitations and biases, no conclusive statement can be drawn at this juncture, although preliminary process evidence indicates a promising role for the diabetic-DOST strategy.

  19. Single molecule force spectroscopy at high data acquisition: A Bayesian nonparametric analysis

    NASA Astrophysics Data System (ADS)

    Sgouralis, Ioannis; Whitmore, Miles; Lapidus, Lisa; Comstock, Matthew J.; Pressé, Steve

    2018-03-01

    Bayesian nonparametrics (BNPs) are poised to have a deep impact in the analysis of single molecule data as they provide posterior probabilities over entire models consistent with the supplied data, not just model parameters of one preferred model. Thus they provide an elegant and rigorous solution to the difficult problem encountered when selecting an appropriate candidate model. Nevertheless, BNPs' flexibility to learn models and their associated parameters from experimental data is a double-edged sword. Most importantly, BNPs are prone to increasing the complexity of the estimated models due to artifactual features present in time traces. Thus, because of experimental challenges unique to single molecule methods, naive application of available BNP tools is not possible. Here we consider traces with time correlations and, as a specific example, we deal with force spectroscopy traces collected at high acquisition rates. While high acquisition rates are required in order to capture dwells in short-lived molecular states, in this setup, a slow response of the optical trap instrumentation (i.e., trapped beads, ambient fluid, and tethering handles) distorts the molecular signals introducing time correlations into the data that may be misinterpreted as true states by naive BNPs. Our adaptation of BNP tools explicitly takes into consideration these response dynamics, in addition to drift and noise, and makes unsupervised time series analysis of correlated single molecule force spectroscopy measurements possible, even at acquisition rates similar to or below the trap's response times.

  20. AHIMSA - Ad hoc histogram information measure sensing algorithm for feature selection in the context of histogram inspired clustering techniques

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.

  1. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    PubMed Central

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  2. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.

  3. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    PubMed

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Temporal changes and variability in temperature series over Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Suhaila, Jamaludin

    2015-02-01

    With the current concern over climate change, descriptions of how temperature series have changed over time are very useful. Annual mean temperature has been analyzed for several stations over Peninsular Malaysia. Non-parametric statistical techniques such as the Mann-Kendall test and Theil-Sen slope estimation are used primarily for assessing the significance and detection of trends, while the nonparametric Pettitt's test and the sequential Mann-Kendall test are adopted to detect any abrupt climate change. Statistically significant increasing trends in annual mean temperature are detected for almost all studied stations, with the magnitude of the significant trends varying from 0.02°C to 0.05°C per year. The results show that the climate over Peninsular Malaysia is getting warmer than before. In addition, the results of the abrupt-change analysis using Pettitt's test and the sequential Mann-Kendall test reveal onsets of trends that can be related to El Niño episodes occurring in Malaysia. In general, the analysis results can help local stakeholders and water managers understand the risks and vulnerabilities related to climate change in terms of mean events in the region.
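    Both trend tools named above are available in SciPy. A minimal sketch on synthetic annual mean temperatures (the series is simulated with an assumed 0.03°C/year trend, not the station records): the Mann-Kendall test statistic is, up to normalization, Kendall's tau of the series against time, and Theil-Sen gives a robust slope.

```python
import numpy as np
from scipy import stats

# Simulated annual mean temperatures with a small warming trend (illustrative only).
years = np.arange(1970, 2011)
rng = np.random.default_rng(42)
temp = 26.0 + 0.03 * (years - years[0]) + rng.normal(0.0, 0.15, len(years))

tau, p_value = stats.kendalltau(years, temp)                   # Mann-Kendall-style trend test
slope, intercept, lo_s, hi_s = stats.theilslopes(temp, years)  # robust Theil-Sen slope, °C/year
```

A small p-value with a positive tau indicates a significant warming trend, and the Theil-Sen slope recovers its magnitude without being distorted by outlying years.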

  5. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  6. Comparison of Salmonella enteritidis phage types isolated from layers and humans in Belgium in 2005.

    PubMed

    Welby, Sarah; Imberechts, Hein; Riocreux, Flavien; Bertrand, Sophie; Dierick, Katelijne; Wildemauwe, Christa; Hooyberghs, Jozef; Van der Stede, Yves

    2011-08-01

    The aim of this study was to investigate the available results for Belgium of the European Union coordinated monitoring program (2004/665 EC) on Salmonella in layers in 2005, as well as the monthly outbreak reports of Salmonella Enteritidis in humans in 2005, to identify a possible statistically significant trend in both populations. Separate descriptive statistics and univariate analyses were carried out, and parametric and/or non-parametric hypothesis tests were conducted. A time cluster analysis was performed for all Salmonella Enteritidis phage types (PTs) isolated. The proportions of each Salmonella Enteritidis PT in layers and in humans were compared, and the monthly distribution of the most common PT isolated in both populations was evaluated. The time cluster analysis revealed significant clusters during the months of May and June for layers and May, July, August, and September for humans. PT21, the most frequently isolated PT in both populations in 2005, seemed to be responsible for these significant clusters. PT4 was the second most frequently isolated PT. No significant difference was found in the monthly trend evolution of either PT in either population based on parametric and non-parametric methods. A similar monthly trend of PT distribution in humans and layers during 2005 was observed. The time cluster analysis and the statistical significance testing confirmed these results. Moreover, the time cluster analysis showed significant clusters during the summer, slightly delayed in time (humans after layers). These results suggest a common link between the prevalence of Salmonella Enteritidis in layers and the occurrence of the pathogen in humans. Phage typing was confirmed to be a useful tool for identifying temporal trends.

  7. The impact of using informative priors in a Bayesian cost-effectiveness analysis: an application of endovascular versus open surgical repair for abdominal aortic aneurysms in high-risk patients.

    PubMed

    McCarron, C Elizabeth; Pullenayegum, Eleanor M; Thabane, Lehana; Goeree, Ron; Tarride, Jean-Eric

    2013-04-01

    Bayesian methods have been proposed as a way of synthesizing all available evidence to inform decision making. However, few practical applications of the use of Bayesian methods for combining patient-level data (i.e., trial) with additional evidence (e.g., literature) exist in the cost-effectiveness literature. The objective of this study was to compare a Bayesian cost-effectiveness analysis using informative priors to a standard non-Bayesian nonparametric method to assess the impact of incorporating additional information into a cost-effectiveness analysis. Patient-level data from a previously published nonrandomized study were analyzed using traditional nonparametric bootstrap techniques and bivariate normal Bayesian models with vague and informative priors. Two different types of informative priors were considered to reflect different valuations of the additional evidence relative to the patient-level data (i.e., "face value" and "skeptical"). The impact of using different distributions and valuations was assessed in a sensitivity analysis. Models were compared in terms of incremental net monetary benefit (INMB) and cost-effectiveness acceptability frontiers (CEAFs). The bootstrapping and Bayesian analyses using vague priors provided similar results. The most pronounced impact of incorporating the informative priors was the increase in estimated life years in the control arm relative to what was observed in the patient-level data alone. Consequently, the incremental difference in life years originally observed in the patient-level data was reduced, and the INMB and CEAF changed accordingly. The results of this study demonstrate the potential impact and importance of incorporating additional information into an analysis of patient-level data, suggesting this could alter decisions as to whether a treatment should be adopted and whether more information should be acquired.

  8. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

    PubMed Central

    Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471
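    The permutation-test logic that BROCCOLI accelerates is straightforward to sketch serially. The toy below assumes a simple two-sample design with a difference-of-means statistic; BROCCOLI's GPU implementation handles full voxelwise statistic maps, but the null-distribution construction is the same idea.

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means between groups a and b."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # random relabeling of group membership
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)            # add-one correction avoids p = 0

rng = np.random.default_rng(5)
a = rng.normal(0.0, 1.0, 40)
b = rng.normal(1.2, 1.0, 40)
p_shift = permutation_test(a, b)                         # groups genuinely differ
p_same = permutation_test(a, rng.normal(0.0, 1.0, 40))   # same distribution
```

Because the null distribution is built empirically from relabelings, no parametric assumption on the noise is needed, which is the robustness advantage the abstract refers to.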

  9. Climate Variability and Yields of Major Staple Food Crops in Northern Ghana

    NASA Astrophysics Data System (ADS)

    Amikuzuno, J.

    2012-12-01

    Climate variability, the short-term fluctuation in average weather conditions, and agriculture affect each other. Climate variability affects the agroecological and growing conditions of crops and livestock, and is now believed to be the greatest impediment to the realisation of the first Millennium Development Goal of reducing poverty and food insecurity in arid and semi-arid regions of developing countries. Conversely, agriculture is a major contributor to climate variability and change, emitting greenhouse gases and reducing the agroecology's potential for carbon sequestration. What, however, is the empirical evidence for this inter-dependence of climate variability and agriculture in Sub-Saharan Africa? In this paper, we provide some insight into the long-run relationship between inter-annual variations in temperature and rainfall and annual yields of the most important staple food crops in Northern Ghana. Applying pooled panel data on rainfall, temperature and yields of the selected crops from 1976 to 2010 to cointegration and Granger causality models, we find cogent evidence of cointegration between seasonal total rainfall and crop yields, and of causality from rainfall to crop yields, in the Sudano-Guinea Savannah and Guinea Savannah zones of Northern Ghana. This suggests that inter-annual yields of the crops have been influenced by the total amount of rainfall in the planting season. Temperature over the study period is, however, stationary and is suspected to have minimal effect, if any, on crop yields. Overall, the results confirm the appropriateness of our attempt at modelling long-term relationships between the climate and crop yield variables.
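    The first stage of an Engle-Granger style cointegration check like the one applied above can be sketched with numpy alone: regress one series on the other, then test the residuals for a unit root. The data below are simulated series sharing a stochastic trend, not the Ghanaian rainfall/yield data, and proper Engle-Granger critical values are omitted; the residual statistic is only compared against that of a pure random walk.

```python
import numpy as np

def ols_resid(y, x):
    """Residuals from the first-stage OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def df_stat(u):
    """Dickey-Fuller t-statistic for a unit root in u (no constant, no lags)."""
    du, lag = np.diff(u), u[:-1]
    rho = (lag @ du) / (lag @ lag)
    e = du - rho * lag
    se = np.sqrt((e @ e) / (len(du) - 1) / (lag @ lag))
    return rho / se

rng = np.random.default_rng(7)
trend = np.cumsum(rng.standard_normal(1000))        # shared stochastic trend
x = trend + rng.standard_normal(1000)
y = 2.0 + 0.8 * trend + rng.standard_normal(1000)   # cointegrated with x
resid = ols_resid(y, x)
```

A strongly negative `df_stat(resid)` indicates stationary residuals, i.e. a cointegrating relation, whereas the same statistic for a non-cointegrated random walk stays near zero. (A production analysis would use tabulated Engle-Granger critical values and lag augmentation.)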

  10. Comparison the Effects of Health Indicators on Male and Female Labor Supply, Evidence from Panel Data of Eastern Mediterranean Countries 1995-2010

    PubMed Central

    HOMAIE RAD, Enayatollah; HADIAN, Mohamad; GHOLAMPOOR, Hanie

    2014-01-01

    Abstract Background A skilled labor force is very important for economic growth. Workers become skilled when they are healthy and able to be educated and work. In this study, we estimated the effects of health indicators on labor supply, using the labor force participation rate as the indicator of labor supply. We split this indicator into female and male labor force participation rates and compared the results of the two estimates. Methods This study covered eastern Mediterranean countries between 1995 and 2011. We used a panel cointegration approach to estimate the models, applying Pesaran's cross-sectional dependency test, Pesaran's unit root test, and the Westerlund panel cointegration test. Finally, after confirming that random-effects models were appropriate, we estimated the models with random effects. Results Increasing the fertility rate decreased the female labor supply, but increased the male labor supply. Public health expenditures, in contrast, increased the female labor supply but decreased the male labor supply because of substitution effects. Similar results were found regarding urbanization. Gross domestic product had a positive relationship with female labor supply, but not with male labor supply. Besides, out-of-pocket health expenditures had a negative relationship with male labor supply, but no significant relationship with female labor supply. Conclusion The effects of the health variables were stronger in the female labor supply model than in the male model. Countries must pay greater attention to women's health in order to change the labor supply. PMID:26060746

  11. Dynamics relationship between stock prices and economic variables in Malaysia

    NASA Astrophysics Data System (ADS)

    Chun, Ooi Po; Arsad, Zainudin; Huen, Tan Bee

    2014-07-01

    Knowledge of the linkages between stock prices and macroeconomic variables is essential in the formulation of effective monetary policy. This study investigates the relationship between stock prices in Malaysia (KLCI) and four selected macroeconomic variables, namely the industrial production index (IPI), quasi money supply (MS2), the real exchange rate (REXR) and the 3-month Treasury bill rate (TRB). The variables used in this study are monthly data from 1996 to 2012. A vector error correction (VEC) model and the Kalman filter (KF) technique are utilized to assess the impact of the macroeconomic variables on stock prices. The results from the cointegration test reveal that the stock prices and macroeconomic variables are cointegrated. Unlike the constant estimates from the static VEC model, the KF estimates noticeably exhibit time-varying behaviour over the sample period; these varying impact coefficients should better reflect the changing economic environment. Surprisingly, IPI is negatively related to the KLCI, with the estimates of the impact slowly increasing and becoming positive in recent years. TRB is found to be generally negatively related to the KLCI, with the impact fluctuating around the constant estimate of the VEC model. The KF estimates for REXR and MS2 show a mixture of positive and negative impacts on the KLCI. The coefficients of the error correction term (ECT) are negative over the majority of the sample period, signifying that stock prices respond to stabilize any short-term deviation in the economic system. The findings from the KF model indicate that implications based on the usual static model may lead authorities to implement less appropriate policies.
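
    The time-varying coefficient estimation described above can be sketched for a single regressor with a scalar Kalman filter in which the slope follows a random walk. This is a minimal illustration, not the authors' specification; the noise variances `q` and `r` are assumed values:

```python
import numpy as np

def tv_coeff_kalman(y, x, q=1e-4, r=1.0, beta0=0.0, p0=1.0):
    """Track a time-varying slope beta_t in y_t = beta_t * x_t + noise.

    State equation:  beta_t = beta_{t-1} + w_t,  w_t ~ N(0, q)
    Observation:     y_t    = x_t * beta_t + v_t, v_t ~ N(0, r)
    Returns the filtered path of beta_t.
    """
    beta, p = beta0, p0
    path = np.empty(len(y))
    for t in range(len(y)):
        p = p + q                                # predict state variance
        k = p * x[t] / (x[t] * p * x[t] + r)     # Kalman gain
        beta = beta + k * (y[t] - x[t] * beta)   # measurement update
        p = (1.0 - k * x[t]) * p
        path[t] = beta
    return path
```

    In the study's context, `x` would be a macroeconomic variable and `y` the stock index, giving an impact coefficient that evolves over the sample period instead of a single constant estimate.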

  12. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    PubMed

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
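
    The Chinese restaurant process underlying the treeCRP can be sketched in a few lines. This shows the flat CRP only, without the tree structure or the paper's split-merge moves; names are hypothetical:

```python
import numpy as np

def sample_crp(n, alpha, seed=0):
    """Draw one partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to
    that cluster's current size, or opens a new cluster with
    probability proportional to the concentration parameter alpha.
    """
    rng = np.random.default_rng(seed)
    counts = []                      # customers per table
    labels = np.empty(n, dtype=int)
    for i in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):
            counts.append(0)         # open a new table
        counts[k] += 1
        labels[i] = k
    return labels, counts
```

    The "rich get richer" behaviour of these draws is what lets the number of subclones be inferred from the data rather than fixed in advance.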

  13. The Probability of Exceedance as a Nonparametric Person-Fit Statistic for Tests of Moderate Length

    ERIC Educational Resources Information Center

    Tendeiro, Jorge N.; Meijer, Rob R.

    2013-01-01

    To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…

  14. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with those of traditional CM outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CM outcomes. PMID:25873427
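
    The SVD-based deconvolution step can be sketched as follows. This is a minimal illustration assuming uniform time sampling and a simple relative singular-value cutoff, not the authors' exact implementation:

```python
import numpy as np

def svd_deconvolve(tac, aif, dt, rel_threshold=0.1):
    """Recover the impulse response function h from tac = (aif * h) * dt.

    Builds the lower-triangular discrete convolution matrix of the
    input function, then inverts it with a truncated SVD: singular
    values below rel_threshold * s_max are discarded to suppress
    noise amplification.
    """
    n = len(tac)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1] * dt   # row i: aif[i-j] * dt for j <= i
    U, s, Vt = np.linalg.svd(A)
    keep = s >= rel_threshold * s[0]
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ tac))
```

    Here `tac` stands for the tissue time-activity curve and `aif` for the metabolite-corrected input function; volume-of-distribution-type functionals would then be computed from the recovered IRF.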

  15. Modeling seasonal variation of hip fracture in Montreal, Canada.

    PubMed

    Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2012-04-01

    Investigating the association of climate variables with hip fracture incidence is important for public health. This study examined and modeled the seasonal variation of a monthly population-based hip fracture rate (HFr) time series. The seasonal ARIMA time series modeling approach is used to model monthly HFr time series for female and male patients aged 40-74 and 75+ in Montreal, Québec, Canada, over the period 1993-2004. The correlation coefficients between meteorological variables such as temperature, snow depth, rainfall depth and day length, and HFr are significant. The nonparametric Mann-Kendall test for trend assessment, and the nonparametric Levene's and Wilcoxon's tests for checking the difference of HFr before and after a change point, are also used. The seasonality in HFr shows a sharp difference between winter and summer. The trend assessment showed decreasing trends in HFr for the female and male groups. The nonparametric tests also indicated a significant change in mean HFr. A seasonal ARIMA (SARIMA) model was applied to HFr time series without trend, and a time trend ARIMA model (TT-ARIMA) was developed and fitted to HFr time series with a significant trend. The multi-criteria evaluation showed the adequacy of the SARIMA and TT-ARIMA models for modeling seasonal hip fracture time series without and with significant trend, respectively. In the time series analysis of HFr in the Montreal region, the effects of the seasonal variation of climate variables on hip fracture are clear. The seasonal ARIMA model is useful for modeling HFr time series without trend; for time series with a significant trend, the TT-ARIMA model should be applied. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.

  17. Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method

    NASA Astrophysics Data System (ADS)

    Kenderi, Gábor; Fidlin, Alexander

    2014-12-01

    The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.

  18. A non-parametric automatic blending methodology to estimate rainfall fields from rain gauge and radar data

    NASA Astrophysics Data System (ADS)

    Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.

    2009-07-01

    Quantitative estimation of rainfall fields has been a crucial objective from early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimations are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be automatically obtained at each time step of a given rainfall event, this technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application on a case study using six rainfall events. This methodology is applied to three different alternatives to incorporate the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of radar fields is analyzed. Results indicate that the methodology of kriging with external drift (KED), in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields in good agreement with the rain gauges, and reproduces the spatial tendencies observed in the radar rainfall fields more accurately than the other alternatives analyzed.
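
    The FFT-based computation of a positive-definite 2-D correlogram can be sketched via the Wiener–Khinchin relation. This minimal illustration ignores the zero-padding and anisotropy-handling choices a production implementation would need:

```python
import numpy as np

def fft_correlogram(field):
    """Empirical 2-D correlogram of a field via the Wiener-Khinchin relation.

    The inverse FFT of the power spectrum |FFT(field)|^2 is the
    (circular) autocovariance; dividing by the zero-lag value gives a
    correlogram that is positive-definite by construction.
    """
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    acov = np.real(np.fft.ifft2(power))
    return acov / acov[0, 0]              # lag (0, 0) normalised to 1
```

    Because the correlogram is obtained directly from each radar image, it can be recomputed automatically at every time step, which is what makes the blending scheme suitable for real-time use.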

  19. Water quality analysis in rivers with non-parametric probability distributions and fuzzy inference systems: application to the Cauca River, Colombia.

    PubMed

    Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L

    2013-02-01

    The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We propose here a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique based on non-parametric probability distributions, the randomness of the model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimates obtained with the kernel smoothing method were used to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a large city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a growing number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status, with the membership of water quality in the various output fuzzy sets or classes reported through percentiles and histograms, which allows a better classification of the real water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. Copyright © 2012 Elsevier Ltd. All rights reserved.
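
    The kernel-smoothing density estimation used to fit the annual histograms can be sketched as follows. This is a generic Gaussian-kernel estimator with Silverman's rule of thumb, offered as an illustration rather than the authors' exact settings:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=None):
    """Kernel density estimate with a Gaussian kernel.

    If no bandwidth is given, Silverman's rule of thumb is used.
    Returns the estimated density evaluated on `grid`.
    """
    n = len(samples)
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    # Pairwise standardised distances between grid points and samples
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernel = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernel.sum(axis=1) / (n * bandwidth)
```

    Random draws from such fitted densities would then feed the Monte Carlo evaluation of the fuzzy index.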

  20. An Exploratory Data Analysis System for Support in Medical Decision-Making

    PubMed Central

    Copeland, J. A.; Hamel, B.; Bourne, J. R.

    1979-01-01

    An experimental system was developed to allow retrieval and analysis of data collected during a study of neurobehavioral correlates of renal disease. After retrieving data organized in a relational data base, simple bivariate statistics of parametric and nonparametric nature could be conducted. An “exploratory” mode in which the system provided guidance in selection of appropriate statistical analyses was also available to the user. The system traversed a decision tree using the inherent qualities of the data (e.g., the identity and number of patients, tests, and time epochs) to search for the appropriate analyses to employ.

  1. Liver proteomics in progressive alcoholic steatosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernando, Harshica; Wiktorowicz, John E.; Soman, Kizhake V.

    2013-02-01

    Fatty liver is an early stage of alcoholic and nonalcoholic liver disease (ALD and NALD) that progresses to steatohepatitis and other irreversible conditions. In this study, we identified proteins that were differentially expressed in the livers of rats fed 5% ethanol in a Lieber–DeCarli diet daily for 1 and 3 months by discovery proteomics (two-dimensional gel electrophoresis and mass spectrometry) and non-parametric modeling (Multivariate Adaptive Regression Splines). Hepatic fatty infiltration was significantly higher in ethanol-fed animals as compared to controls, and more pronounced at 3 months of ethanol feeding. Discovery proteomics identified changes in the expression of proteins involved in alcohol, lipid, and amino acid metabolism after ethanol feeding. At 1 and 3 months, 12 and 15 different proteins were differentially expressed. Of the identified proteins, down-regulation of alcohol dehydrogenase (−1.6) at 1 month and up-regulation of aldehyde dehydrogenase (2.1) at 3 months could be a protective/adaptive mechanism against ethanol toxicity. In addition, betaine-homocysteine S-methyltransferase 2, a protein responsible for methionine metabolism and previously implicated in fatty liver development, was significantly up-regulated (1.4) at the ethanol-induced fatty liver stage (1 month), while peroxiredoxin-1 was down-regulated (−1.5) at the late fatty liver stage (3 months). Nonparametric analysis of the protein spots yielded fewer proteins, narrowed the list of possible markers, and identified D-dopachrome tautomerase (−1.7, at 3 months) as a possible marker for ethanol-induced early steatohepatitis. The observed differential regulation of proteins has the potential to serve as a biomarker signature for the detection of steatosis and its progression to steatohepatitis, once validated in plasma/serum.
    Graphical abstract: The figure shows the hierarchical cluster analysis of differentially expressed protein spots obtained after ethanol feeding for 1 (1–3) and 3 (4–6) months. C and E represent pair-fed control and ethanol-fed rats, respectively. Highlights: ► Proteins related to ethanol-induced steatosis and mild steatohepatitis are identified. ► ADH1C and ALDH2 involved in alcohol metabolism are differentially expressed at 1 and 3 months. ► Discovery proteomics identified a group of proteins to serve as potential biomarkers. ► Using nonparametric analysis DDT is identified as a possible marker for liver damage.

  2. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been proposed and evaluated for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate a proposed collective feature selection approach, which selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, and take the union of the variables selected by the top-performing methods, based on a user-defined percentage of variants selected from each method, forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, including in low effect size datasets and under different genetic architectures.
    Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Via simulation studies, we showed that selecting variables with a collective feature selection approach recovers true-positive epistatic variables more frequently than any single feature selection method, and we demonstrated the effectiveness of collective feature selection along with a comparison of many methods. We also applied our method to identify non-linear networks associated with obesity.
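
    The union idea at the core of collective feature selection can be sketched with two simple filter methods. The toy rankers below (absolute correlation and a class-mean gap) are hypothetical stand-ins for the paper's methods such as MDR, Ranger, and gradient boosting:

```python
import numpy as np

def collective_select(X, y, rankers, top_frac=0.1):
    """Union of the top-ranked features from several selection methods.

    Each ranker maps (X, y) to a score per feature (higher = better).
    The top `top_frac` features from every ranker are pooled, mirroring
    the idea of keeping the union of the best performers.
    """
    n_top = max(1, int(top_frac * X.shape[1]))
    selected = set()
    for rank in rankers:
        scores = rank(X, y)
        selected |= set(np.argsort(scores)[::-1][:n_top])
    return sorted(selected)

def abs_corr(X, y):
    """|Pearson correlation| of each column with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

def mean_gap(X, y):
    """Absolute difference in class means (binary y), a t-test-like filter."""
    return np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
```

    The pooled set is then passed to a downstream epistasis search, which now runs on a much smaller feature space.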

  3. Examining the Feasibility and Utility of Estimating Partial Expected Value of Perfect Information (via a Nonparametric Approach) as Part of the Reimbursement Decision-Making Process in Ireland: Application to Drugs for Cancer.

    PubMed

    McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal

    2017-11-01

    In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimates expected value of perfect information but not partial expected value of perfect information (owing to the computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations on drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost-effective at the submitted price. Drugs were excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value of at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively.
This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
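
    The computationally inexpensive regression approach to partial value of information can be sketched for a single parameter, in the spirit of the Strong–Oakley regression method: fit the conditional expectation of incremental net benefit given the parameter, then compare the expected payoff of deciding with versus without perfect knowledge of that parameter. The polynomial smoother and the function name below are illustrative assumptions:

```python
import numpy as np

def evppi_regression(theta, inb, degree=3):
    """Single-parameter partial EVPI via regression on PSA samples.

    theta : probabilistic sensitivity analysis draws of the parameter
    inb   : matching draws of incremental net benefit (treat vs comparator)
    """
    # Fitted conditional expectation E[INB | theta]; a polynomial fit
    # stands in for the spline/GAM smoother a real analysis would use
    coeffs = np.polyfit(theta, inb, degree)
    fitted = np.polyval(coeffs, theta)
    # With perfect information on theta we pick the better option per
    # draw; without it we pick the option that is better on average
    return np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)
```

    Because this only post-processes the existing probabilistic analysis samples, it runs in seconds, consistent with the "within minutes" estimates reported above.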

  4. Neural network representation and learning of mappings and their derivatives

    NASA Technical Reports Server (NTRS)

    White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald

    1991-01-01

    Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.

  5. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    PubMed

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and nearest neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, which greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of the right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper thus provides a good method for comparing the performance of clinical treatments by estimating the survival data of patients, which should be helpful in medical survival data analysis.
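
    For the right-censored part of the problem, the classical nonparametric benchmark is the Kaplan–Meier product-limit estimator, sketched below. The paper's interval-censored interpolation method itself is not reproduced here:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate for right-censored data.

    times  : observed times (event or censoring)
    events : 1 if the event was observed, 0 if right-censored
    Returns (distinct event times, S(t) just after each of them).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    uniq = np.unique(times[events == 1])
    surv, s = np.empty(len(uniq)), 1.0
    for i, t in enumerate(uniq):
        at_risk = np.sum(times >= t)                     # still under observation
        deaths = np.sum((times == t) & (events == 1))    # events at this time
        s *= 1.0 - deaths / at_risk
        surv[i] = s
    return uniq, surv
```

    Censored observations contribute to the risk sets without forcing an exact event time, which is the behaviour the interval-based interpolation above aims to preserve.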

  6. A powerful nonparametric method for detecting differentially co-expressed genes: distance correlation screening and edge-count test.

    PubMed

    Zhang, Qingyang

    2018-05-16

    Differential co-expression analysis, as a complement of differential expression analysis, offers significant insights into the changes in molecular mechanism of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression, which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs in different phenotypes from large-scale data. Our computational pipeline consists of two main steps, a screening step and a testing step. The screening step reduces the search space by filtering out all the independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening can greatly improve computational efficiency, facilitating its application to large data sets.
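
    The distance correlation measure used in the screening step can be sketched directly from its definition. This is for 1-D samples and uses the simple biased estimator; a production version would use the unbiased estimator and a faster algorithm:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples.

    The population version is zero if and only if x and y are
    independent, which is what makes it suitable for screening out
    independent gene pairs before the edge-count test.
    """
    def centred_dist(v):
        # Double-centred pairwise distance matrix
        d = np.abs(v[:, None] - v[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centred_dist(x), centred_dist(y)
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))
```

    Unlike Pearson's correlation, this statistic stays well away from zero for purely nonlinear dependence such as y = x².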

  7. A Statistician's View of Upcoming Grand Challenges

    NASA Astrophysics Data System (ADS)

    Meng, Xiao Li

    2010-01-01

    In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated (e.g., fundamental physics-based) computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in the complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.

  8. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
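    For intuition, the empirical (uncensored, non-Bayesian) counterpart of the MRL function is simply the average remaining lifetime among subjects still event-free at time t; the Dirichlet process mixture of the paper replaces this crude estimate with a flexible model-based one. A minimal sketch with illustrative lifetimes:

```python
def mean_residual_life(times, t):
    # Empirical mean residual life MRL(t) = E[T - t | T > t]:
    # average the remaining lifetimes of subjects surviving past t.
    residuals = [T - t for T in times if T > t]
    return sum(residuals) / len(residuals) if residuals else 0.0

lifetimes = [2.0, 3.0, 5.0, 7.0, 11.0]   # illustrative event times
```

At t = 0 this is just the mean lifetime; as t grows it tracks the expected remaining time among an ever-smaller risk set, which is why smooth model-based estimates are preferred in the tails.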

  9. A review of methods to estimate cause-specific mortality in presence of competing risks

    USGS Publications Warehouse

    Heisey, Dennis M.; Patterson, Brent R.

    2006-01-01

    Estimating cause-specific mortality is often of central importance for understanding the dynamics of wildlife populations. Despite such importance, methodology for estimating and analyzing cause-specific mortality has received little attention in wildlife ecology during the past 20 years. The issue of analyzing cause-specific, mutually exclusive events in time is not unique to wildlife. In fact, this general problem has received substantial attention in human biomedical applications within the context of biostatistical survival analysis. Here, we consider cause-specific mortality from a modern biostatistical perspective. This requires carefully defining what we mean by cause-specific mortality and then providing an appropriate hazard-based representation as a competing risks problem. This leads to the general solution of cause-specific mortality as the cumulative incidence function (CIF). We describe the appropriate generalization of the fully nonparametric staggered-entry Kaplan–Meier survival estimator to cause-specific mortality via the nonparametric CIF estimator (NPCIFE), which in many situations offers an attractive alternative to the Heisey–Fuller estimator. An advantage of the NPCIFE is that it lends itself readily to risk factor analysis with standard software for the Cox proportional hazards model. The competing risks–based approach also clarifies issues regarding another intuitive but erroneous "cause-specific mortality" estimator based on the Kaplan–Meier survival estimator and commonly seen in the life sciences literature.
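    As a toy illustration of what the CIF represents (not the NPCIFE itself, which additionally handles censoring and staggered entry by weighting each event by the overall survival just before it), in the absence of censoring the CIF for cause k at time t reduces to a simple proportion. The data below are invented.

```python
def cumulative_incidence(events, cause, t):
    # events: list of (time, cause) pairs, one per animal; no censoring.
    # CIF_k(t) = proportion of animals that died of cause k by time t.
    n = len(events)
    return sum(1 for (time, c) in events if time <= t and c == cause) / n

deaths = [(1.0, "predation"), (2.0, "predation"), (2.5, "disease"),
          (4.0, "predation"), (5.0, "disease")]
```

Note that the cause-specific CIFs partition total mortality: summed over causes at late times they recover the overall probability of death, which the erroneous Kaplan–Meier-based "cause-specific" estimator mentioned above fails to do.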

  10. Advanced Silicon Photonic Transceivers - the Case of a Wavelength Division and Polarization Multiplexed Quadrature Phase Shift Keying Receiver for Terabit/s Optical Transmission

    DTIC Science & Technology

    2017-03-10

    formats by the co-integration of a passive 90 degree optical hybrid, high-speed balanced Ge photodetectors and a high-speed two-channel transimpedance...40 Gbaud and can handle advanced modulation formats by the co-integration of a passive 90 degree optical hybrid, high-speed balanced Ge...reached at an OSNR of 12.4 dB. The hard-decision FEC (HD-FEC) threshold (BER of 3.8 × 10^-3 for 7% overhead) requires 14 dB OSNR. For 16-QAM this requires

  11. Materials and fractal designs for 3D multifunctional integumentary membranes with capabilities in cardiac electrotherapy.

    PubMed

    Xu, Lizhi; Gutbrod, Sarah R; Ma, Yinji; Petrossians, Artin; Liu, Yuhao; Webb, R Chad; Fan, Jonathan A; Yang, Zijian; Xu, Renxiao; Whalen, John J; Weiland, James D; Huang, Yonggang; Efimov, Igor R; Rogers, John A

    2015-03-11

    Advanced materials and fractal design concepts form the basis of a 3D conformal electronic platform with unique capabilities in cardiac electrotherapies. Fractal geometries, advanced electrode materials, and thin, elastomeric membranes yield a class of device capable of integration with the entire 3D surface of the heart, with unique operational capabilities in low power defibrillation. Co-integrated collections of sensors allow simultaneous monitoring of physiological responses. Animal experiments on Langendorff-perfused rabbit hearts demonstrate the key features of these systems. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Nonparametric Bayesian models through probit stick-breaking processes

    PubMed Central

    Rodríguez, Abel; Dunson, David B.

    2013-01-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology. PMID:24358072

  13. Nonparametric Bayesian models through probit stick-breaking processes.

    PubMed

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.
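    The weight construction at the heart of the probit stick-breaking prior can be sketched in a few lines: each normal variate is pushed through the probit link (the standard normal CDF) to give the fraction of the remaining stick assigned to that component. The z values below would be draws from normal random variables; they are fixed here for illustration.

```python
from math import erf, sqrt

def probit(z):
    # Standard normal CDF, via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def stick_breaking_weights(zs):
    # w_k = Phi(z_k) * prod_{j<k} (1 - Phi(z_j)): break off a probit-sized
    # fraction of whatever stick length remains after earlier components.
    weights, remaining = [], 1.0
    for z in zs:
        v = probit(z)
        weights.append(remaining * v)
        remaining *= (1.0 - v)
    return weights

w = stick_breaking_weights([0.3, -0.5, 1.1, 0.0])
```

Because the z's can be given temporal or spatial dependence, the resulting weights inherit that structure, which is what makes the construction convenient for the rich temporal and spatial processes described above.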

  14. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  15. GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data

    NASA Astrophysics Data System (ADS)

    Ibrahim, Noor Akma; Suliadi

    2010-11-01

    In this paper we propose GEE-Smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.

  16. A linear programming approach to characterizing norm bounded uncertainty from experimental data

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.; Bayard, D. S.; Yam, Y.

    1991-01-01

    The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).

  17. A Bayesian Nonparametric Approach to Image Super-Resolution.

    PubMed

    Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid

    2015-02-01

    Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.

  18. A Powerful Test for Comparing Multiple Regression Functions.

    PubMed

    Maity, Arnab

    2012-09-01

    In this article, we address the important problem of comparison of two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models: Y(ij) = θ(j)(Z(ij)) + σ(j)(Z(ij))∊(ij), based on empirical distributions of the errors in each population j = 1, … , J. In this paper, we propose a test for equality of the θ(j)(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test to other nonparametric regression setups, e.g., nonparametric logistic regression, where the loglikelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).

  19. Nonparametric instrumental regression with non-convex constraints

    NASA Astrophysics Data System (ADS)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and income, both variables often considered endogenous. In this framework, economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model in both a deterministic and a stochastic setting, under the assumption that the true regression function satisfies a projected source condition which, because of the non-convexity of the imposed constraints, includes an additional smallness condition.
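    As a generic illustration of Tikhonov regularization (here for an ordinary two-parameter least-squares model, not the paper's instrumental-variable, constrained, infinite-dimensional setting), one solves the regularized normal equations (AᵀA + λI)x = Aᵀb; the penalty λ trades fidelity for stability.

```python
def tikhonov_2d(A, b, lam):
    # Minimize ||A x - b||^2 + lam ||x||^2 for a 2-parameter model:
    # form the Gram matrix, add lam on the diagonal, solve the 2x2
    # system by Cramer's rule.
    m = len(A)
    g00 = sum(A[i][0] * A[i][0] for i in range(m)) + lam
    g01 = sum(A[i][0] * A[i][1] for i in range(m))
    g11 = sum(A[i][1] * A[i][1] for i in range(m)) + lam
    r0 = sum(A[i][0] * b[i] for i in range(m))
    r1 = sum(A[i][1] * b[i] for i in range(m))
    det = g00 * g11 - g01 * g01
    return [(r0 * g11 - r1 * g01) / det, (g00 * r1 - g01 * r0) / det]
```

With λ = 0 this is ordinary least squares; increasing λ shrinks the solution toward zero, which is the regularizing effect exploited (in far greater generality) by the estimator studied in the paper.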

  20. Site-Specific Recombination at XerC/D Sites Mediates the Formation and Resolution of Plasmid Co-integrates Carrying a blaOXA-58- and TnaphA6-Resistance Module in Acinetobacter baumannii

    PubMed Central

    Cameranesi, María M.; Morán-Barrio, Jorgelina; Limansky, Adriana S.; Repizo, Guillermo D.; Viale, Alejandro M.

    2018-01-01

    Members of the genus Acinetobacter possess distinct plasmid types which provide effective platforms for the acquisition, evolution, and dissemination of antimicrobial resistance structures. Many plasmid-borne resistance structures are bordered by short DNA sequences providing potential recognition sites for the host XerC and XerD site-specific tyrosine recombinases (XerC/D-like sites). However, whether these sites are active in recombination and how they assist the mobilization of associated resistance structures is still poorly understood. Here we characterized the plasmids carried by Acinetobacter baumannii Ab242, a multidrug-resistant clinical strain belonging to the ST104 (Oxford scheme) which produces an OXA-58 carbapenem-hydrolyzing class-D β-lactamase (CHDL). Plasmid sequencing and characterization of replication, stability, and adaptive modules revealed the presence in Ab242 of three novel plasmids lacking self-transferability functions which were designated pAb242_9, pAb242_12, and pAb242_25, respectively. Among them, only pAb242_25 was found to carry an adaptive module encompassing an ISAba825-blaOXA-58 arrangement accompanied by a TnaphA6 transposon, the whole structure conferring simultaneous resistance to carbapenems and aminoglycosides. Ab242 plasmids harbor several XerC/D-like sites, with most sites found in pAb242_25 located in the vicinity or within the adaptive module described above. Electrotransformation of susceptible A. nosocomialis cells with Ab242 plasmids followed by imipenem selection indicated that the transforming plasmid form was a co-integrate resulting from the fusion of pAb242_25 and pAb242_12. Further characterization by cloning and sequencing studies indicated that a XerC/D site in pAb242_25 and another in pAb242_12 provided the active sister pair for the inter-molecular site-specific recombination reaction mediating the fusion of these two plasmids. 
Moreover, the resulting co-integrate was found also to undergo intra-molecular resolution at the new pair of XerC/D sites generated during fusion thus regenerating the original pAb242_25 and pAb242_12 plasmids. These observations provide the first evidence indicating that XerC/D-like sites in A. baumannii plasmids can provide active pairs for site-specific recombination mediating inter-molecular fusions and intra-molecular resolutions. The overall results shed light on the evolutionary dynamics of A. baumannii plasmids and the underlying mechanisms of dissemination of genetic structures responsible for carbapenem and other antibiotics resistance among the Acinetobacter clinical population. PMID:29434581

  1. Tau-REx: A new look at the retrieval of exoplanetary atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2014-11-01

    The field of exoplanetary spectroscopy is as fast moving as it is new. With an increasing number of space- and ground-based instruments obtaining data on a large set of extrasolar planets, we are indeed entering the era of exoplanetary characterisation. Permanently at the edge of instrument feasibility, it is as important as it is difficult to find optimal and objective methodologies for analysing and interpreting current data. This is particularly true for smaller and fainter Earth and Super-Earth type planets. For low to mid signal-to-noise (SNR) observations, we are prone to two sources of bias: 1) prior selection in the data reduction and analysis; 2) prior constraints on the spectral retrieval. In Waldmann et al. (2013), Morello et al. (2014) and Waldmann (2012, 2014) we have shown a prior-free approach to data analysis based on non-parametric machine learning techniques. Following these approaches, we will present a new take on the spectral retrieval of extrasolar planets. Tau-REx (tau-retrieval of exoplanets) is a new line-by-line atmospheric retrieval framework. In the past, the decision on which opacity sources go into an atmospheric model was usually user defined. Manual input can lead to model biases and poor convergence of the atmospheric model to the data. In Tau-REx we have set out to solve this. Through custom-built pattern recognition software, Tau-REx is able to rapidly identify the most likely atmospheric opacities from a large number of possible absorbers/emitters (the ExoMol or HITRAN databases) and non-parametrically constrain the prior space for the Bayesian retrieval. Unlike other (MCMC based) techniques, Tau-REx is able to fully integrate high-dimensional log-likelihood spaces and to calculate the full Bayesian evidence of the atmospheric models. We achieve this through a combination of Nested Sampling and a high degree of code parallelisation. This allows for an exact and unbiased Bayesian model selection and a full mapping of potential model-data degeneracies. Together with non-parametric data de-trending of exoplanetary spectra, we can reach an unprecedented level of objectivity in our atmospheric characterisation of these foreign worlds.

  2. Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas

    NASA Astrophysics Data System (ADS)

    Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam

    2017-08-01

    Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis is well established through the application of copulas, the effects of different parameter estimation methods on the obtained results have not yet been investigated. This research conducts a comparative analysis between the parametric maximum likelihood method and the non-parametric Kendall τ method for copula parameter estimation. The methods were employed to study joint severity-duration probability and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDC) for these three stations. The Q_{75} index extracted from the FDC was set as the threshold level for extracting drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probability distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for the copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov and chi-square tests. The Archimedean Clayton and Frank copulas and the extreme-value Gumbel copula were employed to construct joint cumulative distribution functions (JCDF) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at Huleilan and Polchehr were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood and root mean square error (RMSE) values. Based on the RMSE values, the nonparametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods for the parametric ML method than for the nonparametric Kendall τ method. The results also showed that the stations located on tributaries (Huleilan and Polchehr) have similar return periods, while the station on the main river (Tang Sazbon) has smaller return periods for drought events with identical drought duration and severity.
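    The nonparametric Kendall τ route to copula parameters can be sketched as follows. The moment inversions τ = θ/(θ+2) for the Clayton copula and τ = 1 − 1/θ for the Gumbel copula are standard; the sample τ assumes no ties, and the data below are illustrative.

```python
def kendall_tau(x, y):
    # Sample Kendall tau: (concordant - discordant) / total pairs.
    # Assumes no ties in either sample.
    n = len(x)
    s = sum((1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1)
            for i in range(n) for j in range(i + 1, n))
    return 2.0 * s / (n * (n - 1))

def clayton_theta(tau):
    # Invert tau = theta / (theta + 2) for the Clayton copula.
    return 2.0 * tau / (1.0 - tau)

def gumbel_theta(tau):
    # Invert tau = 1 - 1/theta for the Gumbel copula.
    return 1.0 / (1.0 - tau)

tau = kendall_tau([1, 2, 3, 4], [1, 3, 2, 4])   # toy severity/duration ranks
```

Because τ depends only on the ranks of the data, this estimator is insensitive to the marginal distributions, which is one reason it can outperform maximum likelihood when the fitted marginals are imperfect.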

  3. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal-decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks, achieving state-of-the-art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.

  4. Surgical Treatment for Discogenic Low-Back Pain: Lumbar Arthroplasty Results in Superior Pain Reduction and Disability Level Improvement Compared With Lumbar Fusion

    PubMed Central

    2007-01-01

    Background The US Food and Drug Administration approved the Charité artificial disc on October 26, 2004. This approval was based on an extensive analysis and review process; 20 years of disc usage worldwide; and the results of a prospective, randomized, controlled clinical trial that compared lumbar artificial disc replacement to fusion. The results of the investigational device exemption (IDE) study led to a conclusion that clinical outcomes following lumbar arthroplasty were at least as good as outcomes from fusion. Methods The author performed a new analysis of the Visual Analog Scale pain scores and the Oswestry Disability Index scores from the Charité artificial disc IDE study and used a nonparametric statistical test, because observed data distributions were not normal. The analysis included all of the enrolled subjects in both the nonrandomized and randomized phases of the study. Results Subjects from both the treatment and control groups improved from the baseline situation (P < .001) at all follow-up times (6 weeks to 24 months). Additionally, these pain and disability levels with artificial disc replacement were superior (P < .05) to the fusion treatment at all follow-up times including 2 years. Conclusions The a priori statistical plan for an IDE study may not adequately address the final distribution of the data. Therefore, statistical analyses more appropriate to the distribution may be necessary to develop meaningful statistical conclusions from the study. A nonparametric statistical analysis of the Charité artificial disc IDE outcomes scores demonstrates superiority for lumbar arthroplasty versus fusion at all follow-up time points to 24 months. PMID:25802574
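    The abstract does not name the specific nonparametric test used, but the Mann-Whitney U is one common rank-based choice when outcome scores (such as pain or disability scales) are not normally distributed. A minimal sketch with invented scores:

```python
def mann_whitney_u(x, y):
    # U statistic: the number of (x_i, y_j) pairs with x_i < y_j,
    # counting ties as one half.  Being rank-based, it needs no
    # normality assumption about the underlying score distributions.
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

treatment = [12, 15, 9, 20, 17]   # illustrative outcome scores
control = [8, 11, 7, 13, 10]
```

A useful sanity check is the identity U(x, y) + U(y, x) = |x|·|y| when there are no ties; large or small U relative to that total indicates a shift between the groups.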

  5. Bayesian inference of the number of factors in gene-expression analysis: application to human virus challenge studies.

    PubMed

    Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence

    2010-11-09

    Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), rhinovirus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.

  6. Non-parametric trend analysis of the aridity index for three large arid and semi-arid basins in Iran

    NASA Astrophysics Data System (ADS)

    Ahani, Hossien; Kherad, Mehrzad; Kousari, Mohammad Reza; van Roosmalen, Lieke; Aryanfar, Ramin; Hosseini, Seyyed Mashaallah

    2013-05-01

    An important scientific challenge researchers currently face is to gain a better understanding of climate change at the regional scale, which can be especially difficult in an area with low and highly variable precipitation amounts such as Iran. Trend analysis of medium-term change using ground station observations of meteorological variables can enhance our knowledge of the dominant processes in an area and contribute to the analysis of future climate projections. Generally, studies focus on the long-term variability of temperature and precipitation and to a lesser extent on other important parameters such as moisture indices. In this study the recent 50-year trends (1955-2005) of precipitation (P), potential evapotranspiration (PET), and the aridity index (AI) on a monthly time scale were studied over 14 synoptic stations in three large Iranian basins using the Mann-Kendall non-parametric test. Additionally, an analysis of the monthly, seasonal and annual trend of each parameter was performed. Results showed no significant trends in the monthly time series. However, PET showed significant, mostly decreasing trends in the seasonal values, which resulted in a significant negative trend in annual PET at five stations. Significant negative trends in seasonal P values were only found at a number of stations in spring and summer, and no station showed significant negative trends in annual P. Due to the varied positive and negative trends in annual P and, to a lesser extent, PET, almost as many stations with negative as positive trends in annual AI were found, indicating that both drying and wetting trends occurred in Iran. Overall, the northern part of the study area showed an increasing trend in annual AI, meaning that the region became wetter, while the south showed decreasing trends in AI.
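    A minimal sketch of the Mann-Kendall test used here, in its basic normal-approximation form (assuming no ties and no autocorrelation correction; the input series below is illustrative): S sums the signs of all pairwise differences, Var(S) = n(n−1)(2n+5)/18, and Z applies a continuity correction.

```python
from math import sqrt

def sign(a):
    return (a > 0) - (a < 0)

def mann_kendall_z(series):
    # Mann-Kendall S statistic and its normal approximation.
    # |Z| > 1.96 indicates a significant monotonic trend at the 5% level.
    n = len(series)
    s = sum(sign(series[j] - series[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / sqrt(var)
    if s < 0:
        return (s + 1) / sqrt(var)
    return 0.0

annual_p = [210, 198, 205, 190, 185, 188, 170, 165]   # toy annual P series
trend = mann_kendall_z(annual_p)
```

Because the test uses only the signs of differences, it is robust to outliers and makes no distributional assumption about the data, which suits highly variable precipitation records.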

  7. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 0.11% to 27.66%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 0% to 6.65%. The Key Institute's non-parametric t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 0.04% to 43.37% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 0% to 41.34% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, an adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.

  8. Use of Brain MRI Atlases to Determine Boundaries of Age-Related Pathology: The Importance of Statistical Method

    PubMed Central

    Dickie, David Alexander; Job, Dominic E.; Gonzalez, David Rodriguez; Shenkin, Susan D.; Wardlaw, Joanna M.

    2015-01-01

    Introduction Neurodegenerative disease diagnoses may be supported by the comparison of an individual patient’s brain magnetic resonance image (MRI) with a voxel-based atlas of normal brain MRI. Most current brain MRI atlases are of young to middle-aged adults and parametric, e.g., mean ± standard deviation (SD); these atlases require data to be Gaussian. Brain MRI data, e.g., grey matter (GM) proportion images, from normal older subjects are apparently not Gaussian. We created a nonparametric and a parametric atlas of the normal limits of GM proportions in older subjects and compared their classifications of GM proportions in Alzheimer’s disease (AD) patients. Methods Using publicly available brain MRI from 138 normal subjects and 138 subjects diagnosed with AD (all 55–90 years), we created: a mean ± SD atlas to estimate parametrically the percentile ranks and limits of normal ageing GM; and, separately, a nonparametric, rank order-based GM atlas from the same normal ageing subjects. GM images from AD patients were then classified with respect to each atlas to determine the effect statistical distributions had on classifications of proportions of GM in AD patients. Results The parametric atlas often defined the lower normal limit of the proportion of GM to be negative (which does not make sense physiologically as the lowest possible proportion is zero). Because of this, for approximately half of the AD subjects, 25–45% of voxels were classified as normal when compared to the parametric atlas; but were classified as abnormal when compared to the nonparametric atlas. These voxels were mainly concentrated in the frontal and occipital lobes. Discussion To our knowledge, we have presented the first nonparametric brain MRI atlas. In conditions where there is increasing variability in brain structure, such as in old age, nonparametric brain MRI atlases may represent the limits of normal brain structure more accurately than parametric approaches. Therefore, we conclude that the statistical method used for construction of brain MRI atlases should be selected taking into account the population and aim under study. Parametric methods are generally robust for defining central tendencies, e.g., means, of brain structure. Nonparametric methods are advisable when studying the limits of brain structure in ageing and neurodegenerative disease. PMID:26023913
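    The core finding can be illustrated on a skewed toy sample of grey-matter proportions (values invented): the parametric lower limit, mean − 1.645·SD (the Gaussian 5th percentile), can go negative for bounded, skewed data, while a rank-order limit never leaves the range of the observed values.

```python
def lower_limit_parametric(values, z=1.645):
    # mean - z*SD: the parametric estimate of the 5th percentile,
    # valid only if the data are approximately Gaussian.
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return mean - z * sd

def lower_limit_rank(values, q=0.05):
    # Rank-order (nonparametric) lower limit: a conservative low-rank
    # order statistic, bounded by the observed data.
    s = sorted(values)
    return s[max(0, int(q * len(s)) - 1)]

gm = [0.02, 0.03, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.15, 0.60]
```

With this sample the parametric limit is negative (physiologically impossible for a proportion), so voxels with small but plausible GM values would be classified as "normal" by the parametric atlas yet "abnormal" by the rank-based one, mirroring the discrepancy reported above.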

  9. Assessment of contribution of Australia's energy production to CO2 emissions and environmental degradation using statistical dynamic approach.

    PubMed

    Sarkodie, Samuel Asumadu; Strezov, Vladimir

    2018-10-15

    Energy production remains the major source of atmospheric emissions. In accordance with Australia's Emissions Projections by 2030, this study analyzed the impact of Australia's energy portfolio on environmental degradation and CO2 emissions, using locally compiled data on disaggregate energy production, energy imports and exports spanning from 1974 to 2013. This study employed the fully modified ordinary least squares, dynamic ordinary least squares, and canonical cointegrating regression estimators, together with a statistically inspired modification of partial least squares regression analysis and a subsequent sustainability sensitivity analysis. The environmental Kuznets curve hypothesis, where valid, implies a paradigm shift from energy-intensive and carbon-intensive industries to less-energy-intensive and green energy industries and related services, leading to a structural change in the economy. Decoupling energy services thus provides a better interpretation of the role of the energy sector portfolio in assessing environmental degradation and CO2 emissions. The sensitivity analysis revealed that nonrenewable energy production above 10% and energy imports above 5% will dampen the goals for the 2030 emission reduction target. Increasing the share of renewable energy penetration in the energy portfolio decreases the level of CO2 emissions, while increasing the share of non-renewable energy sources in the energy mix increases the level of atmospheric emissions, thus exacerbating climate change and its impacts. Copyright © 2018 Elsevier B.V. All rights reserved.
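
    The record names three cointegrating-regression estimators (FMOLS, DOLS, CCR). All three share a first stage: a static OLS regression between the integrated series, whose residuals are then checked for stationarity. The sketch below illustrates that idea on simulated data with a simplified Engle-Granger-style check; the variable names, the lack of lag augmentation, and the omission of proper critical values are all simplifying assumptions, not the study's actual procedure.

```python
import numpy as np

def engle_granger_sketch(y, x):
    """Simplified Engle-Granger two-step check (no lag augmentation).

    Step 1: OLS cointegrating regression y_t = a + b*x_t + e_t -- the same
            first stage that FMOLS/DOLS/CCR refine with different corrections.
    Step 2: Dickey-Fuller-style t-statistic from regressing diff(e)_t on e_{t-1};
            a strongly negative value suggests stationary residuals, i.e.
            evidence of cointegration (proper critical values omitted here).
    """
    X = np.column_stack([np.ones_like(x), x])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - (a + b * x)
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    resid = de - rho * lag
    se = np.sqrt(resid @ resid / (len(de) - 1) / (lag @ lag))
    return b, rho / se  # long-run slope and DF-type t-statistic

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))   # simulated I(1) driver (stand-in for energy production)
y = 0.8 * x + rng.normal(size=500)    # simulated cointegrated response (stand-in for emissions)
slope, tstat = engle_granger_sketch(y, x)
```

    In practice one would compare the t-statistic to Engle-Granger critical values (which differ from the standard Dickey-Fuller tables because the residuals are estimated).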

  10. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that minimizes the mean square error is determined under varying assumptions about the true probability density function of the sampled data. An extension to line-transect sampling is given.
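
    The record's estimator is built from ordered distances between random sample points and nearby plants. As an illustration of the general idea (not the specific Patil-Kovner-Burnham form), the sketch below uses a classical point-to-k-th-nearest-plant estimator, density ≈ (nk − 1)/(π Σ d_i²), which is unbiased under complete spatial randomness; the simulated field and the use of torus distances to sidestep edge effects are assumptions of this toy check only.

```python
import numpy as np

def density_from_kth_distances(d, k):
    """Given distances d[i] from n random sample points to their k-th
    nearest plant, estimate density as (n*k - 1) / (pi * sum d_i^2)."""
    n = len(d)
    return (n * k - 1) / (np.pi * np.sum(d ** 2))

rng = np.random.default_rng(1)
true_density = 200.0
# Poisson plant field on the unit square (treated as a torus below)
plants = rng.uniform(size=(rng.poisson(true_density), 2))
pts = rng.uniform(size=(50, 2))                  # random sample points
diff = np.abs(pts[:, None, :] - plants[None, :, :])
diff = np.minimum(diff, 1.0 - diff)              # wraparound (torus) distances
dist = np.sqrt((diff ** 2).sum(-1))
d3 = np.sort(dist, axis=1)[:, 2]                 # distance to 3rd nearest plant
est = density_from_kth_distances(d3, k=3)
```

    The bias-reduction discussed in the record matters precisely because naive plug-in versions of such estimators are biased when the spatial pattern departs from randomness.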

  11. Transforming Parent-Child Interaction in Family Routines: Longitudinal Analysis with Families of Children with Developmental Disabilities.

    PubMed

    Lucyshyn, Joseph M; Fossett, Brenda; Bakeman, Roger; Cheremshynski, Christy; Miller, Lynn; Lohrmann, Sharon; Binnendyk, Lauren; Khan, Sophia; Chinn, Stephen; Kwon, Samantha; Irvin, Larry K

    2015-12-01

    The efficacy and consequential validity of an ecological approach to behavioral intervention with families of children with developmental disabilities was examined. The approach aimed to transform coercive into constructive parent-child interaction in family routines. Ten families participated, including 10 mothers and fathers and 10 children 3-8 years old with developmental disabilities. Thirty-six family routines were selected (2 to 4 per family). Dependent measures included child problem behavior, routine steps completed, and coercive and constructive parent-child interaction. For each family, a single-case, multiple baseline design was employed with three phases: baseline, intervention, and follow-up. Visual analysis evaluated the functional relation between intervention and improvements in child behavior and routine participation. Nonparametric tests across families evaluated the statistical significance of these improvements. Sequential analyses within families and univariate analyses across families examined changes from baseline to intervention in the percentage and odds ratio of coercive and constructive parent-child interaction. Multiple baseline results documented functional or basic effects for 8 of 10 families. Nonparametric tests showed these changes to be significant. Follow-up showed durability at 11 to 24 months postintervention. Sequential analyses documented the transformation of coercive into constructive processes for 9 of 10 families. Univariate analyses across families showed significant improvements in 2- and 4-step coercive and constructive processes but not in odds ratio. Results offer evidence of the efficacy of the approach and the consequential validity of the ecological unit of analysis, parent-child interaction in family routines. Future studies should improve efficiency and outcomes for families experiencing family systems challenges.
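
    Sequential analysis of coded interaction streams, as used in this record, commonly summarizes lag-1 contingencies with an odds ratio. A minimal sketch, using entirely hypothetical behavior codes rather than the study's actual coding scheme:

```python
import numpy as np

def lag1_odds_ratio(codes, antecedent, target):
    """Lag-1 sequential analysis sketch: build the 2x2 table of
    (antecedent vs. other) -> (target vs. other) transitions from a coded
    event stream and return the odds ratio (0.5 continuity correction)."""
    a = np.array([c == antecedent for c in codes[:-1]])
    t = np.array([c == target for c in codes[1:]])
    table = np.array([[(a & t).sum(), (a & ~t).sum()],
                      [(~a & t).sum(), (~a & ~t).sum()]], float) + 0.5
    return (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

# hypothetical codes: 'D' parent directive, 'C' child compliance,
# 'P' problem behavior, 'O' other
stream = list("DCDCOPDCOODCPDCODCOP")
or_dc = lag1_odds_ratio(stream, "D", "C")   # directive -> compliance contingency
```

    An odds ratio well above 1 indicates that the target behavior follows the antecedent more often than chance, which is how "constructive" transition patterns can be quantified.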

  12. omicsNPC: Applying the Non-Parametric Combination Methodology to the Integrative Analysis of Heterogeneous Omics Data

    PubMed Central

    Karathanasis, Nestoras; Tsamardinos, Ioannis

    2016-01-01

    Background The advance of omics technologies has made it possible to measure several data modalities on a system of interest. In this work, we illustrate how the Non-Parametric Combination methodology, namely NPC, can be used for simultaneously assessing the association of different molecular quantities with an outcome of interest. We argue that NPC methods have several potential applications in integrating heterogeneous omics technologies, for example identifying genes whose methylation and transcriptional levels are jointly deregulated, or finding proteins whose abundance shows the same trends as the expression of their encoding genes. Results We implemented the NPC methodology within “omicsNPC”, an R function specifically tailored to the characteristics of omics data. We compare omicsNPC against a range of alternative methods on simulated as well as on real data. Comparisons on simulated data point out that omicsNPC produces unbiased/calibrated p-values and performs equally well as, or significantly better than, the other methods included in the study; furthermore, the analysis of real data shows that omicsNPC (a) exhibits higher statistical power than other methods, (b) is easily applicable in a number of different scenarios, and (c) yields results with improved biological interpretability. Conclusions The omicsNPC function behaves competitively in all comparisons conducted in this study. Taking into account that the method (i) requires minimal assumptions, (ii) can be used with different study designs and (iii) captures the dependences among heterogeneous data modalities, omicsNPC provides a flexible and statistically powerful solution for the integrative analysis of different omics data. PMID:27812137
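
    The core of the NPC methodology can be sketched compactly: apply the same permutation to every modality (preserving their dependence structure), compute partial permutation p-values, and combine them with a combining function such as Fisher's. The sketch below is a generic two-group illustration on simulated data, not the omicsNPC R implementation:

```python
import numpy as np

def npc_fisher(mods, labels, n_perm=999, seed=0):
    """Non-Parametric Combination sketch for a two-group comparison across
    several modalities measured on the same samples. The SAME label
    permutation is applied to every modality; partial permutation p-values
    are combined with Fisher's function T = -2 * sum(log p), and the
    combined statistic is calibrated against its own permutation null."""
    rng = np.random.default_rng(seed)
    g = np.asarray(labels, dtype=bool)
    stat = lambda lab: np.array([m[lab].mean() - m[~lab].mean() for m in mods])

    obs = np.abs(stat(g))
    null = np.abs(np.array([stat(rng.permutation(g)) for _ in range(n_perm)]))

    # partial p-values for the observed and for every permuted statistic
    p_obs = (1 + (null >= obs).sum(0)) / (1 + n_perm)
    p_null = (1 + (null[None] >= null[:, None]).sum(1)) / (1 + n_perm)

    t_obs = -2 * np.log(p_obs).sum()
    t_null = -2 * np.log(p_null).sum(1)
    return (1 + (t_null >= t_obs).sum()) / (1 + n_perm)

rng = np.random.default_rng(42)
labels = np.repeat([True, False], 25)
expr = rng.normal(size=50) + 1.0 * labels   # e.g. expression shifted in cases
meth = rng.normal(size=50) + 1.0 * labels   # correlated shift in methylation
p = npc_fisher([expr, meth], labels)
```

    Permuting all modalities jointly is what lets NPC capture dependences between, say, methylation and expression without modeling them explicitly.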

  13. An Analysis of energy consumption and economic growth of Cobb-Douglas production function based on ECM

    NASA Astrophysics Data System (ADS)

    Guo, Wei-wei

    2018-02-01

    Energy is one of the key factors affecting economic growth: it is a motive force of economic development worldwide, an essential material resource for development and living standards, and an important link between national economies. The paper surveys the literature on energy consumption and economic growth at home and abroad, takes the 1992 “southern tour talks” as the dividing point for the analysis period, and conducts a series of empirical tests on the relationship between total energy consumption and economic growth in China from 1978 to 1991 and from 1992 to 2016. The results show a one-way causal relationship between total energy consumption and economic growth in China: economic growth depends strongly on energy, and a cointegration relationship exists between energy consumption and economic growth. However, the dependence of economic growth on energy consumption has decreased year by year in China, as the mode of growth shifts from extensive to intensive.
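
    The error correction model (ECM) of the title can be sketched with the standard Engle-Granger two-step construction: a long-run cointegrating regression followed by a short-run error correction equation in first differences. All series below are simulated stand-ins, not the study's Chinese data:

```python
import numpy as np

def ols(X, y):
    """OLS helper: returns (coefficients, residuals)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, y - X @ beta

rng = np.random.default_rng(7)
n = 400
energy = np.cumsum(rng.normal(size=n))      # simulated I(1) "energy consumption"
gdp = 1.5 * energy + rng.normal(size=n)     # simulated cointegrated "GDP"

# Step 1: long-run (cointegrating) regression and its residuals
X1 = np.column_stack([np.ones(n), energy])
(_, beta_lr), e = ols(X1, gdp)

# Step 2: error correction model on first differences:
#   d_gdp_t = c + gamma * d_energy_t + alpha * e_{t-1} + u_t
X2 = np.column_stack([np.ones(n - 1), np.diff(energy), e[:-1]])
(c, gamma, alpha), _ = ols(X2, np.diff(gdp))
# alpha < 0 means deviations from the long-run path are corrected over time
```

    A significantly negative error correction coefficient (alpha) is the standard evidence that the long-run cointegrating relationship actually disciplines short-run dynamics.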

  14. Time-frequency causality between stock prices and exchange rates: Further evidences from cointegration and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Afshan, Sahar; Sharif, Arshian; Loganathan, Nanthakumar; Jammazi, Rania

    2018-04-01

    The current study investigates the relationship between stock prices and exchange rates using a wavelet approach, focusing on the continuous wavelet transform, the wavelet power spectrum, the cross-wavelet transform, and wavelet coherence. The results of the Bayer and Hanck (2013) and Gregory and Hansen (1996) tests confirm the presence of a long-run association between stock prices and exchange rates in Pakistan. The wavelet coherence results reveal the dominance of stock prices (SP) during 2005-2006 and 2011-2012 at the 8-16 and 16-32 week cycles for approximately all exchange rates against the Pakistani rupee. At long timescales, the study finds strong coherence between the two series over almost the entire studied period. The most interesting part of this coherence is the existence of bidirectional causality at the long timescale: the arrows in this region point both left-up and left-down, suggesting that during this period the variables exhibit an out-of-phase relationship, mutually leading and lagging each other. These results contrast with many earlier studies of Pakistan.
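
    The wavelet power spectrum underlying this kind of analysis can be sketched with a Morlet-style continuous wavelet transform; the full cross-wavelet and coherence machinery is omitted. The wavelet parameterization and the simulated weekly series below are illustrative assumptions only:

```python
import numpy as np

def morlet_power(x, periods, cycles=6):
    """Continuous-wavelet power sketch: convolve the demeaned series with a
    complex Morlet-like wavelet for each candidate period and return the
    time-averaged power at that period."""
    x = x - x.mean()
    power = []
    for p in periods:
        sigma = cycles * p / (2 * np.pi)            # envelope spans ~`cycles` oscillations
        t = np.arange(-4 * sigma, 4 * sigma + 1)
        w = np.exp(2j * np.pi * t / p) * np.exp(-t**2 / (2 * sigma**2))
        w /= np.sqrt((np.abs(w) ** 2).sum())        # unit-energy normalization
        coef = np.convolve(x, w, mode="same")
        power.append(np.mean(np.abs(coef) ** 2))
    return np.array(power)

# simulated weekly series with a dominant 16-week cycle plus noise
rng = np.random.default_rng(3)
t = np.arange(512)
x = np.sin(2 * np.pi * t / 16) + 0.5 * rng.normal(size=512)
periods = np.array([4, 8, 16, 32, 64])
best = periods[np.argmax(morlet_power(x, periods))]
```

    Wavelet coherence then compares two such transforms scale by scale, with the phase of their cross-spectrum giving the lead/lag arrows described in the abstract.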

  15. Integration of narrow-host-range vectors from Escherichia coli into the genomes of amino acid-producing corynebacteria after intergeneric conjugation.

    PubMed

    Mateos, L M; Schäfer, A; Kalinowski, J; Martin, J F; Pühler, A

    1996-10-01

    Conjugative transfer of mobilizable derivatives of the Escherichia coli narrow-host-range plasmids pBR322, pBR325, pACYC177, and pACYC184 from E. coli to species of the gram-positive genera Corynebacterium and Brevibacterium resulted in the integration of the plasmids into the genomes of the recipient bacteria. Transconjugants appeared at low frequencies and reproducibly with a delay of 2 to 3 days compared with matings with replicative vectors. Southern analysis of corynebacterial transconjugants and nucleotide sequences from insertion sites revealed that integration occurs at different locations and that different parts of the vector are involved in the process. Integration is not dependent on indigenous insertion sequence elements but results from recombination between very short homologous DNA segments (8 to 12 bp) present in the vector and in the host DNA. In the majority of the cases (90%), integration led to cointegrate formation, and in some cases, deletions or rearrangements occurred during the recombination event. Insertions were found to be quite stable even in the absence of selective pressure.

  16. The role of energy in economic growth.

    PubMed

    Stern, David I

    2011-02-01

    This paper reviews the mainstream, resource economics, and ecological economics models of growth. A possible synthesis of energy-based and mainstream models is presented. This shows that when energy is scarce it imposes a strong constraint on the growth of the economy; however, when energy is abundant, its effect on economic growth is much reduced. The industrial revolution released the constraints on economic growth by the development of new methods of using coal and the discovery of new fossil fuel resources. Time-series analysis shows that energy and GDP cointegrate, and energy use Granger causes GDP when capital and other production inputs are included in the vector autoregression model. However, various mechanisms can weaken the links between energy and growth. Energy used per unit of economic output has declined in developed and some developing countries, owing to both technological change and a shift from poorer quality fuels, such as coal, to the use of higher quality fuels, especially electricity. Substitution of other inputs for energy and sectoral shifts in economic activity play smaller roles. © 2011 New York Academy of Sciences.

  17. Integration of narrow-host-range vectors from Escherichia coli into the genomes of amino acid-producing corynebacteria after intergeneric conjugation.

    PubMed Central

    Mateos, L M; Schäfer, A; Kalinowski, J; Martin, J F; Pühler, A

    1996-01-01

    Conjugative transfer of mobilizable derivatives of the Escherichia coli narrow-host-range plasmids pBR322, pBR325, pACYC177, and pACYC184 from E. coli to species of the gram-positive genera Corynebacterium and Brevibacterium resulted in the integration of the plasmids into the genomes of the recipient bacteria. Transconjugants appeared at low frequencies and reproducibly with a delay of 2 to 3 days compared with matings with replicative vectors. Southern analysis of corynebacterial transconjugants and nucleotide sequences from insertion sites revealed that integration occurs at different locations and that different parts of the vector are involved in the process. Integration is not dependent on indigenous insertion sequence elements but results from recombination between very short homologous DNA segments (8 to 12 bp) present in the vector and in the host DNA. In the majority of the cases (90%), integration led to cointegrate formation, and in some cases, deletions or rearrangements occurred during the recombination event. Insertions were found to be quite stable even in the absence of selective pressure. PMID:8824624

  18. Homeologous plastid DNA transformation in tobacco is mediated by multiple recombination events.

    PubMed Central

    Kavanagh, T A; Thanh, N D; Lao, N T; McGrath, N; Peter, S O; Horváth, E M; Dix, P J; Medgyesy, P

    1999-01-01

    Efficient plastid transformation has been achieved in Nicotiana tabacum using cloned plastid DNA of Solanum nigrum carrying mutations conferring spectinomycin and streptomycin resistance. The use of the incompletely homologous (homeologous) Solanum plastid DNA as donor resulted in a Nicotiana plastid transformation frequency comparable with that of other experiments where completely homologous plastid DNA was introduced. Physical mapping and nucleotide sequence analysis of the targeted plastid DNA region in the transformants demonstrated efficient site-specific integration of the 7.8-kb Solanum plastid DNA and the exclusion of the vector DNA. The integration of the cloned Solanum plastid DNA into the Nicotiana plastid genome involved multiple recombination events as revealed by the presence of discontinuous tracts of Solanum-specific sequences that were interspersed between Nicotiana-specific markers. Marked position effects resulted in very frequent cointegration of the nonselected peripheral donor markers located adjacent to the vector DNA. Data presented here on the efficiency and features of homeologous plastid DNA recombination are consistent with the existence of an active RecA-mediated, but a diminished mismatch, recombination/repair system in higher-plant plastids. PMID:10388829

  19. Assessment of circadian rhythms of both skin temperature and motor activity in infants during the first 6 months of life.

    PubMed

    Zornoza-Moreno, Matilde; Fuentes-Hernández, Silvia; Sánchez-Solis, Manuel; Rol, María Ángeles; Larqué, Elvira; Madrid, Juan Antonio

    2011-05-01

    The authors developed a method useful for home measurement of temperature, activity, and sleep rhythms in infants under normal-living conditions during their first 6 mos of life. In addition, parametric and nonparametric tests for assessing circadian system maturation in these infants were compared. Anthropometric parameters plus ankle skin temperature and activity were evaluated in 10 infants by means of two data loggers, Termochron iButton (DS1291H, Maxim Integrated Products, Sunnyvale, CA) for temperature and HOBO Pendant G (Hobo Pendant G Acceleration, UA-004-64, Onset Computer Corporation, Bourne, MA) for motor activity, located in special baby socks specifically designed for the study. Skin temperature and motor activity were recorded over 3 consecutive days at 15 days, 1, 3, and 6 mos of age. Circadian rhythms of skin temperature and motor activity appeared at 3 mos in most babies. Mean skin temperature decreased significantly by 3 mos of life relative to previous measurements (p = .0001), whereas mean activity continued to increase during the first 6 mos. For most of the parameters analyzed, statistically significant changes occurred at 3-6 mos relative to 0.5-1 mo of age. Major differences were found using nonparametric tests. Intradaily variability in motor activity decreased significantly at 6 mos of age relative to previous measurements, and followed a similar trend for temperature; interdaily stability increased significantly at 6 mos of age relative to previous measurements for both variables; relative amplitude increased significantly at 6 mos for temperature and at 3 mos for activity, both with respect to previous measurements. A high degree of correlation was found between chronobiological parametric and nonparametric tests for mean and mesor and also for relative amplitude versus the cosinor-derived amplitude. 
However, the correlation between parametric and nonparametric equivalent indices (acrophase and midpoint of M5, interdaily stability and Rayleigh test, or intradaily variability and P(1)/P(ultradian)), despite being significant, was lower for both temperature and activity. The circadian function index (CFI index), based on the integrated variable temperature-activity, increased gradually with age and was statistically significant at 6 mos of age. At 6 mos, 90% of the infants' rest period coincided with the standard sleep period of their parents, defined from 23:00 to 07:00 h (dichotomic index I < O; when I < O = 100%, there is a complete coincidence between infant nocturnal rest period and the standard rest period), whereas at 15 days of life the coincidence was only 75%. The combination of thermometry and actimetry using data loggers placed in infants' socks is a reliable method for assessing both variables and also sleep rhythms in infants under ambulatory conditions, with minimal disturbance. Using this methodological approach, circadian rhythms of skin temperature and motor activity appeared by 3 mos in most babies. Nonparametric tests provided more reliable information than cosinor analysis for circadian rhythm assessment in infants.
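
Two of the nonparametric indices discussed here, interdaily stability (IS) and intradaily variability (IV), have compact standard formulas (the widely used Witting-style definitions are assumed below, not necessarily the exact variants applied in this study). A minimal sketch on simulated hourly data:

```python
import numpy as np

def interdaily_stability(x, per_day):
    """IS: variance of the average 24-h profile over total variance
    (0..1; higher means more day-to-day regularity)."""
    n = len(x)
    profile = x.reshape(-1, per_day).mean(0)   # mean value at each time of day
    return (n * ((profile - x.mean()) ** 2).sum() / per_day) / ((x - x.mean()) ** 2).sum()

def intradaily_variability(x):
    """IV: first-difference variance over total variance
    (near 0 = smooth rhythm, around 2 = noise-like fragmentation)."""
    n = len(x)
    return (n * (np.diff(x) ** 2).sum()) / ((n - 1) * ((x - x.mean()) ** 2).sum())

# simulated hourly data over 7 days: a clean circadian rhythm vs. pure noise
rng = np.random.default_rng(5)
hours = np.arange(24 * 7)
rhythmic = np.sin(2 * np.pi * hours / 24) + 0.1 * rng.normal(size=hours.size)
noisy = rng.normal(size=hours.size)
```

A maturing circadian system shows rising IS and falling IV, which is the pattern the abstract reports between 0.5 and 6 months of age.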

  20. The x-ray luminosity-redshift relationship of quasars

    PubMed Central

    Segal, I. E.; Segal, W.

    1980-01-01

    Chronometric cosmology provides an excellent fit for the phenomenological x-ray luminosity-redshift relationship for 49 quasars observed by the Einstein satellite. Analysis of the data on the basis of the Friedmann cosmology leads to a correlation of absolute x-ray luminosity with redshift of >0.8, which is increased to ∼1 in the bright envelope. Although the trend might be ascribed a priori to an observational magnitude bias, it persists after nonparametric, maximum-likelihood removal of this bias. PMID:16592826
