Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date; careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
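For readers unfamiliar with the first approach, a weighted-average simulation of this kind can be sketched in a few lines. The inverse-distance-in-time weighting below is an assumption for illustration (the abstract does not specify the exact weighting scheme), and `weighted_average_ndvi` is a hypothetical helper name:

```python
def weighted_average_ndvi(ndvi_before, ndvi_after, days_before, days_after):
    """Simulate NDVI at a target date from the nearest usable images
    before and after it.  The temporally closer image gets the larger
    weight (inverse-distance-in-time; an assumed scheme)."""
    total = days_before + days_after
    w_before = days_after / total   # closer "before" image -> larger weight
    w_after = days_before / total
    return w_before * ndvi_before + w_after * ndvi_after

# An image 10 days before (NDVI 0.60) and one 20 days after (NDVI 0.72):
simulated = weighted_average_ndvi(0.60, 0.72, 10, 20)  # -> 0.64
```

Applied per pixel, this interpolates a full simulated NDVI image for the target date from the two bracketing scenes.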
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique operates directly on tensors via tensor-to-vector projection, imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our approach, which produces high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo
2008-01-01
In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 as a visual analog scale score, being 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
Virtual Beach version 2.2 (VB 2.2) is a decision support tool. It is designed to construct site-specific Multi-Linear Regression (MLR) models to predict pathogen indicator levels (or fecal indicator bacteria, FIB) at recreational beaches. MLR analysis has outperformed persisten...
A group contribution method has been developed to correlate the acute toxicity (96 h LC50) to the fathead minnow (Pimephales promelas) for 379 organic chemicals. Multilinear regression and computational neural networks (CNNs) were used for model building. The multilinear linear m...
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the better the prediction, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about the increase in uncertainty with the extension of the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
Zang, Qing-Ce; Wang, Jia-Bo; Kong, Wei-Jun; Jin, Cheng; Ma, Zhi-Jie; Chen, Jing; Gong, Qian-Feng; Xiao, Xiao-He
2011-12-01
The fingerprints of artificial Calculus bovis extracts from different solvents were established by ultra-performance liquid chromatography (UPLC), and the anti-bacterial activities of artificial C. bovis extracts on Staphylococcus aureus (S. aureus) growth were studied by microcalorimetry. The UPLC fingerprints were evaluated using hierarchical clustering analysis. Quantitative parameters obtained from the thermogenic curves of S. aureus growth affected by artificial C. bovis extracts were analyzed using principal component analysis. The spectrum-effect relationships between UPLC fingerprints and anti-bacterial activities were investigated using multi-linear regression analysis. The results showed that peak 1 (taurocholate sodium), peak 3 (unknown compound), peak 4 (cholic acid), and peak 6 (chenodeoxycholic acid) are more significant than the other peaks, with standard parameter estimates of 0.453, -0.166, 0.749, and 0.025, respectively. Thus, cholic acid, taurocholate sodium, and chenodeoxycholic acid might be the major anti-bacterial components in artificial C. bovis. Altogether, this work provides a general model combining UPLC chromatography and anti-bacterial effect to study the spectrum-effect relationships of artificial C. bovis extracts, which can be used to discover the main anti-bacterial components in artificial C. bovis or other Chinese herbal medicines with anti-bacterial effects. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Li, Ming Ze; Gao, Yuan Ke; Di, Xue Ying; Fan, Wen Yi
2016-03-01
The moisture content of forest surface soil is an important parameter in forest ecosystems, and rapid, accurate estimation of it by microwave remote sensing is of practical significance for forest ecosystem research. With the aid of a TDR-300 soil moisture meter, the moisture contents of forest surface soils of 120 sample plots at Tahe Forestry Bureau in the Daxing'anling region of Heilongjiang Province were measured. Taking the moisture content of forest surface soil as the dependent variable and the polarization decomposition parameters of C-band Quad-pol SAR data as independent variables, two types of quantitative estimation models (a multilinear regression model and a BP-neural network model) for predicting the moisture content of forest surface soils were developed. The spatial distribution of forest surface soil moisture content on the regional scale was then derived by model inversion. Results showed that model precision was 86.0% and 89.4%, with RMSE of 3.0% and 2.7%, for the multilinear regression model and the BP-neural network model, respectively, indicating that the BP-neural network model performed better in quantitative estimation of forest surface soil moisture content. The spatial distribution of forest surface soil moisture content in the study area was then obtained using the BP-neural network model with the Quad-pol SAR data.
On the null distribution of Bayes factors in linear regression
USDA-ARS?s Scientific Manuscript database
We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...
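Written out, the claim is that under the null hypothesis the Bayes factor statistic follows, asymptotically, a shifted weighted chi-squared law. The symbols below are illustrative, since the truncated abstract does not give the weights or the shift:

```latex
2\log(\mathrm{BF}) \;\xrightarrow{d}\; \mu + \sum_{k=1}^{K} \lambda_k \chi^2_{1,k},
```

where \mu is the mean shift, \lambda_k are weights determined by the design and the choice of conjugate prior (normal-inverse-gamma, g-prior, etc.), and the \chi^2_{1,k} are independent one-degree-of-freedom chi-squared variables.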
NASA Astrophysics Data System (ADS)
Tan, C. H.; Matjafri, M. Z.; Lim, H. S.
2015-10-01
This paper presents prediction models that analyze and compute CO2 emissions in Malaysia. Each prediction model for CO2 emissions is analyzed for three main groups: transportation; electricity and heat production; and residential buildings together with commercial and public services. The prediction models were generated using data obtained from World Bank Open Data. The best subset method was used to remove irrelevant predictors, followed by multi-linear regression to produce the prediction models. From the results, high R-squared (prediction) values were obtained, implying that the models are reliable for predicting CO2 emissions from these data. In addition, the CO2 emissions from these three groups are forecast using trend analysis plots for observation purposes.
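The best-subset-then-regression pipeline described above can be sketched as follows. This is a minimal illustration with made-up data: `best_subset`, `ols_fit` and `r_squared` are hypothetical helper names, and scoring subsets by in-sample R² is a simplification of proper best-subset criteria such as adjusted R² or Mallows' Cp:

```python
from itertools import combinations

def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    rhs = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c]
                                for c in range(r + 1, p))) / A[r][r]
    return coef

def r_squared(X, y, coef):
    """Coefficient of determination of the fitted model."""
    yhat = [sum(b * x for b, x in zip(coef, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def best_subset(predictors, y):
    """Score every non-empty subset of predictors by in-sample R^2
    and return the best (score, subset) pair."""
    names = list(predictors)
    best = None
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            X = [[1.0] + [predictors[nm][i] for nm in subset]
                 for i in range(len(y))]
            score = r_squared(X, y, ols_fit(X, y))
            if best is None or score > best[0]:
                best = (score, subset)
    return best

# Made-up example: y depends on "a" but not on "b"
score, chosen = best_subset(
    {"a": [1, 2, 3, 4, 5], "b": [2, 1, 4, 3, 5]},
    [3, 5, 7, 9, 11])
```

Exhaustive search is only feasible for a handful of candidate predictors, which matches the small per-group predictor sets implied here; larger problems use stepwise or regularized selection instead.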
Evaluation of the validated soil moisture product from the SMAP radiometer
USDA-ARS?s Scientific Manuscript database
In this study, we used a multilinear regression approach to retrieve surface soil moisture from NASA’s Soil Moisture Active Passive (SMAP) satellite data to create a global dataset of surface soil moisture which is consistent with ESA’s Soil Moisture and Ocean Salinity (SMOS) satellite retrieved sur...
NASA Technical Reports Server (NTRS)
Burke, Michael W.; Leardi, Riccardo; Judge, Russell A.; Pusey, Marc L.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Full factorial experimental design incorporating multi-linear regression analysis of the experimental data allows quick identification of main trends and effects using a limited number of experiments. In this study these techniques were employed to identify the effect of precipitant concentration, supersaturation, and the presence of an impurity, the physiological lysozyme dimer, on the nucleation rate and crystal dimensions of the tetragonal form of chicken egg white lysozyme. Decreasing precipitant concentration, increasing supersaturation, and increasing impurity were found to increase crystal numbers. The crystal axial ratio decreased with increasing precipitant concentration, independent of impurity.
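A full factorial design of the kind used here, with three factors (precipitant concentration, supersaturation, impurity), can be enumerated directly. The two-level -1/+1 coding below is the standard convention for such designs, not a detail taken from the paper:

```python
from itertools import product

def full_factorial(n_factors):
    """All runs of a two-level full factorial design, with each factor
    coded -1 (low) / +1 (high).  Main effects and interactions are then
    estimated by regressing the response on these coded columns."""
    return list(product((-1, 1), repeat=n_factors))

# Three factors: precipitant concentration, supersaturation, impurity level
runs = full_factorial(3)  # 2**3 = 8 experimental runs
```

Each tuple in `runs` is one experiment; fitting a multi-linear regression of crystal number (or axial ratio) on the coded columns yields the main effects the abstract reports.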
Multilinear Computing and Multilinear Algebraic Geometry
2016-08-10
landmark paper titled "Most tensor problems are NP-hard" (see [14] in Section 3) in the Journal of the ACM, the premier journal in Computer Science • "Higher-order cone programming," Machine Learning Thematic Trimester, International Centre for Mathematics and Computer Science, Toulouse, France • ...geometry-and-data-analysis • 2014 SIMONS INSTITUTE WORKSHOP: Workshop on Tensors in Computer Science and Geometry, University of California, Berkeley, CA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction combining feature selection with feature extraction was proposed. The feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed using a back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than that of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.
NASA Astrophysics Data System (ADS)
Kumar, Vandhna; Meyssignac, Benoit; Melet, Angélique; Ganachaud, Alexandre
2017-04-01
Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western South Pacific, where the total sea level rise over the last 60 years is up to 3 times the global average. In this study, we attempt to reconstruct sea levels at selected sites in the region (Suva and Lautoka in Fiji, and Noumea in New Caledonia) as a multiple-linear regression of atmospheric and oceanic variables. We focus on interannual-to-decadal scale variability and lower frequencies (including the global mean sea level rise) over the 1979-2014 period. Sea levels are taken from tide gauge records and the ORAS4 reanalysis dataset, and are expressed as a sum of steric and mass changes as a preliminary step. The key development in our methodology is using leading wind stress curl as a proxy for the thermosteric component, based on the knowledge that wind stress curl anomalies can modulate the thermocline depth, and hence sea level, via Rossby wave propagation. The analysis is primarily based on the correlation between local sea level and selected predictors, the dominant one being wind stress curl. In the first step, proxy boxes for wind stress curl are determined via the regions of highest correlation. The proportion of sea level explained via linear regression is then removed, leaving a residual. This residual is then correlated with other locally acting potential predictors: halosteric sea level, the zonal and meridional wind stress components, and sea surface temperature. The statistically significant predictors are used in a multi-linear regression function to simulate the observed sea level. The method is able to reproduce between 40% and 80% of the variance in observed sea level. Based on the skill of the model, it has high potential in sea level projection and downscaling studies.
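The two-step procedure described above (regress sea level on the wind stress curl proxy, then correlate the residual with the remaining candidate predictors) can be sketched as follows. The numbers and helper names are illustrative, not the paper's data:

```python
def simple_regression(x, y):
    """Closed-form least squares fit of y ≈ a + b*x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

def residual_after(x, y):
    """Step 1: remove the part of y linearly explained by x
    (here x plays the role of the wind stress curl proxy)."""
    a, b = simple_regression(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Toy illustration (made-up numbers):
curl = [0.1, 0.3, 0.2, 0.5, 0.4]        # wind stress curl proxy
sst = [1.0, 0.0, 2.0, 1.0, 3.0]         # a candidate second predictor
sea_level = [0.1 + 2.0 * c + 0.5 * s for c, s in zip(curl, sst)]
resid = residual_after(curl, sea_level)
# Step 2: resid would now be correlated against sst, the wind stress
# components, and halosteric sea level to pick further predictors.
```

Significant predictors found in step 2 are then combined with the curl proxy in one multi-linear regression for the final reconstruction.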
NASA Astrophysics Data System (ADS)
Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.
2000-10-01
In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (MRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel
2009-04-01
A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.
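The reference AUC computation mentioned above, the trapezoidal rule over measured concentration-time points, is straightforward to sketch. The 4-point profile below is made up for illustration; the validation study used 8 measurements:

```python
def auc_trapezoidal(times, concentrations):
    """Area under the concentration-time curve by the linear
    trapezoidal rule: sum of 0.5*(C_i + C_{i+1})*(t_{i+1} - t_i)."""
    auc = 0.0
    for i in range(len(times) - 1):
        auc += 0.5 * (concentrations[i] + concentrations[i + 1]) \
               * (times[i + 1] - times[i])
    return auc

# Illustrative 4-point profile (hours, mg/L); values are made up:
t = [0.0, 0.5, 2.0, 12.0]
c = [1.0, 8.0, 4.0, 1.0]
area = auc_trapezoidal(t, c)  # -> 36.25 mg*h/L
```

A limited-sampling MLR estimator of the kind the study compares would then take the form AUC ≈ b0 + b1·C(t1) + b2·C(t2) + b3·C(t3), with the coefficients fitted against such trapezoidal reference AUCs.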
Farías-Valenzuela, Claudio; Pérez-Luco, Cristian; Ramírez-Campillo, Rodrigo; Álvarez, Cristian; Castro-Sepúlveda, Mauricio
Handgrip strength (HS) and peak oxygen consumption (Vo2peak) are powerful predictors of cardiovascular risk, although it is unknown which of the two variables is the better predictor. The objective of the following study was to relate HS and Vo2peak to cardiovascular risk markers in older Chilean women. Physically active adult women (n=51; age, 69±4.7 years) participated in this study. HS and Vo2peak were evaluated and related to the anthropometric variables of body mass, body mass index (BMI), waist circumference (WC), hip circumference (HC), waist ratio (WR), and waist-height ratio (WHR), as well as to the cardiovascular variables of systolic (SBP) and diastolic (DBP) blood pressure and cardiac recovery in one minute (RHR1). A multilinear regression model was used for the analysis of the associated variables (P<.05). The cardiovascular risk markers associated (P<.05) with the handgrip strength of the dominant limb (HSDL) were body mass, BMI, WR, and WHR. The handgrip strength of the non-dominant limb (HSNDL) was associated with body mass. Vo2peak was associated with body mass, BMI, HC and RHR1. The multilinear regression model showed a value of r=0.43 for HSDL, r=0.39 for HSNDL and r=0.69 for Vo2peak. Although HS and Vo2peak were related to cardiovascular risk markers, Vo2peak offers greater associative power with these cardiovascular risk factors. Copyright © 2017 SEGG. Publicado por Elsevier España, S.L.U. All rights reserved.
Modeling of bromate formation by ozonation of surface waters in drinking water treatment.
Legube, Bernard; Parinet, Bernard; Gelinet, Karine; Berne, Florence; Croue, Jean-Philippe
2004-04-01
The main objective of this paper is to develop statistically and chemically rational models for bromate formation by ozonation of clarified surface waters. The results presented here show that bromate formation by ozonation of natural waters in drinking water treatment is directly proportional to the "Ct" value ("Ctau" in this study). Moreover, this proportionality strongly depends on many parameters: increasing pH, temperature and bromide level lead to an increase in bromate formation, while ammonia and dissolved organic carbon concentrations cause the reverse effect. Taking into account the limitations of theoretical modeling, we proposed to predict bromate formation by stochastic simulations (multi-linear regression and artificial neural network methods) from 40 experiments (BrO3(-) vs. "Ctau") carried out with three sand-filtered waters sampled at three different waterworks. With seven selected variables we used a simple neural network architecture, optimized with "neural connection" from SPSS Inc./Recognition Inc. Bromate modeling by artificial neural networks gives better results than multi-linear regression. The artificial neural network model allowed us to rank variables in decreasing order of influence (for the cases studied, on our variable scale): "Ctau", [N-NH4(+)], [Br(-)], pH, temperature, DOC, alkalinity.
Martínez Gila, Diego Manuel; Cano Marchal, Pablo; Gómez Ortega, Juan; Gámez García, Javier
2018-03-25
Normally olive oil quality is assessed by chemical analysis according to international standards. These norms define chemical and organoleptic markers, and depending on the markers, the olive oil can be labelled as lampante, virgin, or extra virgin olive oil (EVOO), the last being an indicator of top quality. The polyphenol content is related to EVOO organoleptic features, and different scientific works have studied the positive influence that these compounds have on human health. The work carried out in this paper focuses on studying relations between the polyphenol content of olive oil samples and their spectral response in the near infrared range. In this context, several acquisition parameters have been assessed to optimize the measurement process within the virgin olive oil production process. The best regression model reached a mean error of 156.14 mg/kg in leave-one-out cross-validation, and the highest regression coefficient was 0.81 through holdout validation.
NASA Astrophysics Data System (ADS)
Singh, Veena D.; Daharwal, Sanjay J.
2017-01-01
Three multivariate calibration spectrophotometric methods were developed for simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely multi-linear regression calibration (MLRC), the trilinear regression calibration (TLRC) method and the classical least squares (CLS) method. The selectivity of the proposed methods was studied by analyzing a laboratory-prepared ternary mixture, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL⁻¹, 5-40 μg mL⁻¹ and 5-40 μg mL⁻¹ for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively useful for routine quality control analysis of these drugs in commercial tablet dosage form.
NASA Astrophysics Data System (ADS)
Sathiyamoorthi, K.; Mala, V.; Sakthinathan, S. P.; Kamalakkannan, D.; Suresh, R.; Vanangamudi, G.; Thirunarayanan, G.
2013-08-01
A total of 38 aryl (E)-2-propen-1-ones, including nine substituted styryl 4-iodophenyl ketones, have been synthesised using a solvent-free SiO2-H3PO4-catalyzed Aldol condensation between the respective methyl ketones and substituted benzaldehydes under microwave irradiation. The yields of the ketones are more than 80%. The synthesised chalcones were characterized by their analytical, physical and spectroscopic data. The spectral frequencies of the synthesised substituted styryl 4-iodophenyl ketones have been correlated with Hammett substituent constants and F and R parameters using single and multi-linear regression analysis. The antimicrobial activities of the 4-iodophenyl chalcones have been studied using the Bauer-Kirby method.
NASA Technical Reports Server (NTRS)
Burke, Michael W.; Judge, Russell A.; Pusey, Marc L.; Rose, M. Franklin (Technical Monitor)
2000-01-01
Full factorial experiment design incorporating multi-linear regression analysis of the experimental data allows the main trends and effects to be quickly identified while using only a limited number of experiments. These techniques were used to identify the effect of precipitant concentration and the presence of an impurity, the physiological lysozyme dimer, on the nucleation rate and crystal dimensions of the tetragonal form of chicken egg white lysozyme. Increasing precipitant concentration was found to decrease crystal numbers, the magnitude of this effect also depending on the supersaturation. The presence of the dimer generally increased nucleation. The crystal axial ratio decreased with increasing precipitant concentration independent of impurity.
Mosing, Martina; Waldmann, Andreas D.; MacFarlane, Paul; Iff, Samuel; Auer, Ulrike; Bohm, Stephan H.; Bettschart-Wolfensberger, Regula; Bardell, David
2016-01-01
This study evaluated the breathing pattern and distribution of ventilation in horses prior to and following recovery from general anaesthesia using electrical impedance tomography (EIT). Six horses were anaesthetised for 6 hours in dorsal recumbency. Arterial blood gas and EIT measurements were performed 24 hours before (baseline) and 1, 2, 3, 4, 5 and 6 hours after the horses stood following anaesthesia. At each time point 4 representative spontaneous breaths were analysed. The percentage of the total breath length during which impedance remained greater than 50% of the maximum inspiratory impedance change (breath holding), the fraction of total tidal ventilation within each of four stacked regions of interest (ROI) (distribution of ventilation), and the filling time and inflation period of seven ROI evenly distributed over the dorso-ventral height of the lungs were calculated. Mixed effects multi-linear regression and linear regression were used, and significance was set at p<0.05. All horses demonstrated inspiratory breath holding until 5 hours after standing. No change from baseline was seen in the distribution of ventilation during inspiration. Compared to baseline, filling time and inflation period were more rapid and shorter in ventral ROI, and slower and longer in the most dorsal ROI. In the mixed effects multi-linear regression, breath holding was significantly correlated with PaCO2 in both the univariate and multivariate regression. Following recovery from anaesthesia, horses showed inspiratory breath holding during which gas redistributed from ventral into dorsal regions of the lungs. This suggests auto-recruitment of lung tissue which would have been dependent and likely atelectatic during anaesthesia. PMID:27331910
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class, and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi
2009-11-01
Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T(N)) were developed using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: a training set, and an internal and an external test set. The training and internal test sets were used for ANN model development, and the external test set was used to evaluate the predictive power of the model. In order to build the models, a set of six descriptors was selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negative charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of the moment of inertia, which encode geometrical and electronic characteristics of molecules. These descriptors were used as inputs to the ANN. The optimized ANN model had a 6:6:1 topology. The standard errors in the calculation of T(N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a cross-validation test was performed, which produced the statistic Q2 = 0.9796 and a standard deviation of 2.67 based on the predicted residual sum of squares. A diversity test was also performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of ANN for the prediction of T(N) for liquid crystals using molecular structural descriptors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qichun; Zhang, Xuesong; Xu, Xingya
Riverine carbon cycling is an important but insufficiently investigated component of the global carbon cycle. Analyses of environmental controls on riverine carbon cycling are critical for an improved understanding of the mechanisms regulating carbon processing and storage along the terrestrial-aquatic continuum. Here, we compile and analyze riverine dissolved organic carbon (DOC) concentration data from 1402 United States Geological Survey (USGS) gauge stations to examine the spatial variability and environmental controls of DOC concentrations in United States (U.S.) surface waters. DOC concentrations exhibit high spatial variability, with an average of 6.42 ± 6.47 mg C/L (mean ± standard deviation). In general, high DOC concentrations occur in the Upper Mississippi River basin and the Southeastern U.S., while low concentrations are mainly distributed in the Western U.S. Single-factor analysis indicates that the slope of drainage areas, wetlands, forests, the percentage of first-order streams, and instream nutrients (such as nitrogen and phosphorus) pronouncedly influence DOC concentrations, but the explanatory power of each bivariate model is lower than 35%. Analyses based on general multi-linear regression models suggest DOC concentrations are jointly impacted by multiple factors. Soil properties mainly show positive correlations with DOC concentrations; forest and shrub lands correlate positively with DOC concentrations, while urban areas and croplands demonstrate negative impacts; total instream phosphorus and dam density correlate positively with DOC concentrations. Notably, the relative importance of these environmental controls varies substantially across major U.S. water resource regions. In addition, DOC concentrations and environmental controls also show significant variability from small streams to large rivers, which may be caused by changing carbon sources and removal rates across river orders.
In sum, our results reveal that a general multi-linear regression analysis of twenty-one terrestrial and aquatic environmental factors can partially explain (56%) the variation in DOC concentration. This study highlights the complexity of the interactions among these environmental factors in determining DOC concentrations, and thus calls for process-based, non-linear methodologies to constrain uncertainties in riverine DOC cycling.
Multilinear Computing and Multilinear Algebraic Geometry
2016-08-10
[Standard report documentation page; abstract not recoverable. Subject terms: tensors, multilinearity, algebraic geometry, numerical computations, computational tractability. Distribution A: approved for public release.]
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses of sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and on the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. The lowest RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.
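The regression step at the heart of a super-ensemble is an ordinary multi-linear fit of the observed field onto the ensemble members over a training window. A minimal sketch with synthetic data (the three member biases, the noise level, the grid size, and the 15-day window are invented stand-ins for illustration, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
biases = (0.5, -0.2, 0.1)  # hypothetical systematic biases of three members

def make_members(truth):
    """Stack three 'forecasts' = truth + member bias + noise."""
    return np.stack([truth + rng.normal(b, 0.3, size=truth.shape)
                     for b in biases], axis=-1)

# Synthetic observed SST over a 15-day training period at 100 grid points.
train_truth = rng.normal(15.0, 1.0, size=(15, 100))
train_members = make_members(train_truth)          # shape (15, 100, 3)

# Super-ensemble: multi-linear regression of truth on the member forecasts,
# learning one weight per member plus an intercept.
X = np.column_stack([train_members.reshape(-1, 3), np.ones(15 * 100)])
w, *_ = np.linalg.lstsq(X, train_truth.reshape(-1), rcond=None)

# Apply the learned weights to an independent day.
test_truth = rng.normal(15.0, 1.0, size=100)
test_members = make_members(test_truth)            # shape (100, 3)
analysis = test_members @ w[:3] + w[3]

rmse_mean = np.sqrt(np.mean((test_members.mean(axis=1) - test_truth) ** 2))
rmse_sse = np.sqrt(np.mean((analysis - test_truth) ** 2))
print(rmse_mean, rmse_sse)
```

Because the regression absorbs the member biases into the weights and intercept, the super-ensemble analysis should beat the plain ensemble mean on independent data whenever those biases persist.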
Solid harmonic wavelet scattering for predictions of molecule properties
NASA Astrophysics Data System (ADS)
Eickenberg, Michael; Exarchakis, Georgios; Hirn, Matthew; Mallat, Stéphane; Thiry, Louis
2018-06-01
We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant "solid harmonic scattering coefficients" that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach DFT precision while remaining interpretable.
Compressive strength of human openwedges: a selection method
NASA Astrophysics Data System (ADS)
Follet, H.; Gotteland, M.; Bardonnet, R.; Sfarghiu, A. M.; Peyrot, J.; Rumelhart, C.
2004-02-01
A series of 44 samples of bone wedges of human origin, intended for allograft openwedge osteotomy and obtained without particular precautions during hip arthroplasty, were re-examined. After chemical viral-inactivation treatment, lyophilisation and radio-sterilisation (intended to produce optimal health safety), the compressive strength, independent of age, sex and the height of the sample (or angle of cut), proved to be too widely dispersed [10-158 MPa] in the first study. We propose a method for selecting samples which takes into account their geometry (width, length, thicknesses, cortical surface area). Statistical methods (Principal Components Analysis (PCA), Hierarchical Cluster Analysis, Multilinear regression) allowed final selection of 29 samples having a mean compressive strength σ_max = 103 ± 26 MPa, with a range of [61-158 MPa]. These values are equivalent to or greater than those of materials currently used in openwedge osteotomy.
Permeability Evaluation Through Chitosan Membranes Using Taguchi Design
Sharma, Vipin; Marwaha, Rakesh Kumar; Dureja, Harish
2010-01-01
In the present study, chitosan membranes capable of imitating the permeation characteristics of diclofenac diethylamine across animal skin were prepared using a cast-drying method. The effects of the concentration of chitosan, the concentration of cross-linking agent (NaTPP), and the cross-linking time were studied using a Taguchi design, which ranked the concentration of chitosan as the most important factor influencing the permeation parameters of diclofenac diethylamine. The flux of the diclofenac diethylamine solution through the optimized chitosan membrane (T9) was found to be comparable to that obtained across rat skin. The mathematical model developed using multilinear regression analysis can be used to formulate chitosan membranes that mimic the desired permeation characteristics. The developed chitosan membranes can be utilized as a substitute for animal skin in in vitro permeation studies. PMID:21179329
Collaborative Investigations of Shallow Water Optics Problems
2004-12-01
Appendix E. Reprint of: Hoge, F. E., P. E. Lyon, C. D. Mobley, and L. K. Sundman, 2003. Radiative transfer equation inversion: Theory and shape factor models for retrieval of oceanic inherent optical properties, Int. J. Remote Sensing, 25(21), 4829-4834.
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1993-01-01
Research activities performed during the period 29 June 1993 through 31 August 1993 are summarized. The robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach that reduces the computational burden of checking the robust stability of systems with multilinear uncertainty was found; this technique, called 'stability by linear process', yields an algorithm. On the analysis side, we obtained a robustness criterion for families of polynomials whose coefficients are multilinear affine functions in the coefficient space, as well as a result on the robust stability of diamond families of polynomials with complex coefficients. We also obtained limited results for SPR design and provide a framework for solving ACS. Copies of the outline of our results, together with an administrative issue, are provided in the appendix.
Yang, Qichun; Zhang, Xuesong; Xu, Xingya; ...
2017-05-29
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge, especially when multiple latent factors are hidden in the data generation process. To represent multifactor images, multilinear models are widely used to parameterize the data, but most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics while interpreting local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE, to leverage manifold learning techniques within multilinear models. Our method theoretically links linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. In our experiments on face and gait recognition, the superior performance demonstrates that MGE represents multifactor images better than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point-scattering targets based on tensor modeling. In real-world scenarios, scatterers usually distribute in a block-sparse pattern, a structural feature scarcely utilized in previous studies of SAR imaging. Our work takes advantage of this property of the target scene to construct a multi-linear sparse reconstruction algorithm for SAR imaging, introducing multi-linear block sparsity into higher-order singular value decomposition (SVD) together with a dictionary-construction procedure. Simulation experiments with ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbances that often degrade the imaging quality of conventional methods. The computational resource requirements are also investigated: the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging results for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wilson, Christopher D.
2003-01-01
The development of a pressure-dependent constitutive model with combined multilinear kinematic and isotropic hardening is presented. The constitutive model is implemented using the ABAQUS user material subroutine (UMAT). First, the pressure-dependent plasticity model is derived. Following this, the combined bilinear and combined multilinear hardening equations are developed for von Mises plasticity theory. The hardening rule equations are then modified to include pressure dependency, and the method for implementing the new constitutive model into ABAQUS is given.
Peiris, R H; Jaklewicz, M; Budman, H; Legge, R L; Moresoli, C
2013-06-15
A fluorescence excitation-emission matrix (EEM) approach together with principal component analysis (PCA) was used to assess the hydraulically irreversible fouling of three pilot-scale ultrafiltration (UF) systems containing full-scale and bench-scale hollow-fiber membrane modules in drinking water treatment. These systems were operated for at least three months with extensive cycles of permeation, combined back-pulsing and scouring, and chemical cleaning. The principal component (PC) scores generated from the PCA of the fluorescence EEMs were found to be related to humic substances (HS), protein-like matter, and colloidal/particulate matter content. PC scores of HS- and protein-like matter of the UF feed water, when considered separately, showed reasonably good correlations with the rate of hydraulically irreversible fouling for long-term UF operation. In contrast, comparatively weaker correlations between PC scores of colloidal/particulate matter and the rate of hydraulically irreversible fouling were obtained for all UF systems. Since individual correlations could not fully explain the evolution of the rate of irreversible fouling, multi-linear regression models were developed to relate the combined effect of HS-like, protein-like, and colloidal/particulate matter PC scores to the rate of hydraulically irreversible fouling for each specific UF system. These multi-linear regression models revealed significant individual and combined contributions of HS- and protein-like matter to the rate of hydraulically irreversible fouling, with protein-like matter generally showing the greatest contribution; the contribution of colloidal/particulate matter was not as significant. The addition of polyaluminum chloride as coagulant to the UF feed appeared to have a positive impact in reducing hydraulically irreversible fouling by these constituents.
The proposed approach has applications in quantifying the individual and synergistic contribution of major natural water constituents to the rate of hydraulically irreversible membrane fouling and shows potential for controlling UF irreversible fouling in the production of drinking water. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Boeke, R.; Taylor, P. C.; Li, Y.
2017-12-01
Arctic cloud amount as simulated in CMIP5 models displays large intermodel spread; models disagree on the processes important for cloud formation as well as on the radiative impact of clouds. The radiative response to cloud forcing can be better assessed when the drivers of Arctic cloud formation are known. Arctic cloud amount (CA) is a function of both atmospheric and surface conditions, and it is crucial to separate the influences of individual processes to understand why the models differ. This study uses a multilinear regression methodology to determine cloud changes using three variables as predictors: lower tropospheric stability (LTS), 500-hPa vertical velocity (ω500), and sea ice concentration (SIC). These three explanatory variables were chosen because their effects on clouds can be attributed to distinct climate processes: LTS is a thermodynamic indicator of the relationship between clouds and atmospheric stability, SIC determines the interaction between clouds and the surface, and ω500 is a metric of dynamical change. Vertical, seasonal profiles of the necessary variables are obtained from the Coupled Model Intercomparison Project phase 5 (CMIP5) historical simulation, an ocean-atmosphere coupled model experiment forced with the best-estimate natural and anthropogenic radiative forcing from 1850 to 2005, and statistical significance tests are used to confirm the regression equation. A separate heuristic model will be constructed for each climate model and for observations, and models will be tested on their ability to capture the observed cloud amount and behavior. Lastly, the intermodel spread in Arctic cloud amount will be attributed to individual processes, ranking the relative contributions of each factor to shed light on emergent constraints in the Arctic cloud radiative effect.
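A regression of the kind described, with cloud amount as the response and LTS, ω500 and SIC as predictors, can be sketched in a few lines. The coefficients and distributions below are hypothetical stand-ins used only to generate synthetic data, not CMIP5 values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical number of grid-point samples

lts = rng.normal(15.0, 4.0, n)     # lower tropospheric stability (K), assumed
w500 = rng.normal(0.0, 0.05, n)    # 500-hPa vertical velocity (Pa/s), assumed
sic = rng.uniform(0.0, 1.0, n)     # sea ice concentration (fraction)

# Hypothetical "true" linear relation used only to generate cloud amount (%).
ca = 60.0 - 1.2 * lts - 80.0 * w500 + 15.0 * sic + rng.normal(0.0, 2.0, n)

# Multilinear regression: solve for the three slopes plus an intercept.
X = np.column_stack([lts, w500, sic, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, ca, rcond=None)
print(beta)  # ≈ [-1.2, -80.0, 15.0, 60.0]
```

The fitted slopes recover the generating coefficients, and their relative sizes (after standardizing the predictors) are what would rank the contributions of stability, dynamics and sea ice.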
Guidance Document for PMF Applications with the Multilinear Engine
This document serves as a guide for users of the Multilinear Engine version 2 (ME-2) for source apportionment applications utilizing positive matrix factorization (PMF). It aims to educate experienced source apportionment analysts on available ME rotational tools and provides gui...
Source contribution of PM₂.₅ at different locations on the Malaysian Peninsula.
Ee-Ling, Ooi; Mustaffa, Nur Ili Hamizah; Amil, Norhaniza; Khan, Md Firoz; Latif, Mohd Talib
2015-04-01
This study determined the source contribution of PM2.5 (particulate matter <2.5 μm) in air at three locations on the Malaysian Peninsula. PM2.5 samples were collected using a high-volume sampler equipped with quartz filters. Ion chromatography was used to determine the ionic composition of the samples, and inductively coupled plasma mass spectrometry was used to determine the concentrations of heavy metals. Principal component analysis with multilinear regression was used to identify the possible sources of PM2.5. The range of PM2.5 was between 10 ± 3 and 30 ± 7 µg m(-3). Sulfate (SO4(2-)) was the major ionic compound detected, and zinc was found to dominate the heavy metals. Source apportionment analysis revealed that motor vehicles and soil dust dominated the composition of PM2.5 in the urban area, domestic waste combustion dominated in the suburban area, and biomass burning dominated in the rural area.
Experimental and computational prediction of glass transition temperature of drugs.
Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S
2014-12-22
Glass transition temperature (Tg) is an important inherent property of an amorphous solid material that is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for the prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, support vector regression gave the best result, with an RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict the Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.
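The "simple linear regression from Tm" route can be illustrated as follows. The slope of 0.72 used to generate the synthetic data is only a rough stand-in for the well-known empirical observation that Tg is often about two-thirds to three-quarters of Tm; the scatter and temperature range are likewise invented, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# 71 hypothetical compounds: melting points (K), and glass transition
# temperatures generated as Tg ≈ 0.72 * Tm plus scatter (assumed values).
tm = rng.uniform(350.0, 550.0, 71)
tg = 0.72 * tm + rng.normal(0.0, 15.0, 71)

# Simple linear regression of Tg on Tm.
slope, intercept = np.polyfit(tm, tg, 1)
pred = slope * tm + intercept
rmse = np.sqrt(np.mean((pred - tg) ** 2))
print(slope, intercept, rmse)
```

With scatter of this magnitude the fit recovers a slope near the generating value and an RMSE on the order of the added noise, comparable in spirit to the ~19.5 K test-set RMSE reported for the Tm-based model.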
NASA Astrophysics Data System (ADS)
Vokhmyanin, M. V.; Ponyavin, D. I.
2016-12-01
The interplanetary magnetic field (IMF) By component affects the configuration of field-aligned currents (FACs), whose geomagnetic response is observed from high to low latitudes. The ground magnetic perturbations induced by FACs are opposite on the dawnside and duskside and depend upon the IMF By polarity. Based on multilinear regression analysis, we show that this effect is present at the midlatitude observatories Niemegk and Arti in the X and Y components of the geomagnetic field. This allows us to infer the IMF sector structure from the old geomagnetic records made at Ekaterinburg and Potsdam since 1850 and 1890, respectively. Geomagnetic data from various stations provide proxies of the IMF polarity that coincide for most of the nineteenth and twentieth centuries. This supports their reliability and makes them suitable for studying the large-scale IMF sector structure in the past.
Esbaugh, A J; Brix, K V; Mager, E M; Grosell, M
2011-09-01
The current study examined the acute toxicity of lead (Pb) to Ceriodaphnia dubia and Pimephales promelas in a variety of natural waters, selected to span pertinent water chemistry parameters such as calcium, pH, total CO2 and dissolved organic carbon (DOC). Acute toxicity was determined for C. dubia and P. promelas using standard 48-h and 96-h protocols, respectively. For both organisms acute toxicity varied markedly with water chemistry, with C. dubia LC50s ranging from 29 to 180 μg/L and P. promelas LC50s ranging from 41 to 3598 μg/L. Additionally, no Pb toxicity was observed for P. promelas in three alkaline natural waters. With respect to water chemistry parameters, DOC had the strongest protective effect for both organisms. A multi-linear regression (MLR) approach combining previous laboratory data and the current data was used to identify the relative importance of individual water chemistry components in predicting acute Pb toxicity for both species. As anticipated, the best-fit P. promelas MLR model combined DOC, calcium and pH. Unexpectedly, in the C. dubia MLR model the importance of pH, TCO2 and calcium was minimal, while DOC and ionic strength were the controlling water quality variables. Adjusted R(2) values of 0.82 and 0.64 for the P. promelas and C. dubia models, respectively, are comparable to previously developed biotic ligand models for other metals. Copyright © 2011 Elsevier Inc. All rights reserved.
Beelders, Theresa; de Beer, Dalene; Kidd, Martin; Joubert, Elizabeth
2018-01-01
Mangiferin, a C-glucosyl xanthone abundant in mango and honeybush, is increasingly targeted for its bioactive properties and thus for enhancing the functional properties of food. The thermal degradation kinetics of mangiferin at pH 3, 4, 5, 6 and 7 were each modeled at five temperatures between 60 and 140°C. First-order reaction models were fitted to the data using non-linear regression to determine the reaction rate constant at each pH-temperature combination. The reaction rate constant increased with increasing temperature and pH, and comparison of the rate constants at 100°C revealed an exponential relationship between the rate constant and pH. The data for each pH were also modeled with the Arrhenius equation, using non-linear and linear regression, to determine the activation energy and pre-exponential factor; activation energies decreased slightly with increasing pH. Finally, a multi-linear model taking into account both temperature and pH was developed for mangiferin degradation. Sterilization (121°C for 4 min) of honeybush extracts dissolved at pH 4, 5 and 7 did not cause noticeable degradation of mangiferin, although the multi-linear model predicted 34% degradation at pH 7. The extract matrix is postulated to exert a protective effect, as changes in potential precursor content could not fully explain the stability of mangiferin. Copyright © 2017 Elsevier Ltd. All rights reserved.
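The first-order and Arrhenius steps described above can be sketched as follows. The activation energy and pre-exponential factor (80 kJ/mol and 2×10⁹ min⁻¹) are illustrative assumptions, not the paper's fitted values for any pH:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical Arrhenius parameters for one pH (assumed, for illustration).
Ea_true, A_true = 80e3, 2.0e9          # J/mol and 1/min
T = np.array([60.0, 80.0, 100.0, 120.0, 140.0]) + 273.15  # temperatures, K

# First-order rate constants k(T) = A * exp(-Ea / (R T)).
k = A_true * np.exp(-Ea_true / (R * T))

# Linearized Arrhenius fit: ln k = ln A - (Ea/R) * (1/T).
slope, ln_a = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R
print(Ea_fit / 1e3)  # ≈ 80 kJ/mol, recovering the assumed value

# Predicted fraction of mangiferin remaining after 121°C for 4 min,
# via C(t)/C0 = exp(-k t) for a first-order reaction.
k_121 = np.exp(ln_a) * np.exp(-Ea_fit / (R * (121.0 + 273.15)))
remaining = np.exp(-k_121 * 4.0)
print(remaining)
```

The same two-step fit (rate constants per temperature, then a linearized Arrhenius regression) is what yields the activation energies and the predicted degradation under sterilization conditions.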
Modeling Incorrect Responses to Multiple-Choice Items with Multilinear Formula Score Theory.
ERIC Educational Resources Information Center
Drasgow, Fritz; And Others
This paper addresses the information revealed in incorrect option selection on multiple choice items. Multilinear Formula Scoring (MFS), a theory providing methods for solving psychological measurement problems of long standing, is first used to estimate option characteristic curves for the Armed Services Vocational Aptitude Battery Arithmetic…
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions: GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique, and it results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel; MLR is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring together with nonlinear parameters in the same optimization problem. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi-squared, obtained from the Taylor series expansion of chi-squared, is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated; the VP algorithm does not distinguish weakly nonlinear parameters from nonlinear ones and does not apply to model functions that are multi-linear combinations of nonlinear functions.
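The separable structure that GA-MLR exploits (nonlinear parameters searched globally while the linear amplitudes fall out of a least-squares solve) can be sketched on a biexponential decay. A coarse grid search stands in for the genetic algorithm here, and all lifetimes, amplitudes and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 200)

# Synthetic biexponential decay y = a1*exp(-t/tau1) + a2*exp(-t/tau2)
# with assumed parameters (a1, tau1) = (2.0, 0.8) and (a2, tau2) = (0.5, 4.0).
y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 4.0) + rng.normal(0.0, 0.01, t.size)

def mlr_step(taus):
    """For fixed nonlinear parameters (lifetimes), the linear amplitudes
    follow from multiple linear regression -- the MLR embedded in GA-MLR."""
    basis = np.column_stack([np.exp(-t / tau) for tau in taus])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.sum((basis @ amps - y) ** 2), amps

# Stand-in for the GA: exhaustive search over a grid of lifetime pairs;
# only the two nonlinear parameters are searched, never the amplitudes.
sse, amps, taus = min(
    (mlr_step((t1, t2)) + ((t1, t2),)
     for t1 in np.linspace(0.2, 2.0, 30)
     for t2 in np.linspace(2.0, 8.0, 30)),
    key=lambda r: r[0])
print(taus, amps)
```

Because the amplitudes are computed rather than searched, the optimization runs over a two-dimensional space instead of a four-dimensional one, which is precisely why GA-MLR accelerates this class of problems.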
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In that methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment-resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on the resulting fragility functions through sensitivity analysis. The findings improve capacity curves, and thereby fragility and/or vulnerability models, for generic types of structures.
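A multilinear capacity curve of the kind described (elastic branch, hardening up to a capping point, then negative stiffness) is simply a piecewise-linear function of displacement. The control points below are illustrative placeholders, not the proposed S1L parameter values:

```python
import numpy as np

# Illustrative control points: (spectral displacement, spectral acceleration).
# Elastic branch to (1.0, 0.12), hardening to the capping point (3.0, 0.15),
# then a negative-stiffness branch down to (8.0, 0.05). Values are made up.
disp = np.array([0.0, 1.0, 3.0, 8.0])
force = np.array([0.0, 0.12, 0.15, 0.05])

def capacity(d):
    """Piecewise-linear (multilinear) capacity curve; flat beyond the ends."""
    return np.interp(d, disp, force)

# Sample the elastic, capping, and degrading branches.
print(capacity(0.5), capacity(3.0), capacity(5.0))
```

In a nonlinear time history analysis this backbone would define the restoring force of the SDOF oscillator, with the negative post-capping slope controlling collapse behavior.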
Tor, Ali; Aydin, Mehmet Emin; Aydin, Senar; Tabakci, Mustafa; Beduk, Fatma
2013-11-15
An aminopropyl silica gel-immobilized calix[6]arene (C[6]APS) was used for the removal of lindane from aqueous solution using a batch sorption technique. The C[6]APS was synthesized from a p-tert-butylcalix[6]arene hexacarboxylate derivative and aminopropyl silica gel in the presence of an N,N'-diisopropylcarbodiimide coupling reagent. The sorption study was carried out as a function of solution pH, contact time, initial lindane concentration, C[6]APS dosage, and ionic strength of the solution. The matrix effect of natural water samples on the sorption efficiency of C[6]APS was also investigated. Maximum lindane removal was obtained over a wide pH range of 2-8, and sorption equilibrium was achieved within 2 h. Isotherm analysis indicated that the sorption data can be represented by both the Langmuir and Freundlich isotherm models. Increasing the ionic strength of the solutions increased the sorption efficiency, and the matrix of the natural water samples had no effect on the sorption of lindane. A regression equation was also developed using a multilinear regression model to explain the effects of the experimental variables. Copyright © 2013 Elsevier B.V. All rights reserved.
Adaptive Multilinear Tensor Product Wavelets
Weiss, Kenneth; Lindstrom, Peter
2015-08-12
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. Finally, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
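The piecewise multilinear interpolation these techniques rely on reduces, in 2D, to bilinear blending of the four cell-corner values; a minimal sketch:

```python
def bilinear(f00, f10, f01, f11, s, t):
    """Bilinearly interpolate corner values of a cell at local coords (s, t) in [0, 1]^2."""
    return ((1 - s) * (1 - t) * f00 + s * (1 - t) * f10
            + (1 - s) * t * f01 + s * t * f11)

# Value at the cell center is the average of the four corners.
center = bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5)
```

The 3D (trilinear) case blends eight corners the same way, which is exactly the per-cell evaluation the quadtree/octree meshes above perform.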
Separation mechanism of nortriptyline and amitriptyline in RPLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritti, Fabrice; Guiochon, Georges A
2005-08-01
The single and the competitive equilibrium isotherms of nortriptyline and amitriptyline were acquired by frontal analysis (FA) on the C18-bonded Discovery column, using a 28/72 (v/v) mixture of acetonitrile and water buffered with phosphate (20 mM, pH 2.70). The adsorption energy distributions (AEDs) of each compound were calculated from the raw adsorption data. Both the fitting of the adsorption data using multilinear regression analysis and the AEDs are consistent with a trimodal isotherm model. The single-component isotherm data fit well to the tri-Langmuir isotherm model. The extension to a competitive two-component tri-Langmuir isotherm model based on the best parameters of the single-component isotherms accounts well neither for the breakthrough curves nor for the overloaded band profiles measured for mixtures of nortriptyline and amitriptyline. However, it was possible to derive adjusted parameters of a competitive tri-Langmuir model based on the fitting of the adsorption data obtained for these mixtures. A very good agreement was then found between the calculated and the experimental overloaded band profiles of all the mixtures injected.
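A single-component tri-Langmuir isotherm of this kind can be sketched as a nonlinear least-squares fit; the data below are synthetic, generated from made-up parameters, not the frontal-analysis measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_langmuir(c, qs1, b1, qs2, b2, qs3, b3):
    """Three-site (tri-Langmuir) isotherm: sum of three independent Langmuir terms."""
    return (qs1 * b1 * c / (1 + b1 * c)
            + qs2 * b2 * c / (1 + b2 * c)
            + qs3 * b3 * c / (1 + b3 * c))

# Synthetic adsorption data generated from known parameters (illustrative only).
true_params = (10.0, 0.5, 5.0, 5.0, 1.0, 50.0)
conc = np.linspace(0.01, 2.0, 50)
q = tri_langmuir(conc, *true_params)

# Nonlinear least-squares fit, started near the generating parameters.
popt, _ = curve_fit(tri_langmuir, conc, q,
                    p0=(8.0, 1.0, 4.0, 4.0, 1.5, 40.0), maxfev=20000)
```

Multi-site Langmuir fits are often ill-conditioned on noisy data, which is why the abstract's AED analysis is a useful independent check on the number of adsorption sites.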
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict the future motions of those people for better path planning. Making such inferences requires estimating head pose from facial images that combine multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or the image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
Oral health status and the epidemiologic paradox within latino immigrant groups
2012-01-01
Background According to the United States census, there are 28 categories that define “Hispanic/Latinos.” This paper compares differences in oral health status between Mexican immigrants and other Latino immigrant groups. Methods Derived from a community-based sample (N = 240) in Los Angeles, this cross-sectional study uses an interview covering demographic and behavioral measures, and an intraoral examination using NIDCR epidemiologic criteria. Descriptive, bivariate, and multiple regression analyses were conducted to examine the determinants associated with the Oral Health Status Index (OHSI). Results Mexican immigrants had a significantly higher OHSI (p < .05) compared to other Latinos. The multilinear regression showed that age and gender (p < .05), percentage of untreated decayed teeth (p < .001), number of replaced missing teeth (p < .001), and attachment loss (p < .001) were significant. Conclusions Compared with the other Latino immigrants in our sample, Mexican immigrants have significantly better oral health status. This confirms the epidemiologic paradox previously found in comparisons of Mexicans with whites and African Americans. In this case of oral health status, the paradox also occurs between Mexicans and other Latinos. Therefore, when conducting oral health studies of Latinos, more consideration needs to be given to differences within Latino subgroups, such as their country of origin and their unique ethnic and cultural characteristics. PMID:22958726
QSPR using MOLGEN-QSPR: the challenge of fluoroalkane boiling points.
Rücker, Christoph; Meringer, Markus; Kerber, Adalbert
2005-01-01
By means of the new software MOLGEN-QSPR, a multilinear regression model for the boiling points of lower fluoroalkanes is established. The model is based exclusively on simple descriptors derived directly from molecular structure and nevertheless describes a broader set of data more precisely than previous attempts that used either more demanding (quantum chemical) descriptors or more demanding (nonlinear) statistical methods such as neural networks. The model's internal consistency was confirmed by leave-one-out cross-validation. The model was used to predict all unknown boiling points of fluorobutanes, and the quality of predictions was estimated by means of comparison with boiling point predictions for fluoropentanes.
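A multilinear regression QSPR fit with leave-one-out cross-validation, of the kind used here, can be sketched on toy data; the two descriptors and the linear boiling-point relationship below are made up, not MOLGEN-QSPR descriptors.

```python
import numpy as np

# Toy QSPR setup: "boiling points" as an exact linear function of two
# hypothetical structure-derived descriptors (illustration only).
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0],
              [4.0, 3.0], [5.0, 5.0], [6.0, 2.0]])
y = 10.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]

A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # multilinear regression fit

# Leave-one-out cross-validation: refit without each sample, then predict it.
loo_pred = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    loo_pred.append(A[i] @ c)
loo_pred = np.array(loo_pred)
q2 = 1 - np.sum((y - loo_pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

On real data the cross-validated q² is lower than the fitted R², which is exactly what the leave-one-out check in the abstract guards against.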
NASA Astrophysics Data System (ADS)
Zhu, Qiao; Huang, Xiao-Feng; Cao, Li-Ming; Wei, Lin-Tong; Zhang, Bin; He, Ling-Yan; Elser, Miriam; Canonaco, Francesco; Slowik, Jay G.; Bozzetti, Carlo; El-Haddad, Imad; Prévôt, André S. H.
2018-02-01
Organic aerosols (OAs), which consist of thousands of complex compounds emitted from various sources, constitute one of the major components of fine particulate matter. The traditional positive matrix factorization (PMF) method often apportions aerosol mass spectrometer (AMS) organic datasets into less meaningful or mixed factors, especially in complex urban cases. In this study, an improved source apportionment method using a bilinear model of the multilinear engine (ME-2) was applied to OAs collected during the heavily polluted season from two Chinese megacities located in the north and south with an Aerodyne high-resolution aerosol mass spectrometer (HR-ToF-AMS). We applied a rather novel procedure for utilization of prior information and selecting optimal solutions, which does not necessarily depend on other studies. Ultimately, six reasonable factors were clearly resolved and quantified for both sites by constraining one or more factors: hydrocarbon-like OA (HOA), cooking-related OA (COA), biomass burning OA (BBOA), coal combustion (CCOA), less-oxidized oxygenated OA (LO-OOA) and more-oxidized oxygenated OA (MO-OOA). In comparison, the traditional PMF method could not effectively resolve the appropriate factors, e.g., BBOA and CCOA, in the solutions. Moreover, coal combustion and traffic emissions were determined to be primarily responsible for the concentrations of PAHs and BC, respectively, through the regression analyses of the ME-2 results.
Asquith, William H.; Roussel, Meghan C.
2009-01-01
Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station specific, peak-streamflow frequency. 
As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed characteristics of drainage area, dimensionless main-channel slope, and mean annual precipitation. The residuals of the nine equations are spatially mapped, and residuals for the 10-year recurrence interval are selected for generalization to 1-degree latitude and longitude quadrangles. The generalized residual is referred to as the OmegaEM parameter and represents a generalized terrain and climate index that expresses peak-streamflow potential not otherwise represented in the three watershed characteristics. The OmegaEM parameter was assigned to each station, and using OmegaEM, nine additional regression equations are computed. Because of favorable diagnostics, the OmegaEM equations are expected to be generally reliable estimators of peak-streamflow frequency for undeveloped and ungaged stream locations in Texas. The mean residual standard error, adjusted R-squared, and percentage reduction of PRESS by use of OmegaEM are 0.30 log10, 0.86, and -21 percent, respectively. Inclusion of the OmegaEM parameter provides a substantial reduction in the PRESS statistic of the regression equations and removes considerable spatial dependency in regression residuals. Although the OmegaEM parameter requires interpretation on the part of analysts and the potential exists that different analysts could estimate different values for a given watershed, the authors suggest that typical uncertainty in the OmegaEM estimate might be about ±0.10 log10. Finally, given the two ensembles of equations reported herein and those in previous reports, hydrologic design engineers and other analysts have several different methods, which represent different analytical tracks, to make comparisons of peak-streamflow frequency estimates for ungaged watersheds in the study area.
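The PRESS statistic minimized in the approach above need not be computed by refitting n times; for a linear model it follows in closed form from the hat matrix. A generic sketch on synthetic data, not the Texas peak-streamflow regressions:

```python
import numpy as np

# PRESS (prediction error sum of squares) for a linear model, from the
# hat matrix: PRESS = sum_i (e_i / (1 - h_ii))^2.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=20)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat (projection) matrix
e = y - X @ beta                          # ordinary residuals
press = np.sum((e / (1 - np.diag(H))) ** 2)
```

This identity is what makes PRESS minimization practical as a model-selection objective across many candidate equations.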
Xia, Yinhong
2018-01-01
Suppose that the kernel K satisfies a certain Hörmander type condition. Let b be a function satisfying [Formula: see text] for [Formula: see text], and let [Formula: see text] be a family of multilinear singular integral operators, i.e., [Formula: see text] The main purpose of this paper is to establish the weighted [Formula: see text]-boundedness of the variation operator and the oscillation operator for [Formula: see text].
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1994-01-01
This annual report summarizes the research activities performed from 26 Jun. 1993 to 28 Feb. 1994. We continued to investigate the robust stability of systems whose transfer functions or characteristic polynomials are affine multilinear functions of parameters. An approach that differs from 'Stability by Linear Process' and that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found for the low-order (second- and third-order) cases. We proved a crucial theorem, the so-called Face Theorem. Previously, we had proven Kharitonov's Vertex Theorem and the Edge Theorem of Bartlett. The details of this proof are contained in the Appendix. This theorem provides a tool for describing the boundary of the image of the affine multilinear function. For SPR design, we have developed some new results. The third objective for this period was to design a controller for IHM by the H-infinity optimization technique. The details are presented in the Appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal component analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated.
Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
Quantification of trace metals in infant formula premixes using laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Cama-Moncunill, Raquel; Casado-Gavalda, Maria P.; Cama-Moncunill, Xavier; Markiewicz-Keszycka, Maria; Dixit, Yash; Cullen, Patrick J.; Sullivan, Carl
2017-09-01
Infant formula is a human milk substitute generally based upon fortified cow milk components. In order to mimic the composition of breast milk, trace elements such as copper, iron and zinc are usually added in a single operation using a premix. The correct addition of premixes must be verified to ensure that the target levels in infant formulae are achieved. In this study, a laser-induced breakdown spectroscopy (LIBS) system was assessed as a fast validation tool for trace element premixes. LIBS is a promising emission spectroscopic technique for elemental analysis, which offers real-time analyses, little to no sample preparation and ease of use. LIBS was employed for copper and iron determinations of premix samples ranging approximately from 0 to 120 mg/kg for Cu and from 0 to 1640 mg/kg for Fe. LIBS spectra are affected by several parameters, hindering subsequent quantitative analyses. This work aimed to test three matrix-matched calibration approaches (simple-linear regression, multi-linear regression and partial least squares regression (PLS)) as means of enhancing the precision and accuracy of LIBS quantitative analysis. All calibration models were first developed using a training set and then validated with an independent test set. PLS yielded the best results. For instance, the PLS model for copper provided a coefficient of determination (R²) of 0.995 and a root mean square error of prediction (RMSEP) of 14 mg/kg. Furthermore, LIBS was employed to penetrate through the samples by repetitively measuring the same spot. Consequently, LIBS spectra can be obtained as a function of sample layers. This information was used to explore whether measuring deeper into the sample could reduce possible surface-contaminant effects and provide better quantifications.
Structure-seeking multilinear methods for the analysis of fMRI data.
Andersen, Anders H; Rayens, William S
2004-06-01
In comprehensive fMRI studies of brain function, the data structures often contain higher-order ways such as trial, task condition, subject, and group in addition to the intrinsic dimensions of time and space. While multivariate bilinear methods such as principal component analysis (PCA) have been used successfully for extracting information about spatial and temporal features in data from a single fMRI run, the need to unfold higher-order data sets into bilinear arrays has led to decompositions that are nonunique and to the loss of multiway linkages and interactions present in the data. These additional dimensions or ways can be retained in multilinear models to produce structures that are unique and which admit interpretations that are neurophysiologically meaningful. Multiway analysis of fMRI data from multiple runs of a bilateral finger-tapping paradigm was performed using the parallel factor (PARAFAC) model. A trilinear model was fitted to a data cube of dimensions voxels by time by run. Similarly, a quadrilinear model was fitted to a higher-way structure of dimensions voxels by time by trial by run. The spatial and temporal response components were extracted and validated by comparison to results from traditional SVD/PCA analyses based on scenarios of unfolding into lower-order bilinear structures.
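The trilinear PARAFAC model fitted here can be sketched, for rank one, as an alternating least squares loop in NumPy; real fMRI analyses use higher ranks and dedicated libraries, and the tensor below is synthetic.

```python
import numpy as np

# Rank-1 PARAFAC (trilinear) fit by alternating least squares on an exactly
# rank-1 synthetic tensor (voxels x time x run in the abstract's terms).
rng = np.random.default_rng(1)
a, b, c = rng.normal(size=8), rng.normal(size=6), rng.normal(size=5)
T = np.einsum('i,j,k->ijk', a, b, c)

u, v, w = rng.normal(size=8), rng.normal(size=6), rng.normal(size=5)
for _ in range(50):
    # Each update is the least-squares solution for one factor given the others.
    u = np.einsum('ijk,j,k->i', T, v, w) / ((v @ v) * (w @ w))
    v = np.einsum('ijk,i,k->j', T, u, w) / ((u @ u) * (w @ w))
    w = np.einsum('ijk,i,j->k', T, u, v) / ((u @ u) * (v @ v))
T_hat = np.einsum('i,j,k->ijk', u, v, w)
```

Unlike unfolded SVD/PCA, the recovered factors are unique up to scaling, which is the essential uniqueness property the abstract exploits.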
Evaluation of Patient Satisfaction with Tuberculosis Services in Southern Nigeria
Onyeonoro, Ugochukwu U; Chukwu, Joseph N; Nwafor, Charles C; Meka, Anthony O; Omotowo, Babatunde I; Madichie, Nelson O; Ogbudebe, Chidubem; Ikebudu, Joy N; Oshi, Daniel C; Ekeke, Ngozi; Paul, Nsirimobu I; Duru, Chukwuma B
2015-01-01
OBJECTIVE Knowing tuberculosis (TB) patients’ satisfaction enables TB program managers to identify gaps in service delivery and institute measures to address them. This study is aimed at evaluating patients’ satisfaction with TB services in southern Nigeria. MATERIALS AND METHODS A total of 378 patients accessing TB care were studied using a validated Patient Satisfaction (PS-38) questionnaire on various aspects of TB services. Factor analysis was used to identify eight factors related to TB patient satisfaction. Test of association was used to study the relation between patient satisfaction scores and patient and health facility characteristics, while multilinear regression analysis was used to identify predictors of patient satisfaction. RESULTS Highest satisfaction was reported for adherence counseling and access to care. Patient characteristics were associated with overall satisfaction, registration, adherence counseling, access to care, amenities, and staff attitude, while health system factors were associated with staff attitude, amenities, and health education. Predictors of satisfaction with TB services included gender, educational status, if tested for HIV, distance, payment for TB services, and level and type of health-care facility. CONCLUSION Patient- and health system–related factors were found to influence patient satisfaction and, hence, should be taken into consideration in TB service programing. PMID:26508872
Comparison of Nimbus-7 SMMR and GOES-1 VISSR Atmospheric Liquid Water Content.
NASA Astrophysics Data System (ADS)
Lojou, Jean-Yves; Frouin, Robert; Bernard, René
1991-02-01
Vertically integrated atmospheric liquid water content derived from Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) brightness temperatures and from GOES-1 Visible and Infrared Spin-Scan Radiometer (VISSR) radiances in the visible are compared over the Indian Ocean during MONEX (monsoon experiment). In the retrieval procedure, the Wilheit and Chang algorithm and Stephens' parameterization scheme are applied to the SMMR and VISSR data, respectively. The results indicate that in the 0-100 mg cm⁻² range of liquid water content considered, the correlation coefficient between the two types of estimates is 0.83 (0.81-0.85 at the 99 percent confidence level). The Wilheit and Chang algorithm, however, yields values lower than those obtained with Stephens' scheme by 24.5 mg cm⁻² on average, and occasionally the SMMR-based values are negative. Alternative algorithms are proposed for use with SMMR data, which eliminate the bias, augment the correlation coefficient, and reduce the rms difference. These algorithms include the Wilheit and Chang formula with modified coefficients (multilinear regression), the Wilheit and Chang formula with the same coefficients but different equivalent atmospheric temperatures for each channel (temperature bias adjustment), and a second-order polynomial in brightness temperatures at 18, 21, and 37 GHz (polynomial development). When applied to a dataset excluded from the regression dataset, the multilinear regression algorithm provides the best results, namely a 0.91 correlation coefficient, a 5.2 mg cm⁻² rms (residual) difference, and a 2.9 mg cm⁻² bias. Simply shifting the liquid water content predicted by the Wilheit and Chang algorithm does not yield as good comparison statistics, indicating that the occasional negative values are not due only to a bias. The more accurate SMMR-derived liquid water content allows one to better evaluate cloud transmittance in the solar spectrum, at least in the area and during the period analyzed.
Combining this cloud transmittance with a clear-sky model would provide ocean surface insolation estimates from SMMR data alone.
Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.
2016-01-01
In the smart grid paradigm, consumer demands are random and time-dependent, and are best described by stochastic probabilities. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on deterministic and stochastic models of consumer demand. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probabilities outcome of the utility revenues is based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations were performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
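The Monte Carlo validation step can be sketched by sampling demands from a multivariate Gaussian and propagating them through a revenue function; the mean vector, covariance matrix, tariff and fixed revenue below are invented for illustration.

```python
import numpy as np

# Hypothetical MW demands at three hours of the day, with correlated noise.
rng = np.random.default_rng(4)
mean = np.array([50.0, 80.0, 60.0])
cov = np.array([[16.0, 4.0, 2.0],
                [4.0, 25.0, 5.0],
                [2.0, 5.0, 9.0]])
tariff = 30.0      # $/MWh, drives the variable revenue
fixed = 1000.0     # $ fixed revenue

# Monte Carlo: sample demands, compute revenue, compare to the analytic mean.
demands = rng.multivariate_normal(mean, cov, size=100_000)
revenue = fixed + tariff * demands.sum(axis=1)
mc_mean = revenue.mean()
analytic_mean = fixed + tariff * mean.sum()
```

Because revenue is linear in demand here, the Monte Carlo mean should match the analytic mean closely, which is the kind of accuracy check the abstract describes.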
NASA Astrophysics Data System (ADS)
Chen, Zhixiang; Fu, Bin
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials, and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to be ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
QSAR models for anti-malarial activity of 4-aminoquinolines.
Masand, Vijay H; Toropov, Andrey A; Toropova, Alla P; Mahajan, Devidas T
2014-03-01
In the present study, predictive quantitative structure-activity relationship (QSAR) models for the anti-malarial activity of 4-aminoquinolines have been developed. CORAL, which is freely available on the internet (http://www.insilico.eu/coral), has been used as a tool of QSAR analysis to establish a statistically robust QSAR model of the anti-malarial activity of 4-aminoquinolines. Six random splits into the visible sub-system of training and the invisible sub-system of validation were examined. Statistical qualities for these splits vary, but in all cases the statistical quality of prediction for anti-malarial activity was quite good. The optimal SMILES-based descriptor was used to derive the single-descriptor QSAR model for a data set of 112 aminoquinolines. All the splits had r² > 0.85 and r² > 0.78 for the subtraining and validation sets, respectively. The three-parameter multilinear regression (MLR) QSAR model has Q² = 0.83, R² = 0.84 and F = 190.39. The anti-malarial activity has a strong correlation with the presence/absence of nitrogen and oxygen at a topological distance of six.
Multiple summing operators on C(K) spaces
NASA Astrophysics Data System (ADS)
Pérez-García, David; Villanueva, Ignacio
2004-04-01
In this paper, we characterize, for 1 ≤ p < ∞, the multiple (p,1)-summing multilinear operators on the product of C(K) spaces in terms of their representing polymeasures. As consequences, we obtain a new characterization of (p,1)-summing linear operators on C(K) in terms of their representing measures and a new multilinear characterization of L∞ spaces. We also solve a problem stated by M.S. Ramanujan and E. Schock, improve a result of H.P. Rosenthal and S.J. Szarek, and give new results about polymeasures.
Prediction of Mass Spectral Response Factors from Predicted Chemometric Data for Druglike Molecules
NASA Astrophysics Data System (ADS)
Cramer, Christopher J.; Johnson, Joshua L.; Kamel, Amin M.
2017-02-01
A method is developed for the prediction of mass spectral ion counts of drug-like molecules using in silico calculated chemometric data. Various chemometric data, including polar and molecular surface areas, aqueous solvation free energies, and gas-phase and aqueous proton affinities were computed, and a statistically significant relationship between measured mass spectral ion counts and the combination of aqueous proton affinity and total molecular surface area was identified. In particular, through multilinear regression of ion counts on predicted chemometric data, we find that log10(MS ion counts) = -4.824 + c1·PA + c2·SA, where PA is the aqueous proton affinity of the molecule computed at the SMD(aq)/M06-L/MIDI!//M06-L/MIDI! level of electronic structure theory, SA is the total surface area of the molecule in its conjugate base form, and c1 and c2 have values of -3.912 × 10⁻² mol kcal⁻¹ and 3.682 × 10⁻³ Å⁻². On a 66-molecule training set, this regression exhibits a multiple R value of 0.791 with p values for the intercept, c1, and c2 of 1.4 × 10⁻³, 4.3 × 10⁻¹⁰, and 2.5 × 10⁻⁶, respectively. Application of this regression to an 11-molecule test set provides a good correlation of prediction with experiment (R = 0.905), albeit with a systematic underestimation of about 0.2 log units. This method may prove useful for semiquantitative analysis of drug metabolites for which MS response factors or authentic standards are not readily available.
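Applying the reported regression is a one-liner; the PA and SA inputs below are made-up values, not data from the paper.

```python
# Coefficients as reported in the abstract.
intercept = -4.824
c1 = -3.912e-2   # mol kcal^-1, multiplies the aqueous proton affinity PA
c2 = 3.682e-3    # per square angstrom, multiplies the total surface area SA

def log10_ion_counts(pa_kcal_per_mol, sa_angstrom2):
    """Predicted log10(MS ion counts) from PA and SA via the fitted regression."""
    return intercept + c1 * pa_kcal_per_mol + c2 * sa_angstrom2

# Hypothetical molecule: PA = -270 kcal/mol, SA = 450 A^2 (illustrative inputs).
example = log10_ion_counts(-270.0, 450.0)
```

Note the sign convention: a more negative PA (stronger aqueous basicity on this scale) raises the predicted response through the negative c1.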
NASA Astrophysics Data System (ADS)
Błaszczak, Barbara
2018-01-01
The paper reports the results of measurements of water-soluble ions and carbonaceous matter content in fine particulate matter (PM2.5), as well as the contributions of major sources to PM2.5. Daily PM2.5 samples were collected during the heating and non-heating seasons of the year 2013 at three different locations in Poland: Szczecin (urban background), Trzebinia (urban background) and Złoty Potok (regional background). The concentrations of PM2.5 and its related components exhibited clear spatiotemporal variability, with higher levels during the heating period. The share of total carbon (TC) in PM2.5 exceeded 40% and was primarily determined by fluctuations in the share of OC. Sulfate (SO₄²⁻), nitrate (NO₃⁻) and ammonium (NH₄⁺) dominated the ionic composition of PM2.5 and together accounted for 34% (Szczecin), 30% (Trzebinia) and 18% (Złoty Potok) of the PM2.5 mass. Source apportionment analysis, performed with the PCA-MLRA model (Principal Component Analysis - Multilinear Regression Analysis), revealed that secondary aerosol, whose presence is related to the oxidation of gaseous precursors emitted from fuel combustion and biomass burning, had the largest contribution to the observed PM2.5 concentrations. In addition, a contribution from traffic sources, together with road dust resuspension, was observed. The share of natural sources (sea spray, crustal dust) was generally lower.
On the effect of networks of cycle-tracks on the risk of cycling. The case of Seville.
Marqués, R; Hernández-Herrador, V
2017-05-01
We analyze the evolution of the risk of cycling in Seville before and after the implementation of a network of segregated cycle tracks in the city. Specifically, we study the evolution of the risk for cyclists of being involved in a collision with a motor vehicle, using data reported by the traffic police over the period 2000-2013, i.e. seven years before and after the network was built. A sudden drop in this risk was observed after the implementation of the network of bikeways. We study, through a multilinear regression analysis, the evolution of the risk by means of explanatory variables representing changes in the built environment, specifically the length of the bikeways and a stepwise jump variable taking the values 0/1 before/after the network was implemented. We found that this last variable has high explanatory power, even higher than the length of the network, suggesting that networking the bikeways has a substantial effect on cycling safety by itself, beyond the mere increase in the length of the bikeways. We also analyze safety in numbers through a non-linear regression analysis. Our results fully agree qualitatively and quantitatively with the results previously reported by Jacobsen (2003), thus providing an independent confirmation of Jacobsen's results. Finally, the mutual causal relationships between the increase in safety, the increase in the number of cyclists and the presence of the network of bikeways are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
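The regression design described here, a continuous bikeway-length variable plus a 0/1 step variable for network completion, can be sketched with ordinary least squares; the figures below are synthetic illustrations, not the Seville data:

```python
import numpy as np

# Illustrative yearly data (NOT the Seville dataset): a cycling risk index,
# bikeway length (km), and a 0/1 jump variable after network completion.
years = np.arange(2000, 2014)
length = np.array([0, 0, 0, 0, 0, 0, 0, 60, 80, 100, 110, 115, 118, 120], float)
jump = (years >= 2007).astype(float)
rng = np.random.default_rng(0)
risk = 10.0 - 0.01 * length - 4.0 * jump + rng.normal(0, 0.2, len(years))

# Multilinear regression: risk ~ intercept + length + jump
X = np.column_stack([np.ones_like(length), length, jump])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
print(coef)  # [intercept, length effect, jump effect]
```

With data of this shape, the fitted jump coefficient captures the sudden drop in risk separately from the gradual length effect.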
Jackman, Patrick; Sun, Da-Wen; Elmasry, Gamal
2012-08-01
A new algorithm for the conversion of device dependent RGB colour data into device independent L*a*b* colour data without introducing noticeable error has been developed. By combining a linear colour space transform and advanced multiple regression methodologies it was possible to predict L*a*b* colour data with less than 2.2 colour units of error (CIE 1976). By transforming the red, green and blue colour components into new variables that better reflect the structure of the L*a*b* colour space, a low colour calibration error was immediately achieved (ΔE(CAL) = 14.1). Application of a range of regression models on the data further reduced the colour calibration error substantially (multilinear regression ΔE(CAL) = 5.4; response surface ΔE(CAL) = 2.9; PLSR ΔE(CAL) = 2.6; LASSO regression ΔE(CAL) = 2.1). Only the PLSR models deteriorated substantially under cross validation. The algorithm is adaptable and can be easily recalibrated to any working computer vision system. The algorithm was tested on a typical working laboratory computer vision system and delivered only a very marginal loss of colour information ΔE(CAL) = 2.35. Colour features derived on this system were able to safely discriminate between three classes of ham with 100% correct classification whereas colour features measured on a conventional colourimeter were not. Copyright © 2012 Elsevier Ltd. All rights reserved.
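A minimal sketch of the multilinear-regression stage of such a colour calibration, using synthetic patches rather than the paper's transformed variables and measured data:

```python
import numpy as np

# Toy calibration: map device-dependent RGB to L*a*b* by multilinear
# regression. Synthetic, exactly linear data only -- a real calibration
# uses measured colour patches and the transformed variables of the paper.
rng = np.random.default_rng(1)
rgb = rng.uniform(0, 255, size=(50, 3))
true_M = np.array([[0.3, 0.2, 0.1], [0.1, -0.2, 0.1], [0.05, 0.1, -0.3]])
lab = rgb @ true_M.T + np.array([5.0, 0.0, 0.0])   # linear ground truth

X = np.column_stack([np.ones(len(rgb)), rgb])       # add intercept column
coef, *_ = np.linalg.lstsq(X, lab, rcond=None)      # (4, 3) coefficients
pred = X @ coef
delta_e = np.sqrt(((pred - lab) ** 2).sum(axis=1))  # CIE 1976 colour error
print(delta_e.max())
```

On real data the residual ΔE would not be near zero; the paper reduces it further with response-surface, PLSR and LASSO models.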
Influence of Solar Variability on the North Atlantic / European Sector.
NASA Astrophysics Data System (ADS)
Gray, L. J.
2016-12-01
The 11-year solar cycle signal in December-January-February averaged mean-sea-level pressure and Atlantic/European blocking frequency is examined using multilinear regression with indices to represent variability associated with the solar cycle, volcanic eruptions, the El Niño-Southern Oscillation (ENSO) and the Atlantic Multidecadal Oscillation (AMO). Results from a previous 11-year solar cycle signal study of the period 1870-2010 (140 years; 13 solar cycles) that suggested a 3-4 year lagged signal in SLP over the Atlantic are confirmed by analysis of a much longer reconstructed dataset for the period 1660-2010 (350 years; 32 solar cycles). Apparent discrepancies between earlier studies are resolved and stem primarily from the lagged nature of the response and differences between early- and late-winter responses. Analysis of the separate winter months provides supporting evidence for two mechanisms of influence, one operating via the atmosphere that maximises in late winter at 0-2 year lags and one via the mixed-layer ocean that maximises in early winter at 3-4 year lags. Corresponding analysis of DJF-averaged Atlantic/European blocking frequency shows a highly statistically significant signal at 1-year lag that originates primarily from the late-winter response. The 11-year solar signal in DJF blocking frequency is compared with other known influences from ENSO and the AMO and is found to be as large in amplitude and to have a larger region of statistical significance.
NASA Astrophysics Data System (ADS)
Herrero, I.; Ezcurra, A.; Areitio, J.; Diaz-Argandoña, J.; Ibarra-Berastegi, G.; Saenz, J.
2013-11-01
Storms developed under local instability conditions are studied in the Spanish Basque region with the aim of establishing precipitation-lightning relationships. Such situations may, in some cases, produce flash floods. The data used correspond to daily rain depth (mm) and the number of CG flashes in the area. Rain and lightning are found to be weakly correlated on a daily basis, a fact that seems related to the existence of opposite gradients in their geographical distributions. Rain anomalies, defined as the difference between observed and estimated rain depth based on CG flashes, are analysed by the PCA method. Results show a first EOF, explaining 50% of the variability, that linearly relates the rain anomalies observed each day and confirms their spatial structure. Based on these results, a multilinear expression has been developed to estimate the rain accumulated daily in the network from the CG flashes registered in the area. Moreover, accumulated and maximum rain values are found to be strongly correlated, making the multilinear expression a useful tool for estimating maximum precipitation during these kinds of storms.
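A multilinear expression of this kind, daily rain depth regressed on CG flash counts, can be sketched as follows; the station counts, coefficients and noise level are invented for illustration:

```python
import numpy as np

# Illustrative only: estimate network-accumulated daily rain depth (mm)
# from cloud-to-ground (CG) flash counts at three hypothetical stations.
rng = np.random.default_rng(3)
flashes = rng.poisson(lam=[20, 35, 50], size=(100, 3)).astype(float)
rain = 2.0 + flashes @ np.array([0.05, 0.08, 0.03]) + rng.normal(0, 0.2, 100)

# Multilinear regression of daily rain on the flash counts
X = np.column_stack([np.ones(100), flashes])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
est = X @ coef
print(np.corrcoef(est, rain)[0, 1])  # fit quality of the expression
```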
NASA Astrophysics Data System (ADS)
Vaglio Laurin, Gaia; Puletti, Nicola; Chen, Qi; Corona, Piermaria; Papale, Dario; Valentini, Riccardo
2016-10-01
Estimates of forest aboveground biomass are fundamental for carbon monitoring and accounting; delivering information at very high spatial resolution is especially valuable for local management, conservation and selective logging purposes. In tropical areas, hosting large biomass and biodiversity resources which are often threatened by unsustainable anthropogenic pressures, frequent forest resources monitoring is needed. Lidar is a powerful tool to estimate aboveground biomass at fine resolution; however its application in tropical forests has been limited, with high variability in the accuracy of results. Lidar pulses scan the forest vertical profile, and can provide structure information which is also linked to biodiversity. In the last decade the remote sensing of biodiversity has received great attention, but few studies focused on the use of lidar for assessing tree species richness in tropical forests. This research aims at estimating aboveground biomass and tree species richness using discrete return airborne lidar in Ghana forests. We tested an advanced statistical technique, Multivariate Adaptive Regression Splines (MARS), which does not require assumptions on data distribution or on the relationships between variables, being suitable for studying ecological variables. We compared the MARS regression results with those obtained by multilinear regression and found that both algorithms were effective, but MARS provided higher accuracy both for biomass (R² = 0.72) and for species richness (R² = 0.64). We also noted a strong correlation between biodiversity and biomass field values. Even if the forest areas under analysis are limited in extent and represent peculiar ecosystems, the preliminary indications produced by our study suggest that instruments such as lidar, specifically useful for pinpointing forest structure, can also be exploited to support tree species richness assessment.
Nonparametric regression applied to quantitative structure-activity relationships
Constans; Hirst
2000-03-01
Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
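The Nadaraya-Watson regressor, the simplest of the kernel methods mentioned, can be sketched in a few lines (Gaussian kernel, univariate case; the bandwidth value is an arbitrary illustrative choice):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson kernel regressor with a Gaussian kernel:
    a locally weighted average of the training responses."""
    d = x_query[:, None] - x_train[None, :]      # pairwise differences
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)
y_hat = nadaraya_watson(x, y, np.array([np.pi / 2]), bandwidth=0.2)
print(y_hat)  # approximately sin(pi/2) = 1, slightly smoothed down
```

The local linear and shifted variants used in the paper modify the weighting scheme to reduce the boundary and curvature bias of this basic estimator.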
Evaluation of globally available precipitation data products as input for water balance models
NASA Astrophysics Data System (ADS)
Lebrenz, H.; Bárdossy, A.
2009-04-01
The subject of this study is the evaluation of globally available precipitation data products, which are intended to be used as input variables for water balance models in ungauged basins. The selected data sources are a) the Global Precipitation Climatology Centre (GPCC), b) the Global Precipitation Climatology Project (GPCP) and c) the Climate Research Unit (CRU), resulting in twelve globally available data products. The data products are based on different underlying databases and derivation routines, and have varying resolutions in time and space. For validation purposes, the ground data from South Africa were screened for homogeneity and consistency by various tests, and outlier detection using multi-linear regression was performed. External Drift Kriging was subsequently applied to the ground data, and the resulting precipitation arrays were compared to the different products with respect to quantity and variance.
On the concept of sloped motion for free-floating wave energy converters.
Payne, Grégory S; Pascal, Rémy; Vaillant, Guillaume
2015-10-08
A free-floating wave energy converter (WEC) concept whose power take-off (PTO) system reacts against water inertia is investigated herein. The main focus is the impact of inclining the PTO direction on the system performance. The study is based on a numerical model whose formulation is first derived in detail. Hydrodynamics coefficients are obtained using the linear boundary element method package WAMIT. Verification of the model is provided prior to its use for a PTO parametric study and a multi-objective optimization based on a multi-linear regression method. It is found that inclining the direction of the PTO at around 50° to the vertical is highly beneficial for the WEC performance in that it provides a high capture width ratio over a broad region of the wave period range.
Gonzalez, J; Marchand-Geneste, N; Giraudel, J L; Shimada, T
2012-01-01
To obtain chemical clues on the process of bioactivation by cytochromes P450 1A1 and 1B1, some QSAR studies were carried out based on cellular experiments of the metabolic activation of polycyclic aromatic hydrocarbons and heterocyclic aromatic compounds by those enzymes. Firstly, the 3D structures of cytochromes 1A1 and 1B1 were built using homology modelling with a cytochrome 1A2 template. Using these structures, 32 ligands including heterocyclic aromatic compounds, polycyclic aromatic hydrocarbons and corresponding diols, were docked with LigandFit and CDOCKER algorithms. Binding mode analysis highlighted the importance of hydrophobic interactions and the hydrogen bonding network between cytochrome amino acids and docked molecules. Finally, for each enzyme, multilinear regression and artificial neural network QSAR models were developed and compared. These statistical models highlighted the importance of electronic, structural and energetic descriptors in the metabolic activation process, and could be used for virtual screening of ligand databases. In the case of P450 1A1, the best model was obtained with artificial neural network analysis and gave an r² of 0.66 and an external prediction [Formula: see text] of 0.73. Concerning P450 1B1, artificial neural network analysis gave a much more robust model, associated with an r² value of 0.73 and an external prediction [Formula: see text] of 0.59.
Modes of hurricane activity variability in the eastern Pacific: Implications for the 2016 season
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Jin, Fei-Fei; England, Matthew H.; Lin, I. I.
2016-11-01
A gridded product of accumulated cyclone energy (ACE) in the eastern Pacific is constructed to assess the dominant mode of tropical cyclone (TC) activity variability. Results of an empirical orthogonal function decomposition and regression analysis of environmental variables indicate that the two dominant modes of ACE variability (40% of the total variance) are related to different flavors of the El Niño-Southern Oscillation (ENSO). The first mode, more active during the later part of the hurricane season (September-November), is linked to the eastern Pacific El Niño through the delayed oceanic control associated with the recharge-discharge mechanism. The second mode, dominant in the early months of the hurricane season, is related to the central Pacific El Niño mode and the associated changes in atmospheric variability. A multilinear regression forecast model of the dominant principal components of ACE variability is then constructed. The wintertime subsurface state of the eastern equatorial Pacific (characterizing ENSO heat discharge), the east-west tilt of the thermocline (describing ENSO phase transition), the anomalous ocean surface conditions in the TC region in spring (portraying atmospheric changes induced by persistence of local surface anomalies), and the intraseasonal atmospheric variability in the western Pacific are found to be good predictors of TC activity. Results complement NOAA's official forecast by providing additional spatial and temporal information. They indicate a more active 2016 season (~2 times the ACE mean) with a spatial expansion into the central Pacific associated with the heat discharge from the 2015/2016 El Niño.
Sparse alignment for robust tensor learning.
Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming
2014-10-01
Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.
Sharma, Ashok K; Srivastava, Gopal N; Roy, Ankita; Sharma, Vineet K
2017-01-01
The experimental methods for the prediction of molecular toxicity are tedious and time-consuming tasks. Thus, the computational approaches could be used to develop alternative methods for toxicity prediction. We have developed a tool for the prediction of molecular toxicity along with the aqueous solubility and permeability of any molecule/metabolite. Using a comprehensive and curated set of toxin molecules as a training set, the different chemical and structural based features such as descriptors and fingerprints were exploited for feature selection, optimization and development of machine learning based classification and regression models. The compositional differences in the distribution of atoms were apparent between toxins and non-toxins, and hence, the molecular features were used for the classification and regression. On 10-fold cross-validation, the descriptor-based, fingerprint-based and hybrid-based classification models showed similar accuracy (93%) and Matthews correlation coefficient (0.84). The performances of all the three models were comparable (Matthews correlation coefficient = 0.84-0.87) on the blind dataset. In addition, the regression-based models using descriptors as input features were also compared and evaluated on the blind dataset. The random forest based regression model for the prediction of solubility performed better (R² = 0.84) than the multi-linear regression (MLR) and partial least square regression (PLSR) models, whereas the partial least squares based regression model for the prediction of permeability (caco-2) performed better (R² = 0.68) in comparison to the random forest and MLR based regression models. The performance of the final classification and regression models was evaluated using the two validation datasets including the known toxins and commonly used constituents of health products, which attests to its accuracy.
The ToxiM web server would be a highly useful and reliable tool for the prediction of toxicity, solubility, and permeability of small molecules.
Potmischil, Francisc; Duddeck, Helmut; Nicolescu, Alina; Deleanu, Calin
2007-03-01
The ¹⁵N chemical shifts of 13 N-methylpiperidine-derived mono-, bi- and tricycloaliphatic tertiary amines, their methiodides and their N-epimeric pairs of N-oxides were measured, and the contributions of specific structural parameters to the chemical shifts were determined by multilinear regression analysis. Within the examined compounds, the effects of N-oxidation upon the ¹⁵N chemical shifts of the amines vary from +56 ppm to +90 ppm (deshielding), of which approx. +67.7 ppm is due to the inductive effect of the incoming N⁺-O⁻ oxygen atom, whereas the rest is due to the additive shift effects of the various C-alkyl substituents of the piperidine ring. The effects of quaternization vary from -3.1 ppm to +29.3 ppm, of which approx. +8.9 ppm is due to the inductive effect of the incoming N⁺-CH₃ methyl group, and the rest is due to the additive shift effects of the various C-alkyl substituents of the piperidine ring. The shift effects of the C-alkyl substituents in the amines, the N-oxides and the methiodides are discussed. Copyright (c) 2007 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Masand, Vijay H.; El-Sayed, Nahed N. E.; Bambole, Mukesh U.; Quazi, Syed A.
2018-04-01
Multiple discrete quantitative structure-activity relationship (QSAR) models were constructed for the anticancer activity of α,β-unsaturated carbonyl-based compounds, oxime and oxime ether analogues with a variety of substituents such as -Br, -OH and -OMe at different positions. A large pool of descriptors was considered for QSAR model building. A genetic algorithm (GA), available in QSARINS-Chem, was executed to choose the optimum number and set of descriptors to create the multi-linear regression equations for a dataset of sixty-nine compounds. The newly developed five-parametric models were subjected to exhaustive internal and external validation along with Y-scrambling using QSARINS-Chem, according to the OECD principles for QSAR model validation. The models were built using easily interpretable descriptors and accepted after confirming statistical robustness and high external predictive ability. The five-parametric models were found to have R² = 0.80 to 0.86, R²ex = 0.75 to 0.84, and CCCex = 0.85 to 0.90. The models indicate that the frequency of nitrogen and oxygen atoms separated by five bonds from each other and the internal electronic environment of the molecule correlate with the anticancer activity.
Robust stability of fractional order polynomials with complicated uncertainty structure
Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
Ju, Hyun-Bin; Kang, Eun-Chan; Jeon, Dong-Wook; Kim, Tae-Hyun; Moon, Jung-Joon; Kim, Sung-Jin; Choi, Ji-Min; Jung, Do-Un
2018-01-01
Objective: The objective of the present study is to analyze the prevalence of depression and anxiety following breast cancer surgery and to assess the factors that affect postoperative psychological symptoms. Methods: The Hamilton Rating Scale for Depression (HAM-D), Hamilton Anxiety Rating Scale (HAM-A), Body Image Scale (BIS), and Rosenberg Self Esteem Scale (RSES) were used to assess the psychological states of patients who had been diagnosed with and had undergone surgery for breast cancer. Blood concentrations of the stress markers adrenocorticotropic hormone, cortisol, arginine-vasopressin, and angiotensin-converting enzyme were measured. Pearson's correlation analysis and multilinear regression analysis were used to analyse the data. Results: At least mild depressive symptoms were noted in 50.5% of patients, while 42.4% of patients exhibited at least mild anxiety symptoms. HAM-D score was positively correlated with HAM-A (r=0.83, p<0.001) and BIS (r=0.29, p<0.001) scores and negatively correlated with RSES score (r=-0.41, p<0.001). HAM-A score was positively correlated with BIS score (r=0.32, p<0.001) and negatively correlated with RSES score (r=-0.27, p<0.001). There were no statistically significant associations between stress markers and depression/anxiety. Conclusion: Patients with breast cancer frequently exhibit postoperative depression and anxiety, which are related to low levels of self-esteem and distorted body image. PMID:29475233
Barmpalexis, Panagiotis; Grypioti, Agni; Eleftheriadis, Georgios K; Fatouros, Dimitris G
2018-02-01
In the present study, liquisolid formulations were developed for improving the dissolution profile of aprepitant (APT) in a solid dosage form. Experimental studies were complemented with artificial neural networks and genetic programming. Specifically, the type and concentration of liquid vehicle was evaluated through saturation-solubility studies, while the effect of the amount of viscosity increasing agent (HPMC), the type of wetting (Soluplus® vs. PVP) and solubilizing (Poloxamer®407 vs. Kolliphor®ELP) agents, and the ratio of solid coating (microcrystalline cellulose) to carrier (colloidal silicon dioxide) were evaluated based on in vitro drug release studies. The optimum liquisolid formulation exhibited improved dissolution characteristics compared to the marketed product Emend®. X-ray diffraction (XRD), scanning electron microscopy (SEM) and a novel method combining particle size analysis by dynamic light scattering (DLS) and HPLC, revealed that the increase in dissolution rate of APT in the optimum liquisolid formulation was due to the formation of stable APT nanocrystals. Differential scanning calorimetry (DSC) and attenuated total reflection FTIR spectroscopy (ATR-FTIR) revealed the presence of intermolecular interactions between APT and liquisolid formulation excipients. Multilinear regression analysis (MLR), artificial neural networks (ANNs), and genetic programming (GP) were used to correlate several formulation variables with dissolution profile parameters (Y15min and Y30min) using a full factorial experimental design. Results showed increased correlation efficacy for ANNs and GP (RMSE of 0.151 and 0.273, respectively) compared to MLR (RMSE = 0.413).
RFID Reader Antenna with Multi-Linear Polarization Diversity
NASA Technical Reports Server (NTRS)
Fink, Patrick; Lin, Greg; Ngo, Phong; Kennedy, Timothy; Rodriguez, Danny; Chu, Andrew; Broyan, James; Schmalholz, Donald
2018-01-01
This paper describes an RFID reader antenna that offers reduced polarization loss compared to that typically associated with reader-tag communications involving arbitrary relative orientation of the reader antenna and the tag.
Haiduc, Adrian Marius; van Duynhoven, John
2005-02-01
The porous properties of food materials are known to determine important macroscopic parameters such as water-holding capacity and texture. In conventional approaches, understanding is built from a long process of establishing macrostructure-property relations in a rational manner. Only recently were multivariate approaches introduced for the same purpose. The model systems used here are oil-in-water emulsions, stabilised by protein, which form complex structures consisting of fat droplets dispersed in a porous protein phase. NMR time-domain decay curves were recorded for emulsions with varied levels of fat, protein and water. Hardness, dry matter content and water drainage were determined by classical means and analysed for correlation with the NMR data using multivariate techniques. Partial least squares regression can calibrate and predict these properties directly from the continuous NMR exponential decays, yielding regression coefficients higher than 82%. However, the calibration coefficients themselves belong to the continuous exponential domain and do little to explain the connection between NMR data and emulsion properties. Transformation of the NMR decays into a discrete domain with non-negative least squares permits the use of multilinear regression (MLR) with the resulting amplitudes as predictors and hardness or water drainage as responses. The MLR coefficients show that hardness is highly correlated with the components that have T2 distributions of about 20 and 200 ms, whereas water drainage is correlated with components that have T2 distributions around 400 and 1800 ms. These T2 distributions very likely correlate with water populations present in pores with different sizes and/or wall mobility. The results for the emulsions studied demonstrate that NMR time-domain decays can be employed to predict properties and to provide insight into the underlying microstructural features.
D'Archivio, Angelo Antonio; Incani, Angela; Ruggieri, Fabrizio
2011-01-01
In this paper, we use a quantitative structure-retention relationship (QSRR) method to predict the retention times of polychlorinated biphenyls (PCBs) in comprehensive two-dimensional gas chromatography (GC×GC). We analyse GC×GC retention data taken from the literature by comparing the predictive capability of different regression methods. The various models are generated using 70 out of 209 PCB congeners in the calibration stage, while their predictive performance is evaluated on the remaining 139 compounds. The two-dimensional chromatogram is initially estimated by separately modelling the retention times of PCBs in the first and second columns (¹tR and ²tR, respectively). In particular, multilinear regression (MLR) combined with genetic algorithm (GA) variable selection is performed to extract two small subsets of predictors for ¹tR and ²tR from a large set of theoretical molecular descriptors provided by the popular software Dragon, which, after removal of highly correlated or almost constant variables, consists of 237 structure-related quantities. Based on GA-MLR analysis, a four-dimensional relationship modelling ¹tR and a five-dimensional relationship modelling ²tR are identified. Single-response partial least squares (PLS-1) regression is alternatively applied to independently model ¹tR and ²tR without the need for preliminary GA variable selection. Further, we explore the possibility of predicting the two-dimensional chromatogram of PCBs in a single calibration procedure by using a two-response PLS (PLS-2) model or a feed-forward artificial neural network (ANN) with two output neurons. In the first case, regression is carried out on the full set of 237 descriptors, while the variables previously selected by GA-MLR are initially considered as ANN inputs and subjected to a sensitivity analysis to remove the redundant ones.
Results show that PLS-1 regression exhibits noticeably better descriptive and predictive performance than the other investigated approaches. The determination coefficients for ¹tR and ²tR in calibration (0.9999 and 0.9993, respectively) and prediction (0.9987 and 0.9793, respectively) provided by PLS-1 demonstrate that the GC×GC behaviour of PCBs is properly modelled. In particular, the predicted two-dimensional GC×GC chromatogram of the 139 PCBs not involved in the calibration stage closely resembles the experimental one. Based on the above lines of evidence, the proposed approach ensures accurate simulation of the whole GC×GC chromatogram of PCBs while requiring experimental retention data for only one-third of the congeners, chosen as representative.
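The descriptor-selection objective behind GA-MLR can be illustrated on synthetic data. The sketch below substitutes an exhaustive search over descriptor pairs for the genetic algorithm, which is equivalent in spirit when the descriptor pool is tiny; all numbers are invented, and a real QSRR study would run a GA over hundreds of Dragon descriptors.

```python
from itertools import combinations

def ols(cols, y):
    """Least squares via normal equations with Gaussian elimination; cols includes the intercept."""
    k, n = len(cols), len(y)
    M = [[sum(cols[a][i] * cols[b][i] for i in range(n)) for b in range(k)]
         + [sum(cols[a][i] * y[i] for i in range(n))] for a in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k + 1):
                M[r][j] -= f * M[c][j]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (M[r][k] - sum(M[r][j] * beta[j] for j in range(r + 1, k))) / M[r][r]
    return beta

# Invented descriptor matrix (5 descriptors x 12 congeners) and retention times
D = [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
     [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8],
     [2, 7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5],
     [1, 4, 1, 4, 2, 1, 3, 5, 6, 2, 3, 7],
     [5, 0, 5, 1, 0, 2, 9, 3, 1, 4, 6, 9]]
t_r = [4.0 + 2.0 * a - 1.5 * b for a, b in zip(D[0], D[3])]  # depends only on descriptors 0 and 3

best, best_sse = None, float("inf")
for pair in combinations(range(5), 2):
    cols = [[1.0] * 12, D[pair[0]], D[pair[1]]]
    beta = ols(cols, t_r)
    sse = sum((t_r[i] - sum(beta[a] * cols[a][i] for a in range(3))) ** 2 for i in range(12))
    if sse < best_sse:
        best, best_sse = pair, sse
print(best)  # (0, 3): the truly informative descriptors
```

The fitness criterion here is the raw residual sum of squares; a GA implementation would additionally penalise subset size and use crossover/mutation instead of enumeration.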
Huang, Mengmeng; Wei, Yan; Wang, Jun; Zhang, Yu
2016-01-01
We used the support vector regression (SVR) approach to predict and unravel the reduction/promotion effects of characteristic flavonoids on acrylamide formation in a low-moisture Maillard reaction system. Results demonstrated reduction/promotion effects by flavonoids at addition levels of 1–10000 μmol/L. The maximal inhibition rates (51.7%, 68.8% and 26.1%) and promotion rates (57.7%, 178.8% and 27.5%) caused by flavones, flavonols and isoflavones were observed at addition levels of 100 μmol/L and 10000 μmol/L, respectively. The reduction/promotion effects were closely related to the change of trolox equivalent antioxidant capacity (ΔTEAC) and were well predicted by triple ΔTEAC measurements via SVR models (R: 0.633–0.900). Flavonols exhibit stronger effects on acrylamide formation than flavones and isoflavones, as well as their O-glycoside derivatives, which may be attributed to the number and position of phenolic and 3-enolic hydroxyls. The reduction/promotion effects were also well predicted using optimized quantitative structure-activity relationship (QSAR) descriptors and SVR models (R: 0.926–0.994). Compared to artificial neural network and multi-linear regression models, SVR models exhibited better fitting performance for both the TEAC-dependent and the QSAR-descriptor-dependent prediction tasks. These observations demonstrate that SVR models are competent tools for predicting flavonoid effects and can inform the future use of natural antioxidants for decreasing acrylamide formation. PMID:27586851
NASA Astrophysics Data System (ADS)
Renteln, Paul
2013-11-01
Preface; 1. Linear algebra; 2. Multilinear algebra; 3. Differentiation on manifolds; 4. Homotopy and de Rham cohomology; 5. Elementary homology theory; 6. Integration on manifolds; 7. Vector bundles; 8. Geometric manifolds; 9. The degree of a smooth map; Appendixes; References; Index.
NASA Astrophysics Data System (ADS)
Portafaix, T.; Bencherif, H.; Godin-Beekmann, S.; Begue, N.; Culot, A.
2014-12-01
The subtropical dynamical barrier, located in the lower stratosphere on the edge of the Tropical Stratospheric Reservoir (TSR), controls and limits exchanges between the tropical and extratropical lower stratosphere. The geographical position of stations located near the edge of the TSR is interesting since such stations are regularly affected by air-mass filaments originating from the TSR or from mid-latitudes. During such filamentary events, profiles of chemical species are modified according to the origin and the height of the air mass. These perturbations, called "laminae", are generally associated with quasi-horizontal transport events. Several SHADOZ (Southern Hemisphere ADditional OZonesondes) stations from around the southern tropics were selected in order to study the variability of laminae. Ozonesonde profiles were analyzed to detect laminae using a statistical standard-deviation method relative to the climatology. The time series of laminae were investigated with a multilinear regression model in order to estimate the influence of several proxies on laminae variability from 1998 to 2013. Different forcings such as the QBO, ENSO or IOD were applied. The first objective is to better quantify isentropic transport as a function of station location and the influence of the QBO on laminae occurrences. Finally, case studies were conducted with the high-resolution advection model MIMOSA. These allow us to identify the air-mass origin and to highlight preferred pathways of meridional transport between the tropics and mid-latitudes.
Amazon rainforest modulation of water security in the Pantanal wetland.
Bergier, Ivan; Assine, Mario L; McGlue, Michael M; Alho, Cleber J R; Silva, Aguinaldo; Guerreiro, Renato L; Carvalho, João C
2018-04-01
The Pantanal is a large wetland mainly located in Brazil, whose natural resources are important for local, regional and global economies. Many human activities in the region rely on the Pantanal's ecosystem services, including cattle breeding for beef production, professional and touristic fishing, and contemplative tourism. The conservation of the natural resources and ecosystem services provided by the Pantanal wetland must consider strategies for water security. We explored precipitation data from 1926 to 2016 provided by a regional network of rain gauge stations managed by the Brazilian Government. A time series obtained by dividing the monthly accumulated rainfall by the number of rainy days indicated a positive trend in the mean rate of rainy days (mm/day) for the studied period in all seasons. We assessed the linkage of the Pantanal's rainfall patterns with large-scale climate data in South America provided by NOAA/ESRL from 1949 to 2016. Analysis of spatiotemporal correlation maps indicated that, in agreement with previous studies, the Amazon biome plays a significant role in controlling summer rainfall in the Pantanal. Based on these spatiotemporal maps, a multi-linear regression model was built to predict the mean rate of summer rainy days in the Pantanal by 2100, relative to the 1961-1990 mean reference. We found that the deforestation of the Amazon rainforest has profound implications for water security and the conservation of the Pantanal's ecosystem services. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Peng, Xing; Shi, Guo-Liang; Gao, Jian; Liu, Jia-Yuan; HuangFu, Yan-Qi; Ma, Tong; Wang, Hai-Ting; Zhang, Yue-Chong; Wang, Han; Li, Hui; Ivey, Cesunica E.; Feng, Yin-Chang
2016-08-01
With highly time-resolved data on particulate matter (PM) and its chemical species, understanding source patterns and chemical characteristics is critical for establishing PM controls. In this work, PM2.5 and chemical species were measured by corresponding online instruments with 1-h time resolution in Beijing. The Multilinear Engine 2 (ME2) model was applied to explore the sources, and four sources (vehicle emission, crustal dust, secondary formation and coal combustion) were identified. To investigate the sensitivity of time resolution on the source contributions and chemical characteristics, ME2 was run at four time resolutions (1-h, 2-h, 4-h, and 8-h). Crustal dust and coal combustion display large variation across the four time-resolution runs, with their contributions ranging from 6.7 to 10.4 μg m-3 and from 6.4 to 12.2 μg m-3, respectively. The contributions of vehicle emission and secondary formation range from 7.5 to 10.5 and from 14.7 to 16.7 μg m-3, respectively. The sensitivity analyses were conducted using principal component analysis plots (PCA-plot), the coefficient of divergence (CD), the average absolute error (AAE) and correlation coefficients. Across the four time-resolution runs, the source contributions and profiles of crustal dust and coal combustion were less stable than those of the other source categories, possibly due to the lack of key markers for crustal dust and coal combustion (e.g. Si, Al). On the other hand, vehicle emission and crustal dust were more sensitive to the time series of source contributions at different time resolutions. Findings in this study can improve our knowledge of source contributions and chemical characteristics at different time resolutions.
On a Family of Multivariate Modified Humbert Polynomials
Aktaş, Rabia; Erkuş-Duman, Esra
2013-01-01
This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Changing response of the North Atlantic/European winter climate to the 11 year solar cycle
NASA Astrophysics Data System (ADS)
Ma, Hedi; Chen, Haishan; Gray, Lesley; Zhou, Liming; Li, Xing; Wang, Ruili; Zhu, Siguang
2018-03-01
Recent studies have presented conflicting results regarding the 11 year solar cycle (SC) influences on winter climate over the North Atlantic/European region. Analyses of only the most recent decades suggest a synchronized North Atlantic Oscillation (NAO)-like response pattern to the SC. Analyses of long-term climate data sets dating back to the late 19th century, however, suggest a mean sea level pressure (mslp) response that lags the SC by 2-4 years in the southern node of the NAO (i.e. Azores region). To understand the conflicting nature and cause of these time dependencies in the SC surface response, the present study employs a lead/lag multi-linear regression technique with a sliding window of 44 years over the period 1751-2016. Results confirm previous analyses, in which the average response for the whole time period features a statistically significant 2-4 year lagged mslp response centered over the Azores region. Overall, the lagged nature of Azores mslp response is generally consistent in time. Stronger and statistically significant SC signals tend to appear in the periods when the SC forcing amplitudes are relatively larger. Individual month analysis indicates the consistent lagged response in December-January-February average arises primarily from early winter months (i.e. December and January), which has been associated with ocean feedback processes that involve reinforcement by anomalies from the previous winter. Additional analysis suggests that the synchronous NAO-like response in recent decades arises primarily from late winter (February), possibly reflecting a result of strong internal noise.
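A lead/lag multi-linear regression of the kind described can be sketched with a single proxy on synthetic data. Everything here is invented for illustration (a sinusoidal "solar" proxy and a response lagging it by 3 steps); the actual study regresses mslp on several proxies over 44-year sliding windows.

```python
import math

def ols_fit(x, y):
    """Univariate least squares: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def best_lag(series, proxy, max_lag):
    """Regress series[t] on proxy[t - lag] for each candidate lag; return the lag with minimum SSE."""
    sse = {}
    for lag in range(max_lag + 1):
        x = proxy[:len(proxy) - lag] if lag else list(proxy)
        y = series[lag:]
        a, b = ols_fit(x, y)
        sse[lag] = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return min(sse, key=sse.get)

# Synthetic 'solar cycle' proxy and a response lagging it by 3 steps
proxy = [math.sin(0.3 * t) for t in range(60)]
series = [2.0 + 0.8 * math.sin(0.3 * (t - 3)) for t in range(60)]
print(best_lag(series, proxy, 8))  # 3
```

The same scan, repeated over sliding windows, would reveal whether the preferred lag is stable in time, as the abstract investigates.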
Cruz Minguillón, María; Querol, Xavier; Alastuey, Andrés; Monfort, Eliseo; Vicente Miró, José
2007-10-01
Principal component analysis (PCA) coupled with multilinear regression analysis (MLRA) was applied to PM10 speciation data series (2002-2005) from four sampling sites in a highly industrialised area (ceramic production) in the process of implementing emission abatement technology. Five common factors with similar chemical profiles were identified at all the sites: mineral, regional background (influenced by the industrial estate located on the coast: an oil refinery and a power plant), sea spray, industrial 1 (manufacture and use of glaze components, including frit fusion) and road traffic. The contribution of the regional background differs slightly from site to site. The mineral factor, attributed to the sum of several sources (mainly the ceramic industry, but also with minor contributions from soil resuspension and African dust outbreaks), contributes between 9 and 11 μg m-3 at all the sites. The source industrial 1 entails an increase in PM10 levels of between 4 and 5 μg m-3 at the urban sites and 2 μg m-3 at the suburban background site. However, after 2004, this source contributed less than 2 μg m-3 at most sites, whereas the remaining sources did not show an upward or downward trend over the study period. This gradual decrease in the contribution of source industrial 1 coincides with the implementation of PM abatement technology in the frit fusion kilns of the area. This relationship enables us to assess the efficiency of the implementation of environmental technologies in terms of their impact on air quality.
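The PCA + MLRA pipeline, extracting factor scores and then regressing measured PM on them, can be sketched in miniature. The data below are generated from a single invented source profile, so the first principal component should recover that profile exactly; the real study handled five factors and full PM10 speciation data.

```python
import math

def first_pc(X):
    """Dominant eigenvector of the column covariance of X (power iteration) and the sample scores."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1) for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(100):  # power iteration
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    scores = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]
    return v, scores

# Invented single-source data: species concentrations = source strength x fixed profile
profile = [0.5, 0.3, 0.2]
strengths = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
X = [[s * pj for pj in profile] for s in strengths]
pm10 = [sum(row) for row in X]           # total PM10 per sample

v, scores = first_pc(X)
unit = [pj / math.sqrt(sum(q * q for q in profile)) for pj in profile]
alignment = abs(sum(a * b for a, b in zip(v, unit)))   # ≈ 1: PC1 recovers the profile

# MLRA step: regress PM10 on the factor scores
n = len(pm10)
ms, mp = sum(scores) / n, sum(pm10) / n
b = sum((s - ms) * (y - mp) for s, y in zip(scores, pm10)) / sum((s - ms) ** 2 for s in scores)
a = mp - b * ms
max_err = max(abs(y - (a + b * s)) for s, y in zip(scores, pm10))
```

With noise-free rank-one data the regression of PM10 on the scores is exact; with real data the regression coefficients apportion PM10 mass among several factors.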
NASA Astrophysics Data System (ADS)
Avhad, Kiran C.; Patil, Dinesh S.; Chitrambalam, S.; Sreenath, M. C.; Joe, I. Hubert; Sekar, Nagaiyan
2018-05-01
Four new coumarin hybrid styryl dyes were synthesized by condensing 4-(7-(diethylamino)-2-oxo-2H-chromen-3-yl)-2-morpholinothiazole-5-carbaldehyde with dicyanovinylene-containing active methylene intermediates, and their linear and non-linear optical properties were studied. The dye bearing the dicyanovinylene-isophorone acceptor displayed a large Stokes shift of 3702-4795 cm-1 from non-polar to polar solvents. The dyes exhibit good charge-transfer characteristics and positive emission solvatochromism (∼50-72 nm) from non-polar to polar solvents, which is well supported by multi-linear regression analysis. A viscosity-induced enhancement study in an ethanol/polyethylene glycol-400 system shows a 2.71-6.78-fold increase in emission intensity. The intramolecular and twisted-intramolecular charge transfer (ICT-TICT) characteristics were established using emission solvatochromism, polarity plots, generalised Mulliken-Hush (GMH) analysis and optimized geometry. The dye with the highest charge-transfer dipole moment possesses the largest two-photon absorption cross-section (KK-1 = 165-207 GM), established using the theoretical two-level model. The NLO properties were investigated employing solvatochromic and computational methods and were found to be directly proportional to the polarity of the solvent. Z-scan results reveal that the dyes KK-1 and KK-2 possess reverse saturable absorption behaviour, whereas KK-3 and KK-4 show saturable absorption behaviour. From the experimental and theoretical data, these coumarin-thiazole hybrid dyes can be considered promising candidates for FMR and as NLOphores.
Topological Signatures for Population Admixture
USDA-ARS?s Scientific Manuscript database
Topological Signatures for Population AdmixtureDeniz Yorukoglu1, Filippo Utro1, David Kuhn2, Saugata Basu3 and Laxmi Parida1* Abstract Background: As populations with multi-linear transmission (i.e., mixing of genetic material from two parents, say) evolve over generations, the genetic transmission...
A road map for multi-way calibration models.
Escandar, Graciela M; Olivieri, Alejandro C
2017-08-07
A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model according to the properties exhibited by the multi-way data. In spite of the complexity of multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.
NASA Astrophysics Data System (ADS)
Reyes-Villegas, Ernesto; Green, David C.; Priestman, Max; Canonaco, Francesco; Coe, Hugh; Prévôt, André S. H.; Allan, James D.
2016-12-01
The multilinear engine (ME-2) factorization tool is being widely used following the recent development of the Source Finder (SoFi) interface at the Paul Scherrer Institute. However, the success of this tool, when using the a-value approach, largely depends on the inputs (i.e. target profiles) applied, as well as on the experience of the user. A strategy to explore the solution space is proposed, in which the solution that best describes the organic aerosol (OA) sources is determined according to the systematic application of predefined statistical tests. This includes trilinear regression, which proves to be a useful tool for comparing different ME-2 solutions. Aerosol Chemical Speciation Monitor (ACSM) measurements were carried out at the urban background site of North Kensington, London from March to December 2013, where for the first time the behaviour of OA sources and their possible environmental implications were studied using an ACSM. Five OA sources were identified: biomass burning OA (BBOA), hydrocarbon-like OA (HOA), cooking OA (COA), semivolatile oxygenated OA (SVOOA) and low-volatility oxygenated OA (LVOOA). ME-2 analysis of the seasonal data sets (spring, summer and autumn) showed a higher variability in the OA sources that was not detected in the combined March-December data set; this variability was explored with the f44:f43 and f44:f60 triangle plots, in which a high variation of SVOOA relative to LVOOA was observed in the f44:f43 analysis. Hence, it was possible to conclude that, when performing source apportionment on long-term measurements, important information may be lost, and such analysis should be done for short periods of time, such as seasonally. Further analysis of the atmospheric implications of these OA sources was carried out, identifying evidence of the possible contribution of heavy-duty diesel vehicles to air pollution during weekdays compared to those fuelled by petrol.
Multiaxial Cyclic Thermoplasticity Analysis with Besseling's Subvolume Method
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1983-01-01
A modification was formulated to Besseling's Subvolume Method to allow it to use multilinear stress-strain curves, which are temperature dependent, to perform cyclic thermoplasticity analyses. This method automatically reproduces certain aspects of real material behavior important in the analysis of Aircraft Gas Turbine Engine (AGTE) components, including the Bauschinger effect, cross-hardening, and memory. This constitutive equation was implemented in a finite element computer program called CYANIDE. Subsequently, classical time-dependent plasticity (creep) was added to the program. Since its inception, this program has been assessed against laboratory and component testing and engine experience. The ability of this program to simulate AGTE material response characteristics was verified by this experience, and its utility in providing data for life analyses was demonstrated. In this area of life analysis, the multiaxial thermoplasticity capabilities of the method have proved a match for actual AGTE life experience.
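The core ingredient, temperature-dependent multilinear (piecewise-linear) stress-strain curves, can be sketched as a lookup that interpolates first along each curve and then between the bracketing temperatures. The curve values below are invented for illustration and are not from the CYANIDE program.

```python
def interp(xs, ys, x):
    """Piecewise-linear interpolation with clamping at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            f = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + f * (ys[i] - ys[i - 1])

def stress(curves, strain, temp):
    """curves: {temperature: [(strain, stress), ...]} — multilinear curves at discrete temperatures.
    Interpolate along each bracketing curve, then linearly in temperature."""
    temps = sorted(curves)
    lo = max([t for t in temps if t <= temp], default=temps[0])
    hi = min([t for t in temps if t >= temp], default=temps[-1])
    def on_curve(t):
        pts = curves[t]
        return interp([p[0] for p in pts], [p[1] for p in pts], strain)
    if lo == hi:
        return on_curve(lo)
    f = (temp - lo) / (hi - lo)
    return on_curve(lo) + f * (on_curve(hi) - on_curve(lo))

# Hypothetical curves (strain, stress in MPa) at 20 °C and 500 °C
curves = {20.0: [(0.0, 0.0), (0.002, 400.0), (0.01, 450.0)],
          500.0: [(0.0, 0.0), (0.002, 300.0), (0.01, 330.0)]}
print(stress(curves, 0.002, 260.0))  # 350.0: midway between the two curves
```

A plasticity code would evaluate such curves per subvolume and per load increment; this sketch shows only the interpolation step.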
NASA Astrophysics Data System (ADS)
Jiang, Y.; Liu, J.-R.; Luo, Y.; Yang, Y.; Tian, F.; Lei, K.-C.
2015-11-01
Groundwater in Beijing has been excessively exploited for a long time, causing groundwater levels to decline continuously and land subsidence areas to expand, which has restrained sustainable economic and social development. Long-term studies show a good spatial and temporal correspondence between groundwater levels and land subsidence. To provide a scientific basis for subsequent land subsidence prevention and treatment, quantitative research on the relationship between groundwater level and settlement is necessary. Multi-linear regression models were set up using long monitoring series of layered water table and settlement data from the Tianzhu monitoring station. The results show that layered settlement is closely related to the water table, water level variation and amplitude, especially the water table. Finally, according to the threshold values in the land subsidence prevention and control plan of China (45, 30, 25 mm), the minimum allowable layered water level in this region when settlement reaches the threshold values is calculated to be between -18.448 and -10.082 m. The results provide a reasonable and operable groundwater level control target for rational adjustment of the groundwater exploitation horizon in the future.
Haftka, Joris J H; Parsons, John R; Govers, Harrie A J
2006-11-24
A gas chromatographic method using Kováts retention indices has been applied to determine the liquid vapour pressure (Pi), enthalpy of vaporization (ΔHi) and difference in heat capacity between the gas and liquid phases (ΔCi) for a group of polycyclic aromatic hydrocarbons (PAHs). This group consists of 19 unsubstituted, methylated and sulphur-containing PAHs. Differences in log Pi of -0.04 to +0.99 log units at 298.15 K were observed between experimental values and data from effusion and gas saturation studies. These differences in log Pi have been fitted with multilinear regression, resulting in a compound- and temperature-dependent correction. Over a temperature range from 273.15 to 423.15 K, differences in corrected log Pi for a training set (-0.07 to +0.03 log units) and a validation set (-0.17 to 0.19 log units) were within the calculated error ranges. The corrected vapour pressures also showed good agreement with other GC-determined vapour pressures (average -0.09 log units).
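The temperature extrapolation underlying ΔHi and ΔCi can be made concrete. Integrating d ln P/dT = ΔH(T)/(R T²) with a linearly temperature-dependent enthalpy, ΔH(T) = ΔH(T0) + ΔC·(T − T0), gives the expression implemented below. The numerical values in the example (a PAH-like ΔH of 70 kJ/mol, ΔC of −50 J/mol/K and P(298.15 K) = 10 Pa) are invented for illustration, not taken from the study.

```python
import math

R = 8.314462618  # gas constant, J mol^-1 K^-1

def ln_vapour_pressure(t, t0, ln_p0, dh0, dc):
    """ln P(T) from ln P(T0), vaporisation enthalpy dh0 = ΔH(T0) (J/mol) and
    dc = ΔC = Cp(gas) - Cp(liquid) (J/mol/K):
        ln P(T) = ln P(T0) + ΔH0/R (1/T0 - 1/T) + ΔC/R (ln(T/T0) + T0/T - 1)"""
    return (ln_p0
            + dh0 / R * (1.0 / t0 - 1.0 / t)
            + dc / R * (math.log(t / t0) + t0 / t - 1.0))

t0 = 298.15
ln_p0 = math.log(10.0)      # hypothetical 10 Pa at 298.15 K
lp_298 = ln_vapour_pressure(298.15, t0, ln_p0, 70000.0, -50.0)
lp_323 = ln_vapour_pressure(323.15, t0, ln_p0, 70000.0, -50.0)
```

At T = T0 the correction terms vanish, and for a positive vaporisation enthalpy the vapour pressure rises with temperature, as expected.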
Hinckson, Erica A; Badland, Hannah M
2011-01-01
In New Zealand, the School Travel Plan (STP) program was developed to increase school-related active travel rates and decrease traffic congestion. The plan was developed through collaboration among the school, community, and local council. The STP was tailored to each school's specific needs and incorporated educational initiatives, physical infrastructural changes in the vicinity of schools, and policy development. The purpose of this study was to determine the effectiveness of the STP program in changing school travel modes in children. Effectiveness was assessed by determining the difference between pre-STP and follow-up travel mode data in schools. The differences were assessed using multilinear regression analysis, including decile (measure of socioeconomic status), school roll at baseline, and STP year of implementation as predictors. Thirty-three elementary schools from the Auckland region participated in the study. School size ranged from 130 to 688 students. The final 2006 sample consisted of 13,631 students. On a set day (pre- and post-STP), students indicated their mode of transport to school and intended mode for returning home that day. Differences are reported as percentage points: there was an increase in active transport by 5.9% ± 6.8% when compared to baseline travel modes. School roll, STP year of implementation, and baseline values predicted engagement with active transport. Preliminary findings suggest that the STP program may be successful in creating mode shift changes to favor school-related active travel in elementary-school children.
Pinto, Susana; de Carvalho, Mamede
2017-02-01
Slow vital capacity (SVC) and forced vital capacity (FVC) are the most frequently used tests for evaluating respiratory function in amyotrophic lateral sclerosis (ALS). No previous study has determined their interchangeability. Our aim was to evaluate the SVC-FVC correlation in ALS. Consecutive definite/probable ALS and primary lateral sclerosis (PLS) patients (2000-2014) in whom respiratory tests were performed at baseline and 4-6 months later were included. All were evaluated with the revised ALS functional rating scale, the ALSFRS respiratory (R-subscore) and bulbar subscores, SVC, FVC, and maximal inspiratory (MIP) and expiratory (MEP) pressures. The SVC-FVC correlation was analysed by the Pearson product-moment correlation test. Paired t-tests compared baseline and follow-up values. Multilinear regression analysis modelled the relationship between the tested variables. We included 592 ALS (332 men, mean onset age 62.6 ± 11.8 years, mean disease duration 15.4 ± 15 months) and 19 PLS (11 men, median age 54 years, median disease duration 5.5 years) patients. SVC and FVC predicted values decreased by 2.15%/month and 2.08%/month, respectively. FVC and SVC were strongly correlated. Both were strongly correlated with MIP and MEP and moderately correlated with the R-subscore for the whole population and for spinal-onset patients, but only weakly correlated for bulbar-onset patients. FVC and SVC were strongly correlated and declined similarly. This correlation was preserved in bulbar-onset ALS and in spastic PLS patients.
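The correlation analysis at the heart of the study can be reproduced in outline with a plain Pearson product-moment computation. The paired SVC/FVC values below (as % of predicted) are invented placeholders, not the study's patient data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired measurements (% predicted)
svc = [95, 88, 76, 64, 59, 51, 45, 80, 70, 62]
fvc = [93, 90, 74, 66, 57, 50, 47, 78, 72, 60]
r = pearson_r(svc, fvc)
```

A near-unity r, as here, is what "strongly correlated" and interchangeable decline rates would look like in practice.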
Respiratory mechanics during sevoflurane anesthesia in children with and without asthma.
Habre, W; Scalfaro, P; Sims, C; Tiller, K; Sly, P D
1999-11-01
We studied lung function in children with and without asthma receiving anesthesia with sevoflurane. Fifty-two children had anesthesia induced with sevoflurane (up to 8%) in a mixture of 50% nitrous oxide in oxygen, then maintained at 3% with the children breathing spontaneously via face mask and the Jackson-Rees modification of the T-piece. Airway opening pressure and flow were then measured. After insertion of an oral endotracheal tube under 5% sevoflurane, measurements were repeated at 3%, as well as after increasing to 4.2%. Respiratory system resistance (Rrs) and compliance during expiration were calculated using multilinear regression analysis of airway opening pressure and flow, assuming a single-compartment model. Data from 44 children were analyzed (22 asthmatics and 22 normal children). The two groups were comparable with respect to age, weight, ventilation variables, and baseline respiratory mechanics. Intubation was associated with a significant increase in Rrs in asthmatics (17% +/- 49%), whereas in normal children Rrs slightly decreased (-4% +/- 39%). At 4.2%, Rrs decreased slightly in both groups with almost no change in respiratory system compliance. We concluded that in children with mild to moderate asthma, endotracheal intubation during sevoflurane anesthesia was associated with an increase in Rrs that was not seen in nonasthmatic children. Tracheal intubation using sevoflurane as the sole anesthetic is possible and its frequency is increasing. When comparing children with and without asthma, tracheal intubation under sevoflurane was associated with an increase in respiratory system resistance in asthmatic children. However, no apparent clinical adverse event was observed.
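The single-compartment fit mentioned above regresses airway opening pressure on flow and volume, Pao = Rrs·V′ + V/Crs + P0, which is a three-parameter multilinear regression. A minimal sketch on a noise-free synthetic breath (invented parameter values, not patient data) recovers the parameters exactly.

```python
import math

def fit_single_compartment(pressure, flow, volume):
    """Least-squares (Rrs, Ers, P0) for Pao = Rrs*flow + Ers*volume + P0,
    solved via the 3x3 normal equations with Gaussian elimination."""
    n = len(pressure)
    cols = [flow, volume, [1.0] * n]
    M = [[sum(cols[a][i] * cols[b][i] for i in range(n)) for b in range(3)]
         + [sum(cols[a][i] * pressure[i] for i in range(n))] for a in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for j in range(c, 4):
                M[r][j] -= f * M[c][j]
    beta = [0.0] * 3
    for r in range(2, -1, -1):
        beta[r] = (M[r][3] - sum(M[r][j] * beta[j] for j in range(r + 1, 3))) / M[r][r]
    return beta  # [Rrs, Ers, P0]

# Synthetic breath: sinusoidal flow (L/s), volume by integration, hypothetical mechanics
dt = 0.02
flow = [0.5 * math.sin(math.pi * k * dt) for k in range(100)]   # one 2-s cycle
volume, v = [], 0.0
for f in flow:
    v += f * dt
    volume.append(v)
rrs_true, ers_true, p0_true = 15.0, 50.0, 2.0   # cmH2O·s/L, cmH2O/L, cmH2O (elastance = 1/Crs)
pressure = [rrs_true * f + ers_true * vol + p0_true for f, vol in zip(flow, volume)]
rrs, ers, p0 = fit_single_compartment(pressure, flow, volume)
```

With measured signals the same regression yields Rrs directly and compliance as Crs = 1/Ers.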
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The (C3) library enables performing continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, integrating multidimensional functions.
Dimension independence in exterior algebra.
Hawrylycz, M
1995-01-01
The identities between homogeneous expressions in rank 1 vectors and rank n - 1 covectors in a Grassmann-Cayley algebra of rank n, in which one set occurs multilinearly, are shown to represent a set of dimension-independent identities. The theorem yields an infinite set of nontrivial geometric identities from a given identity. PMID:11607520
Multi-way chemometric methodologies and applications: a central summary of our research work.
Wu, Hai-Long; Nie, Jin-Fang; Yu, Yong-Jie; Yu, Ru-Qin
2009-09-14
Multi-way data analysis and tensorial calibration are gaining widespread acceptance with the rapid development of modern analytical instruments. In recent years, our group, working in the State Key Laboratory of Chemo/Biosensing and Chemometrics at Hunan University, has carried out extensive research work in this area, such as building more canonical symbol systems, seeking the inner mathematical cyclic-symmetry property of trilinear and multilinear decompositions, proposing a series of multi-way calibration algorithms, exploring the rank estimation of three-way trilinear data arrays and analyzing different application systems. In the present paper, an overview from second-order data to third-order data, covering both theories and applications in analytical chemistry, is presented.
Kranz, Georg S; Hahn, Andreas; Kraus, Christoph; Spies, Marie; Pichler, Verena; Jungwirth, Johannes; Mitterhauser, Markus; Wadsak, Wolfgang; Windischberger, Christian; Kasper, Siegfried; Lanzenberger, Rupert
2018-05-01
The serotonergic system modulates affect and is a target in the treatment of mood disorders. 5-HT1A autoreceptors in the raphe control serotonin release by means of negative feedback inhibition. Hence, 5-HT1A autoreceptor function should influence the serotonergic regulation of emotional reactivity in limbic regions. Previous findings suggest an inverse relationship between 5-HT1A autoreceptor binding and amygdala reactivity to facial emotional expressions. The aim of the current multimodal neuroimaging study was to replicate the previous finding in a larger cohort. 31 healthy participants underwent fMRI as well as PET using the radioligand [carbonyl-11C]WAY-100635 to quantify 5-HT1A autoreceptor binding in the dorsal raphe. The binding potential (BPND) was quantified using the multilinear reference tissue model (MRTM2) and cerebellar white matter as reference tissue. Functional MRI was done at 3T using a well-established facial emotion discrimination task (EDT). Here, participants had to match the emotional valence of facial expressions, while in a control condition they had to match geometric shapes. Effects of 5-HT1A autoreceptor binding on amygdala reactivity were investigated using linear regression analysis with SPM8. Regression analysis between 5-HT1A autoreceptor binding and mean amygdala reactivity revealed no statistically significant associations. Investigating amygdala reactivity in a voxel-wise approach revealed a positive association in the right amygdala (peak-T = 3.64, p < .05 FWE corrected for the amygdala volume), which was however conditional on the omission of age and sex as covariates in the model. Despite highly significant amygdala reactivity to facial emotional expressions, we were unable to replicate the inverse relationship between 5-HT1A autoreceptor binding in the DRN and amygdala reactivity. Our results oppose previous multimodal imaging studies but seem to be in line with recent animal research.
Deviation in results may be explained by methodological differences between our and previous multimodal studies. Copyright © 2018 Elsevier Inc. All rights reserved.
Panda, Suchismita; Mishra, Anuva; Jena, Manoranjan; Rout, Sashi Bhusan; Mohapatra, Srikrushna
2017-08-01
Anaemia is one of the common complications associated with Chronic Kidney Disease (CKD), responsible for increased morbidity and mortality in such patients. Several factors have been attributed to cause renal anaemia, amongst which hyperparathyroidism is one of the less recognised reasons. Most studies in this regard have been conducted in CKD patients undergoing haemodialysis; the level of PTH in early stages of chronic kidney disease has not been much studied. The excess amount of Parathyroid Hormone (PTH) secondary to CKD has been suggested to be a causative factor for anaemia. To evaluate the serum PTH level in CKD patients before haemodialysis and to study the association of haemoglobin status with the parathyroid hormone. Forty CKD patients above 18 years of age before haemodialysis and 25 age- and sex-matched healthy controls were included in the study. Routine biochemical and haematological parameters such as Routine Blood Sugar (RBS), urea, creatinine, Na+, K+, Ca2+, PTH and Hb% were performed. Red cell osmotic fragility was measured by serial dilutions of whole blood with varying concentrations of sodium chloride ranging from 0.1% to 0.9%. The study revealed a significant fall in Hb%, along with a rise in Median Osmotic Fragility (MOF) and PTH, in the CKD patients when compared to the control group. Linear regression of PTH with Hb% revealed a significant negative association between the two parameters, with an R2 value of 0.677. Multilinear regression analysis of MOF against independent variables such as Hb%, Na+, K+, Ca2+, urea, PTH and creatinine explained 72% of the variance in MOF, with PTH contributing the maximal variance. Receiver Operating Characteristic (ROC) curve analysis revealed an area under the curve of 0.980, with a sensitivity of 100% and a specificity of 87% in detecting osmotic fragility at a cut-off value of PTH ≥ 100 pg/ml. The underlying cause of anaemia should be identified early in CKD patients before haemodialysis.
Secondary hyperparathyroidism should be ruled out as a causative factor of anaemia to slow down the progression of the disease process.
Yang, Zhihui; Luo, Shuang; Wei, Zongsu; Ye, Tiantian; Spinney, Richard; Chen, Dong; Xiao, Ruiyang
2016-04-01
The second-order rate constants (k) of hydroxyl radical (·OH) with polychlorinated biphenyls (PCBs) in the gas phase are of scientific and regulatory importance for assessing their global distribution and fate in the atmosphere. Due to the limited number of measured k values, there is a need to model the k values for unmeasured PCB congeners. In the present study, we developed a quantitative structure-activity relationship (QSAR) model with quantum chemical descriptors using a sequential approach, including correlation analysis, principal component analysis, multi-linear regression, validation, and estimation of the applicability domain. The result indicates that a single descriptor, polarizability (α), plays an important role in determining the reactivity, with a global standardized function of ln k = -0.054 × α − 19.49 at 298 K. In order to validate the QSAR-predicted k values and expand the current k value database for PCB congeners, an independent method, density functional theory (DFT), was employed to calculate the kinetics and thermodynamics of the gas-phase ·OH oxidation of 2,4',5-trichlorobiphenyl (PCB31), 2,2',4,4'-tetrachlorobiphenyl (PCB47), 2,3,4,5,6-pentachlorobiphenyl (PCB116), 3,3',4,4',5,5'-hexachlorobiphenyl (PCB169), and 2,3,3',4,5,5',6-heptachlorobiphenyl (PCB192) at 298 K at the B3LYP/6-311++G**//B3LYP/6-31+G** level of theory. The QSAR-predicted and DFT-calculated k values for ·OH oxidation of these PCB congeners exhibit excellent agreement with the experimental k values, indicating the robustness and predictive power of the single-descriptor QSAR model we developed. Copyright © 2015 Elsevier Ltd. All rights reserved.
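The abstract's single-descriptor QSAR function can be applied directly. The sketch below (an illustration, not the authors' code) evaluates ln k = -0.054 × α − 19.49 at 298 K; the polarizability values are hypothetical stand-ins, chosen only to show that the negative slope implies slower ·OH reaction for more polarizable (more heavily chlorinated) congeners.

```python
import math

def qsar_lnk(alpha):
    """Global standardized QSAR function from the abstract:
    ln k = -0.054 * alpha - 19.49 at 298 K."""
    return -0.054 * alpha - 19.49

def qsar_k(alpha):
    """Predicted gas-phase .OH rate constant k from polarizability alpha."""
    return math.exp(qsar_lnk(alpha))

# Hypothetical polarizabilities for a lightly and a heavily chlorinated congener:
k_light = qsar_k(150.0)
k_heavy = qsar_k(230.0)
```

Because the slope on α is negative, k_light exceeds k_heavy, consistent with the slower atmospheric degradation of highly chlorinated PCBs.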
Oliver, Stacy R.; Ngo, Jerry; Flores, Rebecca; Midyett, Jason; Meinardi, Simone; Carlson, Matthew K.; Rowland, F. Sherwood; Blake, Donald R.; Galassetti, Pietro R.
2011-01-01
Effective management of diabetes mellitus, affecting tens of millions of patients, requires frequent assessment of plasma glucose. Patient compliance for sufficient testing is often reduced by the unpleasantness of current methodologies, which require blood samples and often cause pain and skin callusing. We propose that the analysis of volatile organic compounds (VOCs) in exhaled breath can be used as a novel, alternative, noninvasive means to monitor glycemia in these patients. Seventeen healthy (9 females and 8 males, 28.0 ± 1.0 yr) and eight type 1 diabetic (T1DM) volunteers (5 females and 3 males, 25.8 ± 1.7 yr) were enrolled in a 240-min triphasic intravenous dextrose infusion protocol (baseline, hyperglycemia, euglycemia-hyperinsulinemia). In T1DM patients, insulin was also administered (using differing protocols on 2 repeated visits to separate the effects of insulinemia on breath composition). Exhaled breath and room air samples were collected at 12 time points, and concentrations of ∼100 VOCs were determined by gas chromatography and matched with direct plasma glucose measurements. Standard least squares regression was used on several subsets of exhaled gases to generate multilinear models to predict plasma glucose for each subject. Plasma glucose estimates based on two groups of four gases each (cluster A: acetone, methyl nitrate, ethanol, and ethyl benzene; cluster B: 2-pentyl nitrate, propane, methanol, and acetone) displayed very strong correlations with glucose concentrations (correlation coefficients of 0.883 and 0.869 for clusters A and B, respectively) across nearly 300 measurements. Our study demonstrates the feasibility of accurately predicting glycemia through exhaled breath analysis over a broad range of clinically relevant concentrations in both healthy and T1DM subjects. PMID:21467303
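The least-squares step described above can be sketched with synthetic data. Everything below is invented for illustration (the gas concentrations, coefficients, and noise level are assumptions); in the study, the predictors would be the measured concentrations of the four cluster gases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # roughly the number of breath measurements in the study

# Synthetic stand-ins for the four cluster-A gas concentrations
# (acetone, methyl nitrate, ethanol, ethyl benzene) from GC analysis.
X = rng.uniform(0.1, 5.0, size=(n, 4))
true_coefs = np.array([12.0, 8.0, -3.0, 5.0])   # assumed, for the demo only
glucose = 90 + X @ true_coefs + rng.normal(0, 2.0, n)

# Multilinear model: glucose ~ b0 + b1*g1 + b2*g2 + b3*g3 + b4*g4
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, glucose, rcond=None)
pred = A @ coefs
r = np.corrcoef(pred, glucose)[0, 1]  # correlation of predicted vs measured
```

The correlation r plays the same role as the 0.883/0.869 figures quoted for clusters A and B.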
NASA Astrophysics Data System (ADS)
Gatos, I.; Tsantis, S.; Karamesini, M.; Skouroliakou, A.; Kagadis, G.
2015-09-01
Purpose: The design and implementation of a computer-based image analysis system employing a support vector machine (SVM) classifier for the classification of Focal Liver Lesions (FLLs) on routine non-enhanced, T2-weighted Magnetic Resonance (MR) images. Materials and Methods: The study comprised 92 patients, each of whom underwent MRI performed on a Magnetom Concerto (Siemens). Typical signs on dynamic contrast-enhanced MRI and biopsies were employed towards a three-class categorization of the 92 cases: 40 benign FLLs, 25 Hepatocellular Carcinomas (HCC) within cirrhotic liver parenchyma and 27 liver metastases in non-cirrhotic liver. Prior to FLL classification, an automated lesion segmentation algorithm based on Markov Random Fields was employed in order to acquire each FLL Region of Interest. 42 texture features derived from the gray-level histogram, co-occurrence and run-length matrices and 12 morphological features were obtained from each lesion. Stepwise multi-linear regression analysis was utilized to avoid feature redundancy, leading to a feature subset that fed the multiclass SVM classifier designed for lesion classification. SVM system evaluation was performed by means of the leave-one-out method and ROC analysis. Results: Maximum accuracy for all three classes (90.0%) was obtained by means of the Radial Basis Kernel Function and three textural features (Inverse-Difference-Moment, Sum-Variance and Long-Run-Emphasis) that describe lesion contrast, variability and shape complexity. Sensitivity values for the three classes were 92.5%, 81.5% and 96.2% respectively, whereas specificity values were 94.2%, 95.3% and 95.5%. The AUC value achieved for the selected subset was 0.89, with a 0.81-0.94 confidence interval. Conclusion: The proposed SVM system exhibits promising results that could be utilized as a second-opinion tool for the radiologist, in order to decrease the time/cost of diagnosis and the need for patients to undergo invasive examination.
Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.
Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C
2017-07-01
To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6% respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. 
© 2017 American Association of Physicists in Medicine.
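Both liver-lesion studies above use stepwise multilinear regression to prune redundant texture/morphology features before classification. A minimal greedy forward-selection sketch is shown below on synthetic data; it is a simplified stand-in for the stepwise procedure (the feature counts and the R²-improvement criterion are assumptions, not the papers' exact algorithm).

```python
import numpy as np

def forward_stepwise(X, y, max_features=3):
    """Greedy forward selection: at each step add the feature that most
    increases the R^2 of an ordinary least-squares fit."""
    n, p = X.shape
    selected, best_r2 = [], -np.inf
    ss_tot = ((y - y.mean()) ** 2).sum()
    for _ in range(max_features):
        best_j = None
        for j in range(p):
            if j in selected:
                continue
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            r2 = 1 - resid @ resid / ss_tot
            if r2 > best_r2:
                best_r2, best_j = r2, j
        if best_j is None:
            break
        selected.append(best_j)
    return selected, best_r2

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))            # 80 lesions, 10 candidate features
y = 2 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(0, 0.1, 80)
feats, r2 = forward_stepwise(X, y)       # should recover features 3 and 7
```

In the papers, the retained subset then feeds the SVM or PNN classifier rather than being used for prediction directly.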
Understanding Coupling of Global and Diffuse Solar Radiation with Climatic Variability
NASA Astrophysics Data System (ADS)
Hamdan, Lubna
Global solar radiation data are very important for a wide variety of applications and scientific studies. However, these data are not readily available because of the cost of measuring equipment and the tedious maintenance and calibration requirements. A wide variety of models has been introduced by researchers to estimate and/or predict global solar radiation and its components (direct and diffuse radiation) using other readily obtainable atmospheric parameters. The goal of this research is to understand the coupling of global and diffuse solar radiation with climatic variability, by investigating the relationships between these radiations and atmospheric parameters. For this purpose, we applied multilinear regression analysis to the data of the National Solar Radiation Database 1991-2010 Update. The analysis showed that the main atmospheric parameters that affect the amount of global radiation received at the Earth's surface are cloud cover and relative humidity. Global radiation correlates negatively with both variables. Linear models are excellent approximations for the relationship between atmospheric parameters and global radiation. A linear model with the predictors total cloud cover, relative humidity, and extraterrestrial radiation is able to explain around 98% of the variability in global radiation. For diffuse radiation, the analysis showed that the main atmospheric parameters that affect the amount received at the Earth's surface are cloud cover and aerosol optical depth. Diffuse radiation correlates positively with both variables. Linear models are very good approximations for the relationship between atmospheric parameters and diffuse radiation. A linear model with the predictors total cloud cover, aerosol optical depth, and extraterrestrial radiation is able to explain around 91% of the variability in diffuse radiation.
Prediction analysis showed that the fitted linear models were able to predict diffuse radiation with test adjusted R2 values of 0.93, using data on total cloud cover, aerosol optical depth, relative humidity and extraterrestrial radiation. However, for prediction purposes, using nonlinear terms or nonlinear models might enhance the prediction of diffuse radiation.
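The global-radiation model above can be sketched as an ordinary multilinear fit with an adjusted-R² check. The synthetic data generator below is an assumption built only to respect the reported signs (negative for cloud cover and relative humidity, positive scaling with extraterrestrial radiation); it is not the NSRDB data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
cloud = rng.uniform(0, 1, n)         # total cloud cover fraction
rh = rng.uniform(0.1, 1.0, n)        # relative humidity fraction
extra = rng.uniform(200, 450, n)     # extraterrestrial radiation, W/m^2

# Assumed generating model: global radiation drops with cloud and humidity
# and scales with extraterrestrial radiation (signs as in the abstract).
ghi = extra * (0.75 - 0.45 * cloud - 0.10 * rh) + rng.normal(0, 5, n)

A = np.column_stack([np.ones(n), cloud, rh, extra])
coef, *_ = np.linalg.lstsq(A, ghi, rcond=None)
resid = ghi - A @ coef
r2 = 1 - resid @ resid / ((ghi - ghi.mean()) ** 2).sum()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - A.shape[1])  # penalize predictor count
```

The fitted coefficients on cloud cover and relative humidity come out negative, mirroring the correlations reported for global radiation.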
Modeling Incorrect Responses to Multiple-Choice Items with Multilinear Formula Score Theory.
1987-08-01
An Introduction to Multilinear Formula Score Theory. Measurement Series 84-4.
ERIC Educational Resources Information Center
Levine, Michael V.
Formula score theory (FST) associates each multiple choice test with a linear operator and expresses all of the real functions of item response theory as linear combinations of the operator's eigenfunctions. Hard measurement problems can then often be reformulated as easier, standard mathematical problems. For example, the problem of estimating…
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
Zarr, Robert R; Heckert, N Alan; Leigh, Stefan D
2014-01-01
Thermal conductivity data acquired previously for the establishment of Standard Reference Material (SRM) 1450, Fibrous Glass Board, as well as subsequent renewals 1450a, 1450b, 1450c, and 1450d, are re-analyzed collectively and as individual data sets. Additional data sets for proto-1450 material lots are also included in the analysis. The data cover 36 years of activity by the National Institute of Standards and Technology (NIST) in developing and providing thermal insulation SRMs, specifically high-density molded fibrous-glass board, to the public. Collectively, the data sets cover two nominal thicknesses of 13 mm and 25 mm, bulk densities from 60 kg·m⁻³ to 180 kg·m⁻³, and mean temperatures from 100 K to 340 K. The analysis repetitively fits six models to the individual data sets. The most general form of the nested set of multilinear models used is given in the following equation: [Formula: see text] where λ(ρ,T) is the predicted thermal conductivity (W·m⁻¹·K⁻¹), ρ is the bulk density (kg·m⁻³), T is the mean temperature (K) and ai (for i = 1, 2, …, 6) are the regression coefficients. The least squares fit results for each model across all data sets are analyzed using both graphical and analytic techniques. The prevailing generic model for the majority of data sets is the bilinear model in ρ and T. [Formula: see text] One data set supports the inclusion of a cubic temperature term and two data sets with low-temperature data support the inclusion of an exponential term in T to improve the model predictions. Physical interpretations of the model function terms are described. Recommendations for future renewals of SRM 1450 are provided. An Addendum provides historical background on the origin of this SRM and the influence of the SRM on external measurement programs.
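A bilinear model in ρ and T of the kind described above (intercept, ρ, T, and a ρT interaction) can be fitted by ordinary least squares. The sketch below uses invented coefficients and synthetic data spanning the density and temperature ranges quoted in the abstract; the "true" values are assumptions for the demo, not the SRM 1450 coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
rho = rng.uniform(60, 180, 200)   # bulk density, kg/m^3
T = rng.uniform(100, 340, 200)    # mean temperature, K

# Assumed "true" coefficients, for illustration only (not the SRM values).
lam = (0.015 + 5e-5 * rho + 8e-5 * T + 1e-7 * rho * T
       + rng.normal(0, 2e-4, 200))          # thermal conductivity, W/(m K)

# Bilinear model: lambda = a1 + a2*rho + a3*T + a4*rho*T
A = np.column_stack([np.ones_like(rho), rho, T, rho * T])
a, *_ = np.linalg.lstsq(A, lam, rcond=None)
rms = np.sqrt(np.mean((A @ a - lam) ** 2))  # fit quality
```

The same design-matrix pattern extends to the fuller nested models by appending a T³ column or an exponential-in-T column, as the abstract describes for the low-temperature data sets.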
A skilful prediction scheme for West China autumn precipitation
NASA Astrophysics Data System (ADS)
Wei, Ting; Song, Wenling; Dong, Wenjie; Ke, Zongjian; Sun, Linhai; Wen, Xiaohang
2018-01-01
West China is one of the country's largest precipitation centres in autumn. This region's agriculture and people are highly vulnerable to variability in the autumn rain. This study documents that the water vapour for West China autumn precipitation (WCAP) comes from the Bay of Bengal, the South China Sea and the Western Pacific. A strong convergence of the three water vapour transports (WVTs) and their encounter with cold air from the northern trough over the Lake Balkhash-Lake Baikal region result in intense WCAP. Three predictors in the preceding spring or summer are identified for the interannual variability of WCAP: (1) sea surface temperature in the Indo-Pacific warm pool in summer, (2) soil moisture from the Hexi Corridor to the Hetao Plain in summer and (3) snow cover extent over East Europe and West Siberia in spring. Cold sea surface temperature anomalies contribute to an abnormal regional meridional circulation and intensified WVTs. Wet soil results in greater air humidity and an anomalous southerly emerging over East Asia. Reduced snow cover stimulates a Rossby wave train that weakens the cold air, favouring autumn rainfall in West China. The three predictors, which demonstrate the influences of air-sea interaction, land surface processes and the cryosphere on the WCAP, have clear physical significance and are independent of each other. We then develop a new statistical prediction model with these predictors and the multilinear regression analysis method. The predicted and observed WCAP show high correlation coefficients of 0.63 and 0.51 using cross-validation tests and independent hindcasts, respectively.
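The cross-validation skill measure quoted above can be sketched as a leave-one-year-out refit of the three-predictor multilinear model. The predictor series and coefficients below are synthetic placeholders (the real predictors are the SST, soil-moisture and snow-cover indices identified in the study).

```python
import numpy as np

def loo_cv_corr(X, y):
    """Leave-one-out cross-validation for an intercept + multilinear model:
    refit with each year withheld, predict it, then correlate the
    out-of-sample predictions with the observations."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        preds[i] = A[i] @ coef
    return np.corrcoef(preds, y)[0, 1]

rng = np.random.default_rng(4)
n_years = 35
# Hypothetical standardized predictors: SST index, soil moisture, snow cover.
X = rng.normal(size=(n_years, 3))
wcap = -0.6 * X[:, 0] + 0.5 * X[:, 1] - 0.4 * X[:, 2] + rng.normal(0, 0.5, n_years)
skill = loo_cv_corr(X, wcap)
```

This out-of-sample correlation is the analogue of the 0.63 cross-validation score reported for the WCAP model.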
Li, Huiru; Liu, Hehuan; Mo, Ligui; Sheng, Guoying; Fu, Jiamo; Peng, Ping'an
2016-06-01
This study investigated polybrominated diphenyl ethers (PBDEs), polybrominated dibenzo-p-dioxins/furans (PBDD/Fs), and dechlorane plus (DP) in air around three concentrated vehicle parking areas (underground, indoor, and outdoor) in a metropolis in South China. The parking areas showed higher concentrations of PBDEs, PBDD/Fs, and DP than their adjacent urban area, or distinct congener/isomer profiles, which indicate local emission sources. The highest PBDE and DP concentrations were found in the outdoor parking lot, which might be related to the heating effect of direct sunlight exposure. Multi-linear regression analysis results suggest that deca-BDEs without noticeable transformation contributed most to airborne PBDEs in all studied areas, followed by penta-BDEs. The statistically lower anti-DP fractions in the urban area than in the commercial product signified degradation/transformation during transport. Neither PBDEs nor vehicle exhaust contributed much to airborne PBDD/Fs in the parking areas. Between 68.1% and 100% of the PBDEs, PBDD/Fs, and DP were associated with particles. Logarithms of the gas-particle distribution coefficients (Kp) of PBDEs were significantly linearly correlated with those of their sub-cooled vapor pressures (pL) and octanol-air partition coefficients (KOA) in all studied areas. The daily inhalation doses of PBDEs, DP, and PBDD/Fs were estimated as 89.7-10,741, 2.05-39.4, and 0.12-4.17 pg kg⁻¹ day⁻¹, respectively, for employees in the parking areas via Monte Carlo simulation.
Improving MAVEN-IUVS Lyman-Alpha Apoapsis Images
NASA Astrophysics Data System (ADS)
Chaffin, M.; AlMannaei, A. S.; Jain, S.; Chaufray, J. Y.; Deighan, J.; Schneider, N. M.; Thiemann, E.; Mayyasi, M.; Clarke, J. T.; Crismani, M. M. J.; Stiepen, A.; Montmessin, F.; Eparvier, F.; McClintock, B.; Stewart, I. F.; Holsclaw, G.; Jakosky, B. M.
2017-12-01
In 2013, the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission was launched to study the Martian upper atmosphere and ionosphere. MAVEN orbits through a very thin cloud of hydrogen gas, known as the hydrogen corona, that has been used to explore the planet's geologic evolution by detecting the loss of hydrogen from the atmosphere. Here we present various methods of extracting properties of the hydrogen corona from observations using MAVEN's Imaging Ultraviolet Spectrograph (IUVS) instrument. The analysis presented here uses the IUVS Far Ultraviolet mode apoapse data. From apoapsis, IUVS is able to obtain images of the hydrogen corona by detecting the Lyman-alpha airglow using a combination of instrument scan mirror and spacecraft motion. To complete one apoapse observation, eight scan swaths are performed to collect the observations and construct a coronal image. However, these images require further processing to account for the atmospheric MUV background that degrades the quality of the data. Here, we present new techniques for correcting instrument data. For the background subtraction, a multi-linear regression (MLR) routine of the first-order MUV radiance was used to improve the images. A flat-field correction was also applied by fitting a polynomial to periapse radiance observations, and the apoapse data were re-binned using this fit. The results are presented as images to demonstrate the improvements in the data reduction. Implementing these methods for more orbits will improve our understanding of seasonal variability and H loss. Asymmetries in the Martian hydrogen corona can also be assessed to improve current model estimates of coronal H in the Martian atmosphere.
Variability of OH(3-1) and OH(6-2) emission altitude and volume emission rate from 2003 to 2011
NASA Astrophysics Data System (ADS)
Teiser, Georg; von Savigny, Christian
2017-08-01
In this study we report on variability in emission rate and centroid emission altitude of the OH(3-1) and OH(6-2) Meinel bands in the terrestrial nightglow, based on spaceborne nightglow measurements with the SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) instrument on the Envisat satellite. The SCIAMACHY observations cover the time period from August 2002 to April 2012, and the nighttime observations used in this study are performed at 10:00 p.m. local solar time. Characterizing variability in OH emission altitude - particularly potential long-term variations - is important for an appropriate interpretation of ground-based OH rotational temperature measurements, because simultaneous observations of the vertical OH volume emission rate profile are usually not available for these measurements. OH emission altitude and vertically integrated emission rate time series with daily resolution for the OH(3-1) band and monthly resolution for the OH(6-2) band were analyzed using a standard multilinear regression approach allowing for seasonal variations, quasi-biennial oscillation (QBO) effects, solar cycle (SC) variability and a linear long-term trend. The analysis focuses on low latitudes, where SCIAMACHY nighttime observations are available all year. The dominant sources of variability for both OH emission rate and altitude are the semi-annual and annual variations, with emission rate and altitude being highly anti-correlated. There is some evidence for an 11-year solar cycle signature in the vertically integrated emission rate and in the centroid emission altitude of both the OH(3-1) and OH(6-2) bands.
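The regression design described above (offset, linear trend, and annual/semi-annual harmonic pairs) can be sketched on a synthetic monthly time series. The amplitudes, trend, and noise level below are invented for the demo, and the QBO and solar-cycle proxy columns that the study also includes are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(120) / 12.0  # 10 years of monthly samples, in years

# Synthetic emission-rate anomaly: annual + semi-annual cycles plus a
# weak negative trend and noise (assumed values, illustration only).
y = (1.0 * np.sin(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)
     - 0.02 * t + rng.normal(0, 0.1, t.size))

# Multilinear regression design matrix: offset, trend, harmonic pairs.
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),   # annual
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),   # semi-annual
])
c, *_ = np.linalg.lstsq(A, y, rcond=None)
annual_amp = np.hypot(c[2], c[3])
semiannual_amp = np.hypot(c[4], c[5])
```

Fitting sine and cosine columns jointly recovers each cycle's amplitude regardless of its phase, which is why harmonic pairs are the standard choice for seasonal terms.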
The GNAT: A new tool for processing NMR data.
Castañar, Laura; Poggetto, Guilherme Dal; Colbourne, Adam A; Morris, Gareth A; Nilsson, Mathias
2018-06-01
The GNAT (General NMR Analysis Toolbox) is a free and open-source software package for processing, visualising, and analysing NMR data. It supersedes the popular DOSY Toolbox, which has a narrower focus on diffusion NMR. Data import of most common formats from the major NMR platforms is supported, as well as a GNAT generic format. Key basic processing of NMR data (e.g., Fourier transformation, baseline correction, and phasing) is catered for within the program, as well as more advanced techniques (e.g., reference deconvolution and pure shift FID reconstruction). Analysis tools include DOSY and SCORE for diffusion data, ROSY T1/T2 estimation for relaxation data, and PARAFAC for multilinear analysis. The GNAT is written for the MATLAB® language and comes with a user-friendly graphical user interface. The standard version is intended to run with a MATLAB installation, but completely free-standing compiled versions for Windows, Mac, and Linux are also freely available. © 2018 The Authors Magnetic Resonance in Chemistry Published by John Wiley & Sons Ltd.
Models of compacted fine-grained soils used as mineral liner for solid waste
NASA Astrophysics Data System (ADS)
Sivrikaya, Osman
2008-02-01
To prevent the leakage of pollutant liquids into groundwater and sublayers, compacted fine-grained soils are commonly utilized as mineral liners or sealing systems constructed under municipal solid waste and other hazardous containment materials. This study presents correlation equations for the compaction parameters required for construction of a mineral liner system. The determination of the characteristic compaction parameters, maximum dry unit weight (γdmax) and optimum water content (wopt), requires considerable time and great effort. In this study, empirical models are described and examined to find which of the index properties correlate well with the compaction characteristics for estimating γdmax and wopt of fine-grained soils at the standard compactive effort. The compaction data are correlated with different combinations of gravel content (G), sand content (S), fine-grained content (FC = clay + silt), plasticity index (Ip), liquid limit (wL) and plastic limit (wP) by performing multilinear regression (MLR) analyses. The obtained correlations with statistical parameters are presented and compared with previous studies. It is found that the maximum dry unit weight and optimum water content correlate considerably better with plastic limit than with liquid limit or plasticity index.
Evaluation of RISAT-1 SAR data for tropical forestry applications
NASA Astrophysics Data System (ADS)
Padalia, Hitendra; Yadav, Sadhana
2017-01-01
India launched the C-band (5.35 GHz) RISAT-1 (Radar Imaging Satellite-1) on 26 April 2012, equipped with the capability to image the Earth at multiple resolutions and polarizations. In this study the potential of the Fine Resolution Strip (FRS) modes of RISAT-1 was evaluated for the characterization and classification of forests and the estimation of biomass of early growth stages. The study was carried out at two sites located in the foothills of the western Himalaya, India. The pre-processing and classification of FRS-1 SAR data were performed using PolSAR Pro ver. 5.0 software. The scattering mechanisms derived from m-chi decomposition of FRS-1 RH/RV data were found to be physically meaningful for the characterization of various surface feature types. The forest and land use type classification of the study area was developed by applying the Support Vector Machine (SVM) algorithm to appropriate FRS-1-derived polarimetric features. The biomass of early growth stages of Eucalyptus (up to 60 ton/ha) was estimated by developing a multi-linear regression model using C-band σ0 HV and σ0 HH backscatter information. The study outcomes hold promise for wider application of RISAT-1 data for forest cover monitoring, especially in tropical regions.
Hou, X; Chen, X; Zhang, M; Yan, A
2016-01-01
Plasmodium falciparum, the most fatal parasite that causes malaria, is responsible for over one million deaths per year. P. falciparum dihydroorotate dehydrogenase (PfDHODH) has been validated as a promising drug development target for antimalarial therapy, since it catalyzes the rate-limiting step for DNA and RNA biosynthesis. In this study, we investigated the quantitative structure-activity relationships (QSAR) of the antimalarial activity of PfDHODH inhibitors by generating four computational models using multilinear regression (MLR) and a support vector machine (SVM), based on a dataset of 255 PfDHODH inhibitors. All the models display good prediction quality, with a leave-one-out q2 > 0.66, a correlation coefficient (r) > 0.85 on both training and test sets, and a mean square error (MSE) < 0.32 on training sets and < 0.37 on test sets, respectively. The study indicated that hydrogen bonding ability, atom polarizabilities and ring complexity are the predominant factors for the inhibitors' antimalarial activity. The models are capable of predicting inhibitors' antimalarial activity, and the molecular descriptors used for building the models could be helpful in the development of new antimalarial drugs.
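The leave-one-out q² statistic quoted above is the PRESS-based cross-validated R². A minimal sketch on synthetic data follows; the three descriptor columns and their weights are hypothetical stand-ins for the hydrogen-bonding, polarizability and ring-complexity descriptors the study identifies.

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS/SS for a
    multilinear regression model with intercept."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        press += (y[i] - A[i] @ coef) ** 2
    return 1 - press / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(6)
# Hypothetical descriptors: H-bond counts, polarizability, ring complexity.
X = rng.normal(size=(120, 3))
activity = 6 + 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] \
    + rng.normal(0, 0.3, 120)
q2 = loo_q2(X, activity)
```

Unlike the training-set r, q² penalizes overfitting because every prediction is made on a withheld compound, which is why QSAR practice treats q² > 0.5-0.6 as the acceptance threshold.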
Kavurmacı, Murat; Ekercin, Semih; Altaş, Levent; Kurmaç, Yakup
2013-08-01
This paper focuses on the evaluation of water quality variations in Hirfanlı Water Reservoir, which is one of the most important water resources in Turkey, through EO-1 (Earth Observing-1) Advanced Land Imager (ALI) multispectral data and real-time field sampling. Sampling was carried out at 20 different points during the overpass of the EO-1 ALI sensor over the study area. A multi-linear regression technique was used to explore the relationships between radiometrically corrected EO-1 ALI image data and water quality parameters: chlorophyll a, turbidity, and suspended solids. The retrieved and verified results show that the measured and estimated values of the water quality parameters are in good agreement (R2 > 0.93). The resulting thematic maps derived from EO-1 multispectral data for chlorophyll a, turbidity, and suspended solids show the spatial distribution of the water quality parameters. The results indicate that the reservoir has average nutrient values. Furthermore, chlorophyll a, turbidity, and suspended solids values increased at the upstream reservoir and shallow coast of the Hirfanlı Water Reservoir.
Evaluating the process parameters of the dry coating process using a 2(5-1) factorial design.
Kablitz, Caroline Désirée; Urbanetz, Nora Anne
2013-02-01
A recent development in coating technology is dry coating, where polymer powder and liquid plasticizer are layered onto the cores without using organic solvents or water. Several studies evaluating the process have been introduced in the literature; however, little information is given about the critical process parameters (CPPs). The aim of this study was the investigation and optimization of CPPs with respect to one of the critical quality attributes (CQAs), the coating efficiency, of the dry coating process in a rotary fluid bed. Theophylline pellets were coated with hydroxypropyl methylcellulose acetate succinate as enteric film former and triethyl citrate and acetylated monoglyceride as plasticizers. A 2(5-1) design of experiments (DoE) was created investigating five independent process parameters, namely coating temperature, curing temperature, feeding/spraying rate, air flow and rotor speed. The results were evaluated by multilinear regression using the software Modde® 7. It is shown that, generally, low feeding/spraying rates and low rotor speeds increase coating efficiency. High coating temperatures enhance coating efficiency, whereas medium curing temperatures have been found to be optimal in terms of coating efficiency. This study provides a scientific basis for the design of efficient dry coating processes with respect to coating efficiency.
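A 2^(5-1) half-fraction design like the one above runs 16 of the 32 possible two-level combinations. The sketch below generates such a design by aliasing the fifth factor to the product of the other four; the generator E = ABCD (i.e., I = ABCDE, the usual resolution-V choice) is an assumption, since the abstract does not state which generator was used.

```python
from itertools import product

# Factor names follow the abstract; levels are coded -1 (low) / +1 (high).
factors = ["coating_temp", "curing_temp", "feed_spray_rate",
           "air_flow", "rotor_speed"]

runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d  # defining relation: rotor_speed = A*B*C*D
    runs.append(dict(zip(factors, (a, b, c, d, e))))
```

With this generator, main effects are aliased only with four-factor interactions, so the multilinear regression of coating efficiency on the five coded factors estimates each main effect cleanly from just 16 runs.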
Bohnenblust-Hille inequalities: analytical and computational aspects.
Cavalcante, Wasthenny V; Pellegrino, Daniel M
2018-02-01
The Bohnenblust-Hille polynomial and multilinear inequalities were proved in 1931, and the determination of the exact values of their constants remains an open and challenging problem pursued by various authors. The present paper briefly surveys recent attempts to attack or solve this problem; it also presents new results, such as connections with classical results of the linear theory of absolutely summing operators, and new perspectives.
Multidimensional NMR inversion without Kronecker products: Multilinear inversion
NASA Astrophysics Data System (ADS)
Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos
2016-08-01
Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
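The minimization-based inversion described above needs only a cost function and its gradient. A minimal sketch of the idea (a toy problem with synthetic kernels, not the authors' implementation) is projected gradient descent on ||K1 F K2ᵀ − M||²_F + λ||F||²_F with a non-negativity projection, computed entirely with small matrix products so that no Kronecker product is ever formed:

```python
import numpy as np

def multilinear_inversion(K1, K2, M, lam=1e-3, iters=1000):
    """Fit F >= 0 minimizing ||K1 @ F @ K2.T - M||_F^2 + lam*||F||_F^2
    by projected gradient descent; no Kronecker product is formed."""
    F = np.zeros((K1.shape[1], K2.shape[1]))
    # conservative step size from a Lipschitz bound on the gradient
    L = 2 * np.linalg.norm(K1, 2)**2 * np.linalg.norm(K2, 2)**2 + 2 * lam
    for _ in range(iters):
        R = K1 @ F @ K2.T - M                    # data residual
        grad = 2 * K1.T @ R @ K2 + 2 * lam * F   # gradient of the cost
        F = np.maximum(F - grad / L, 0.0)        # gradient step + projection
    return F

# toy problem: recover a sparse non-negative 2-D distribution
rng = np.random.default_rng(0)
K1 = rng.standard_normal((30, 8))
K2 = rng.standard_normal((25, 6))
F_true = np.zeros((8, 6))
F_true[2, 3], F_true[5, 1] = 1.0, 0.5
M = K1 @ F_true @ K2.T
F_hat = multilinear_inversion(K1, K2, M)
rel_err = np.linalg.norm(K1 @ F_hat @ K2.T - M) / np.linalg.norm(M)
```

The same structure extends to more dimensions by replacing the two matrix products with a sequence of mode products, and extra regularization terms simply add to the cost and its gradient.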
NASA Astrophysics Data System (ADS)
Enfedaque, A.; Alberti, M. G.; Gálvez, J. C.
2017-09-01
The relevance of fibre-reinforced cementitious materials (FRC) has increased due to the appearance of regulations that establish the requirements needed to take into account the contribution of the fibres in structural design. However, in order to exploit the properties of such materials, it is essential to be able to simulate their behaviour under fracture conditions. Within a cohesive crack approach, several authors have studied the suitability of various softening functions. However, none of these functions can be directly applied to FRC. The present contribution analyses the suitability of multilinear softening functions for simulating fracture tests of a wide variety of FRC. The multilinear softening functions were successfully implemented by means of a material user subroutine in a commercial finite element code, yielding accurate results. Such softening functions were capable of simulating ductile unloading behaviour as well as rapid unloading followed by reloading and, afterwards, slow unloading. Moreover, the implementation has proven versatile, robust, and efficient from a numerical point of view.
Gluing Ladder Feynman Diagrams into Fishnets
Basso, Benjamin; Dixon, Lance J.
2017-08-14
We use integrability at weak coupling to compute fishnet diagrams for four-point correlation functions in planar Φ 4 theory. Our results are always multilinear combinations of ladder integrals, which are in turn built out of classical polylogarithms. The Steinmann relations provide a powerful constraint on such linear combinations, which leads to a natural conjecture for any fishnet diagram as the determinant of a matrix of ladder integrals.
NASA Astrophysics Data System (ADS)
Reyes-Villegas, Ernesto; Priestley, Michael; Ting, Yu-Chieh; Haslett, Sophie; Bannan, Thomas; Le Breton, Michael; Williams, Paul I.; Bacak, Asan; Flynn, Michael J.; Coe, Hugh; Percival, Carl; Allan, James D.
2018-03-01
Over the past decade, there has been increasing interest in short-term events that negatively affect air quality, such as bonfires and fireworks. High aerosol and gas concentrations generated by public bonfires or fireworks were measured in order to understand the night-time chemical processes and their atmospheric implications. Nitrogen chemistry was observed during Bonfire Night, with nitrogen-containing compounds in both gas and aerosol phases as well as N2O5 and ClNO2 concentrations, which were depleted early the next morning owing to photolysis of NO3 radicals and the cessation of production. Particulate organic oxides of nitrogen (PONs) concentrations of 2.8 µg m-3 were estimated using the m/z 46 : 30 ratios from aerosol mass spectrometer (AMS) measurements, according to previously published methods. Multilinear engine 2 (ME-2) source apportionment was performed to determine organic aerosol (OA) concentrations from different sources after modifying the fragmentation table, and it was possible to identify two PON factors representing primary (pPON_ME2) and secondary (sPON_ME2) contributions. A slight improvement in the agreement between the source apportionment of the AMS and a collocated AE-31 Aethalometer was observed after modifying the prescribed fragmentation in the AMS organic spectrum (the fragmentation table) to determine PON sources, which resulted in r2 = 0.894 between biomass burning organic aerosol (BBOA) and babs_470wb, compared to r2 = 0.861 without the modification. Correlations between OA sources and measurements made using time-of-flight chemical ionisation mass spectrometry with an iodide adduct ion were examined in order to identify possible gas tracers for constraining solutions in future ME-2 analyses. During Bonfire Night, strong correlations (r2) were observed between BBOA and methacrylic acid (0.92), acrylic acid (0.90), nitrous acid (0.86), propionic acid (0.85), and hydrogen cyanide (0.76).
A series of oxygenated species and chlorine compounds showed good correlations with sPON_ME2 and the low-volatility oxygenated organic aerosol (LVOOA) factor during Bonfire Night and during an event with low pollutant concentrations. Further analysis of pPON_ME2 and sPON_ME2 was performed in order to determine whether these PON sources absorb light near the UV region, using an Aethalometer. This hypothesis was tested by performing multilinear regressions between babs_470wb and BBOA, sPON_ME2, and pPON_ME2. Our results suggest that sPON_ME2 does not absorb light at 470 nm, while pPON_ME2 and LVOOA do. This may inform black carbon (BC) source apportionment studies from Aethalometer measurements, through investigation of the brown carbon contribution to babs_470wb.
Recuerda, Maximilien; Périé, Delphine; Gilbert, Guillaume; Beaudoin, Gilles
2012-10-12
The treatment planning of spine pathologies requires information on the rigidity and permeability of the intervertebral discs (IVDs). Magnetic resonance imaging (MRI) offers great potential as a sensitive and non-invasive technique for describing the mechanical properties of IVDs. However, the literature reports small correlation coefficients between mechanical properties and MRI parameters. Our hypothesis is that the compressive modulus and the permeability of the IVD can be predicted by a linear combination of MRI parameters. Sixty IVDs were harvested from bovine tails and randomly separated into four groups (in-situ, digested-6h, digested-18h, digested-24h). Multi-parametric MRI acquisitions were used to quantify the relaxation times T1 and T2, the magnetization transfer ratio MTR, the apparent diffusion coefficient ADC, and the fractional anisotropy FA. Unconfined compression, confined compression, and direct permeability measurements were performed to quantify the compressive moduli and the hydraulic permeabilities. Differences between groups were evaluated with a one-way ANOVA. Multilinear regressions were performed between dependent mechanical properties and independent MRI parameters to verify our hypothesis. A principal component analysis was used to convert the set of possibly correlated variables into a set of linearly uncorrelated variables. Agglomerative hierarchical clustering was performed on the three principal components. Multilinear regressions showed that 45 to 80% of the variance in the Young's modulus E, the aggregate modulus in the absence of deformation HA0, the radial permeability kr, and the axial permeability in the absence of deformation k0 can be explained by the MRI parameters within both the nucleus pulposus and the annulus fibrosus. The principal component analysis reduced our variables to two principal components with a cumulative variability of 52-65%, which increased to 70-82% when the third principal component was considered.
The dendrograms showed a natural division into four clusters for the nucleus pulposus and into three or four clusters for the annulus fibrosus. The compressive moduli and the permeabilities of isolated IVDs can be assessed mostly by MT and diffusion sequences. However, the relationships have to be improved by including MRI parameters more sensitive to IVD degeneration. Before this technique can be used to quantify the mechanical properties of IVDs in vivo in patients suffering from various diseases, the relationships have to be defined for each degeneration state of the tissue that mimics the pathology. Our MRI protocol, combined with principal component analysis and agglomerative hierarchical clustering, is a promising tool to classify degenerated intervertebral discs and, further, to find biomarkers and predictive factors of the evolution of the pathologies.
Regional regression equations for estimation of natural streamflow statistics in Colorado
Capesius, Joseph P.; Stephens, Verlin C.
2009-01-01
The U.S. Geological Survey (USGS), in cooperation with the Colorado Water Conservation Board and the Colorado Department of Transportation, developed regional regression equations for estimation of various streamflow statistics that are representative of natural streamflow conditions at ungaged sites in Colorado. The equations define the statistical relations between streamflow statistics (response variables) and basin and climatic characteristics (predictor variables). The equations were developed using generalized least-squares and weighted least-squares multilinear regression reliant on logarithmic variable transformation. Streamflow statistics were derived from at least 10 years of streamflow data through about 2007 from selected USGS streamflow-gaging stations in the study area that are representative of natural-flow conditions. Basin and climatic characteristics used for equation development are drainage area, mean watershed elevation, mean watershed slope, percentage of drainage area above 7,500 feet of elevation, mean annual precipitation, and 6-hour, 100-year precipitation. For each of five hydrologic regions in Colorado, peak-streamflow equations that are based on peak-streamflow data from selected stations are presented for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year instantaneous-peak streamflows. For four of the five hydrologic regions, equations based on daily-mean streamflow data from selected stations are presented for 7-day minimum 2-, 10-, and 50-year streamflows and for 7-day maximum 2-, 10-, and 50-year streamflows. Other equations presented for the same four hydrologic regions include those for estimation of annual- and monthly-mean streamflow and streamflow-duration statistics for exceedances of 10, 25, 50, 75, and 90 percent. 
All equations are reported along with salient diagnostic statistics, the ranges of basin and climatic characteristics on which each equation is based, and commentary on potential bias, identified from interpretation of residual plots, that is not otherwise removed by log-transformation of the variables of the equations. The predictor-variable ranges can be used to assess equation applicability for ungaged sites in Colorado.
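The "multilinear regression reliant on logarithmic variable transformation" used for such regional equations amounts to fitting a power-law model by ordinary least squares in log space. A hedged sketch on synthetic data (the variable names, exponents, and noise level below are illustrative, not the report's fitted equations):

```python
import numpy as np

# synthetic gaged basins following Q = a * area^b * precip^c (illustrative)
rng = np.random.default_rng(1)
n = 40
area = rng.uniform(10, 500, n)      # drainage area
precip = rng.uniform(10, 40, n)     # mean annual precipitation
a_true, b_true, c_true = 3.0, 0.8, 1.2
Q = a_true * area**b_true * precip**c_true * np.exp(rng.normal(0, 0.05, n))

# the log-transform linearizes the model:
# log Q = log a + b*log(area) + c*log(precip)
X = np.column_stack([np.ones(n), np.log(area), np.log(precip)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
a_hat, b_hat, c_hat = np.exp(coef[0]), coef[1], coef[2]
```

A prediction for an ungaged site is then `np.exp(x_new @ coef)`, valid only within the predictor-variable ranges used for fitting, as the report cautions.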
Strategic Studies Quarterly. Volume 6, Number 4, Winter 2012
2012-01-01
…surfaced in Australia, where a disgruntled employee rigged a computerized control system at a water treatment plant and… "strategy" refers to a multilinear whole-of-government method geared to overcome the resistance and effects of a rival's A2/AD (anti-access/area-denial) strategy… counterspace technologies, and long-range surface-to-air missiles. To a force that intends to…
Toroody, Ahmad Bahoo; Abaei, Mohammad Mahdy; Gholamnia, Reza
2016-12-01
Risk assessment can be classified into two broad categories: traditional and modern. This paper aims to contrast the functional resonance analysis method (FRAM), a modern approach, with fault tree analysis (FTA), a traditional method, in assessing the risks of a complex system. The methodology by which the risk assessment is carried out is presented for each approach. In addition, the FRAM network is executed with regard to the nonlinear interaction of human and organizational levels to assess the safety of technological systems. The methodology is applied to the lifting of structures in deep offshore conditions. The main finding of this paper is that the combined application of FTA and FRAM during risk assessment can provide complementary perspectives and may contribute to a more comprehensive understanding of an incident. Finally, it is shown that coupling a FRAM network with a suitable quantitative method yields a plausible outcome for a predefined accident scenario.
Pellegrino Baena, Cristina; Goulart, Alessandra Carvalho; Santos, Itamar de Souza; Suemoto, Claudia Kimie; Lotufo, Paulo Andrade; Bensenor, Isabela Judith
2017-01-01
Background: The association between migraine and cognitive performance is unclear. We analyzed whether migraine is associated with cognitive performance among participants of the Brazilian Longitudinal Study of Adult Health, ELSA-Brasil. Methods: Cross-sectional analysis including participants with complete information about migraine and aura at baseline. Headache status (no headaches, non-migraine headaches, migraine without aura, and migraine with aura), based on the International Headache Society classification, was used as the independent variable in the multilinear regression models, with the category "no headache" as reference. Cognitive performance was measured with the Consortium to Establish a Registry for Alzheimer's Disease word list memory test (CERAD-WLMT), the semantic fluency test (SFT), and the Trail Making Test version B (TMTB). Z-scores for each cognitive test and a composite global score were created and analyzed as dependent variables. Multivariate models were adjusted for age, gender, education, race, coronary heart disease, heart failure, hypertension, diabetes, dyslipidemia, body mass index, smoking, alcohol use, physical activity, depression, and anxiety. In women, the models were further adjusted for hormone replacement therapy. Results: We analyzed 4208 participants. Of these, 19% presented migraine without aura and 10.3% presented migraine with aura. All migraine headaches were associated with poorer cognitive performance (linear coefficient β; 95% CI) on the TMTB, -0.083 (-0.160; -0.008), and a poorer global z-score, -0.077 (-0.152; -0.002). Migraine without aura was also associated with poorer cognitive performance on the TMTB, -0.084 (-0.160; -0.008), and global z-score, -0.077 (-0.152; -0.002). Conclusion: In participants of the ELSA study, all migraine headaches and migraine without aura were significantly and independently associated with poorer cognitive performance.
Böhm, Harald; Hösl, Matthias; Döderlein, Leonhard
2017-05-01
Patellar tendon shortening within single-event multilevel surgery has been shown to improve crouch gait in patients with cerebral palsy (CP). However, one of the drawbacks associated with the correction of flexed knee gait may be increased anterior pelvic tilt with compensatory lumbar lordosis. Which CP patients are at risk of excessive anterior pelvic tilt following correction of flexed knee gait including patellar tendon shortening? 32 patients with CP between 8 and 18 years, GMFCS I&II, were included. They received patellar tendon shortening within multilevel surgery. Patients with concomitant knee flexor lengthening were excluded. Gait analysis and clinical testing were performed preoperatively and 24.1 (SD=1.9) months postoperatively. Patients were subdivided into those with more and those with less than a 5° increase in anterior pelvic tilt. Preoperative measures indicating m. rectus and m. psoas shortness, knee flexor over-length, hip extensor and abdominal muscle weakness, and equinus gait were compared between groups. Stepwise multilinear regression of the response variable, increase in pelvic tilt during stance phase, was performed on the parameters that differed significantly between groups. 34% of patients showed more than a 5° increase in anterior pelvic tilt postoperatively. The best predictors of anterior pelvic tilt from preoperative measures were increased m. rectus tone and reduced hip extension during walking, which together explained 39% of the variance in the increase of anterior pelvic tilt. Every third patient showed considerably increased pelvic tilt following surgery for flexed knee gait. In particular, patients with preoperatively higher muscle tone in m. rectus and lower hip extension during walking were at risk, and both features need to be addressed in therapy. Copyright © 2017 Elsevier B.V. All rights reserved.
Turan, Sevgi; Konan, Ali
2012-01-01
Self-regulated learning refers to students' skills in controlling their own learning. Self-regulated learning, which is a context-specific process, emphasizes autonomy and control. Students gain more autonomy with respect to learning in the clinical years. Examining the self-regulated learning skills of students in this period provides important clues about the level at which students are ready to use these skills in real-life conditions. This study investigated the self-regulated learning strategies used by medical students during their surgery clerkship and analyzed their relation to clinical achievement. The study was conducted during the surgery clerkship of medical students. The participation rate was 94% (309 students). The Motivated Strategies for Learning Questionnaire (MSLQ), a case-based examination, an Objective Structured Clinical Examination (OSCE), and tutor evaluations were used to assess achievement. The relationship between the students' MSLQ scores and clinical achievement was analyzed with multilinear regression analysis. The findings showed that students use self-regulated learning skills at medium levels during their surgery clerkship. A relationship between these skills and both OSCE scores and tutor evaluations was identified. OSCE scores of the students were observed to increase in conjunction with increased self-efficacy levels. However, as students' beliefs regarding control over learning increased, OSCE scores decreased. No significant relationship was found between self-regulated learning skills and case-based examination scores. We observed that greater self-efficacy for learning resulted in higher OSCE scores. Conversely, students who believe that learning is a result of their own effort had lower OSCE scores. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Redmond, Haley; Thompson, Jonathan E
2011-04-21
In this work we describe and evaluate a simple scheme by which the refractive index (λ = 589 nm) of non-absorbing components common to secondary organic aerosols (SOA) may be predicted from molecular formula and density (g cm(-3)). The QSPR approach described is based on three parameters linked to refractive index: molecular polarizability, the ratio of mass density to molecular weight, and degree of unsaturation. After computing these quantities for a training set of 111 compounds common to atmospheric aerosols, multi-linear regression analysis was conducted to establish a quantitative relationship between the parameters and the accepted value of the refractive index. The resulting quantitative relationship can often estimate the refractive index to ±0.01 when averaged across a variety of compound classes. A notable exception is alcohols, for which the model consistently underestimates the refractive index. Homogeneous internal mixtures can conceivably be addressed through use of either the volume- or mole-fraction mixing rules commonly used in the aerosol community. Predicted refractive indices reconstructed from chemical composition data presented in the literature generally agree with previous reports of SOA refractive index. Additionally, the predicted refractive indices lie near measured values we report for λ = 532 nm for SOA generated from vapors of α-pinene (R.I. 1.49-1.51) and toluene (R.I. 1.49-1.50). We envision that the QSPR method may find use in reconstructing the optical scattering of organic aerosols when mass composition data are known. Alternatively, the method described could be incorporated into models of organic aerosol formation/phase partitioning to better constrain organic aerosol optical properties.
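The three-descriptor QSPR fit described can be sketched as an ordinary multilinear regression. The descriptor values, coefficients, and noise level below are synthetic placeholders, not the paper's 111-compound training set or its fitted relationship:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 111
polariz = rng.uniform(5.0, 25.0, n)          # molecular polarizability proxy
rho_over_mw = rng.uniform(0.004, 0.012, n)   # mass density / molecular weight
unsat = rng.integers(0, 6, n).astype(float)  # degree of unsaturation

# synthetic "accepted" refractive indices from an assumed linear law + noise
w = np.array([1.33, 0.004, 8.0, 0.010])
ri = (w[0] + w[1] * polariz + w[2] * rho_over_mw + w[3] * unsat
      + rng.normal(0, 0.002, n))

# multilinear regression: refractive index ~ three descriptors
X = np.column_stack([np.ones(n), polariz, rho_over_mw, unsat])
w_hat, *_ = np.linalg.lstsq(X, ri, rcond=None)
mae = np.mean(np.abs(X @ w_hat - ri))        # mean absolute fit error
```

On this synthetic set the fit error is well inside the ±0.01 band the paper reports; real descriptor computation from molecular formulae is of course the substantive part of the method.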
Decadal prediction of Sahel rainfall: where does the skill (or lack thereof) come from?
NASA Astrophysics Data System (ADS)
Mohino, Elsa; Keenlyside, Noel; Pohlmann, Holger
2016-12-01
Previous works suggest decadal predictions of Sahel rainfall could be skillful. However, the sources of such skill are still under debate. In addition, previous results are based on short validation periods (i.e., less than 50 years). In this work we propose a framework based on multi-linear regression analysis to study the potential sources of skill for predicting Sahel trends several years ahead. We apply it to an extended decadal hindcast performed with the MPI-ESM-LR model, spanning 1901 to 2010 with a 1-year sampling interval. Our results show that the skill mainly depends on how well we can predict the timing of the global warming (GW), the Atlantic multidecadal variability (AMV), and, to a lesser extent, the inter-decadal Pacific oscillation signals, and on how well the system simulates the associated SST and West African rainfall response patterns. In the case of the MPI-ESM-LR extended decadal hindcast, the observed timing is well reproduced only for the GW and AMV signals. However, only the West African rainfall response to the AMV is correctly reproduced. Thus, for most lead times the main source of skill in the decadal hindcast of West African rainfall is the AMV. The GW signal degrades skill because the response of West African rainfall to GW is incorrectly captured. Our results also suggest that initialized decadal predictions of West African rainfall can be further improved by better simulating the response of global SST to GW and AMV. Furthermore, our approach may be applied to understand and attribute prediction skill for other variables and regions.
Zhong, N; Xu, B; Cui, R; Xu, M; Su, J; Zhang, Z; Liu, Y; Li, L; Sheng, C; Sheng, H; Qu, S
2016-07-01
Animal studies have suggested that there is an independent bone-osteocalcin-gonadal axis, separate from the hypothalamic-pituitary-gonadal axis. Based on this hypothesis, higher osteocalcin during high bone turnover should be followed by higher testosterone formation. Yet such clinical evidence is limited. Patients with uncontrolled hyperthyroidism are a suitable model of high bone turnover. If this hypothesis is true, there should be a high testosterone level in patients with uncontrolled hyperthyroidism. Therefore, patients with Graves' disease were recruited to study the correlation between osteocalcin and testosterone. 50 male hyperthyroid patients with Graves' disease and 50 healthy persons matched by age and gender were enrolled in our cross-sectional study. Serum markers for thyroid hormone, sex hormone, and bone metabolism, including free triiodothyronine (FT3), free thyroxine (FT4), thyroid-stimulating hormone (TSH), testosterone, luteinizing hormone (LH), follicle-stimulating hormone (FSH), osteocalcin (OC), and C-terminal telopeptide fragments of type I collagen (CTX), were examined. Demographic parameters such as duration of disease were also collected. All data were analyzed with SPSS 20.0. High testosterone and osteocalcin levels were observed in the hyperthyroid patients (T 36.35±10.72 nmol/l and OC 46.79±26.83 ng/ml). In simple Pearson correlation, testosterone was positively associated with OC (r=0.486, P<0.001), and this positive relation persisted after adjustment for age, BMI, smoking, drinking, duration of disease, FT3, FT4, LH, FSH, and CTX in multi-linear regression analysis (see Models 1-4). In male hyperthyroid patients, osteocalcin was positively correlated with serum testosterone, which indirectly supports the hypothesis that serum osteocalcin participates in the regulation of sex hormones. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Wang, Xinlong; Nalawade, Sahil Sunil; Reddy, Divya Dhandapani; Tian, Fenghua; Gonzalez-Lima, F.; Liu, Hanli
2017-02-01
Transcranial infrared laser stimulation (TILS) uses infrared light (lasers or LEDs) for nondestructive and non-thermal photobiomodulation of the human brain. Although TILS has shown beneficial effects for a variety of neurological and psychological conditions, its physiological mechanism remains unknown. Cytochrome c oxidase (CCO), the last enzyme in the electron transport chain, is proposed to be the primary photoacceptor of this infrared laser. In this study, we aimed to validate this proposed mechanism. We applied 8 minutes of in vivo TILS to the right forehead of 11 human participants with a 1064-nm laser. Broadband near-infrared spectroscopy (bb-NIRS) from 740-900 nm was also employed near the TILS site to monitor hemodynamic and metabolic responses during the stimulation and a 5-minute recovery period. For rigorous comparison, we also performed similar 8-min bb-NIRS measurements under placebo conditions. A multi-linear regression analysis based on the modified Beer-Lambert law was performed to estimate concentration changes of oxy-hemoglobin (Δ[HbO]), deoxy-hemoglobin (Δ[Hb]), and cytochrome c oxidase (Δ[CCO]). We found that TILS induced significant increases of [CCO] and [HbO] and a decrease of [Hb] in a dose-dependent manner as compared with placebo treatments. Furthermore, strong linear relationships between Δ[CCO] and Δ[HbO] and between Δ[CCO] and Δ[Hb] induced by TILS were observed in vivo for the first time. These relationships clearly reveal close coupling between hemodynamic oxygen supply and blood volume and the up-regulation of CCO induced by photobiomodulation. Our results demonstrate the tremendous potential of bb-NIRS as a non-invasive in vivo means to study photobiomodulation mechanisms and to perform treatment evaluations of TILS.
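With extinction coefficients for the three chromophores at each measured wavelength, the modified Beer-Lambert estimation reduces to a small least-squares problem per time point. The numbers below are illustrative placeholders, not tabulated extinction coefficients or the study's pathlength:

```python
import numpy as np

# illustrative extinction matrix: rows = 4 NIR wavelengths,
# columns = chromophores (HbO, Hb, CCO) -- placeholder values
E = np.array([[1.2, 2.9, 2.1],
              [1.5, 1.1, 2.4],
              [2.3, 0.9, 2.8],
              [2.8, 0.7, 3.0]])
pathlength = 6.0  # assumed effective optical pathlength (separation x DPF)

# simulate optical-density changes for known concentration changes
dc_true = np.array([0.8, -0.3, 0.1])  # Δ[HbO], Δ[Hb], Δ[CCO] (arbitrary units)
d_od = pathlength * E @ dc_true

# least-squares inversion (overdetermined: 4 wavelengths, 3 unknowns)
dc_hat, *_ = np.linalg.lstsq(pathlength * E, d_od, rcond=None)
```

Using more wavelengths than chromophores, as broadband NIRS does, makes the inversion overdetermined and stabilizes the Δ[CCO] estimate against measurement noise.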
Fiber-Content Measurement of Wool-Cashmere Blends Using Near-Infrared Spectroscopy.
Zhou, Jinfeng; Wang, Rongwu; Wu, Xiongying; Xu, Bugao
2017-10-01
Cashmere and wool are two protein fibers with analogous geometrical attributes but distinct physical properties. Due to its scarcity and unique features, cashmere is a much more expensive fiber than wool. In textile production, cashmere is often intentionally blended with fine wool in order to reduce the material cost. Identifying the fiber content of a wool-cashmere blend is therefore important for quality control and product classification. The goal of this study was to develop a reliable method for estimating fiber content in wool-cashmere blends based on near-infrared (NIR) spectroscopy. We prepared two sets of cashmere-wool blends, using either whole fibers or fiber snippets, in 11 different blend ratios of the two fibers, and collected the NIR spectra of all 22 samples. Of the 11 samples in each set, six were used as a subset for calibration and five as a subset for validation. By referencing the NIR band assignments to chemical bonds in protein, we identified six characteristic wavelength bands where the NIR absorbances of the two fibers were significantly different. We then performed chemometric analysis with two multilinear regression (MLR) equations to predict the cashmere content (CC) of a blended sample. The experiment with these samples demonstrated that the predicted CCs from the MLR models were consistent with the CCs given in the preparation of the two sample sets (whole fiber or snippet), and that the errors of the predicted CCs could be limited to 0.5% if the testing was performed over at least 25 locations. The MLR models appear reliable and accurate enough for estimating the cashmere content of a wool-cashmere blend and have potential for tackling the cashmere adulteration problem.
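The claim that averaging over at least 25 test locations limits the prediction error reflects the usual 1/√n reduction of independent per-location errors. A quick numeric illustration with an assumed per-location error (synthetic values, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(3)
true_cc = 50.0      # actual cashmere content (%), assumed
sigma = 2.0         # assumed per-location prediction error (%)
trials = 100_000

# predictions from a single location vs. the mean of 25 locations
single = true_cc + rng.normal(0.0, sigma, trials)
avg_25 = true_cc + rng.normal(0.0, sigma, (trials, 25)).mean(axis=1)

err_single = single.std()   # ~ sigma
err_avg = avg_25.std()      # ~ sigma / sqrt(25), i.e. a fivefold reduction
```

Under these assumed numbers, a 2% per-location error shrinks to roughly 0.4% after averaging 25 locations, consistent in spirit with the 0.5% bound quoted above.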
NASA Astrophysics Data System (ADS)
Liu, Rongjie; Zhang, Jie; Yao, Haiyan; Cui, Tingwei; Wang, Ning; Zhang, Yi; Wu, Lingjuan; An, Jubai
2017-09-01
In this study, we monitored hourly changes in sea surface salinity (SSS) in turbid coastal waters from geostationary satellite ocean color images for the first time, using the Bohai Sea as a case study. We developed a simple multi-linear statistical regression model to retrieve SSS data from the Geostationary Ocean Color Imager (GOCI) based on an in situ satellite matched-up dataset (R² = 0.795; N = 41; range: 26.4 to 31.9 psu). The model was then validated using independent continuous SSS measurements from buoys, with an average percentage difference of 0.65%. The model was applied to GOCI images from the dry season during an astronomical tide to characterize hourly changes in SSS in the Bohai Sea. We found that the model provided reasonable estimates of the hourly changes in SSS and that trends in the modeled and measured data were similar in magnitude and direction (0.43 vs. 0.33 psu, R² = 0.51). There were clear diurnal variations in the SSS of the Bohai Sea, with a regional average of 0.455 ± 0.079 psu (0.02-3.77 psu). The magnitude of the diurnal variations in SSS varied spatially, with large diurnal variability nearshore, particularly in the estuary, and small variability in the offshore area. The model for the riverine area was based on the inverse correlation between SSS and CDOM absorption. In the offshore area, the water mass of the North Yellow Sea, characterized by high SSS and low CDOM concentrations, dominated. Analysis of the driving mechanisms showed that the tidal current was the main control on hourly changes in SSS in the Bohai Sea.
Jeong, Sohyun; Sohn, Minji; Kim, Jae Hyun; Ko, Minoh; Seo, Hee-Won; Song, Yun-Kyoung; Choi, Boyoon; Han, Nayoung; Na, Han-Sung; Lee, Jong Gu; Kim, In-Wha; Oh, Jung Mi; Lee, Euni
2017-06-21
Clinical trial globalization is a major trend in industry-sponsored clinical trials. There has been a shift of clinical trial sites toward the emerging regions of Eastern Europe, Latin America, Asia, the Middle East, and Africa. Our objectives were to evaluate the current characteristics of clinical trials and to identify the multiple associated factors that could explain clinical trial globalization and its implications in 2011-2013. The data elements "phase," "recruitment status," "type of sponsor," "age groups," and "design of trial" for 30 countries were extracted from the ClinicalTrials.gov website. Ten continental representative countries, including the USA, were selected, and their design elements were compared to those of the USA. Factors associated with trial site distribution were chosen for a multilinear regression analysis. The USA, Germany, France, Canada, and the United Kingdom were the top five countries most frequently hosting clinical trials. The design elements of the nine other continental representative countries were quite different from those of the USA: phase 1 trials were more prevalent in India (OR 1.517, p < 0.001), while phase 3 trials were much more prevalent in all nine representative countries than in the USA. A larger number of trials in the "child" age group were performed in Poland (OR 1.852, p < 0.001), Israel (OR 1.546, p = 0.005), and South Africa (OR 1.963, p < 0.001) than in the USA. Multivariate analysis showed that health care expenditure per capita, the Economic Freedom Index, the Human Capital Index, and the Intellectual Property Rights Index could explain 63.6% of the variance in the regional distribution of clinical trials. The globalization of clinical trials in the emerging regions of Asia, South Africa, and Eastern Europe developed in parallel with the factors of economic drive, population available for recruitment, and regulatory constraints.
Body mass index and glycemic control influence lipoproteins in children with type 1 diabetes.
Vaid, Shalini; Hanks, Lynae; Griffin, Russell; Ashraf, Ambika P
2016-01-01
Patients with type 1 diabetes mellitus (T1DM) have an extremely high risk of cardiovascular disease (CVD) morbidity and mortality, and dyslipidemia is a well-known subclinical manifestation of atherosclerosis. The objectives were to analyze the presence and predictors of lipoprotein abnormalities in children with T1DM and to determine whether lipoprotein characteristics differ between non-Hispanic white (NHW) and non-Hispanic black (NHB) patients. A retrospective electronic chart review included 600 (123 NHB and 477 NHW) T1DM patients aged 7.85 ± 3.75 years who underwent lipoprotein analysis. Relative to their NHW counterparts, NHB T1DM subjects had higher HbA1c, total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c), apoB 100, lipoprotein (a), high-density lipoprotein cholesterol (HDL-c), HDL-2, and HDL-3. Body mass index (BMI) was positively associated with TC, LDL-c, apoB 100, and non-HDL-c and inversely associated with HDL, HDL-2, and HDL-3. HbA1c was positively associated with TC, LDL-c, apoB 100, non-HDL-c, and HDL-3. Multilinear regression analysis demonstrated that HbA1c was positively associated with apoB 100 in both NHB and NHW, whereas BMI was a positive determinant of apoB 100 in NHW only. Poor glycemic control and high BMI may contribute to abnormal lipoprotein profiles; glycemic control (in NHB and NHW) and weight management (in NHW) may therefore have significant implications in T1DM. ApoB 100 concentrations in subjects with T1DM were determined by the modifiable risk factors BMI, HbA1c, and blood pressure, indicating the importance of adequate weight, glycemic, and blood pressure control for better diabetes care and likely lower CVD risk. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.
Gradient design for liquid chromatography using multi-scale optimization.
López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C
2018-01-26
In reversed-phase liquid chromatography, the usual solution to the "general elution problem" is gradient elution with programmed changes of organic solvent (or other properties). Correct quantification of chromatographic peaks in liquid chromatography requires well-resolved signals within a proper analysis time. When the complexity of the sample is high, the gradient program should be adapted to the local resolution needs of each analyte. This makes such optimizations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution while fulfilling some restrictions: all peaks should elute before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem into a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis, which allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves that is compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global," such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
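The subdivision idea behind the multi-scale gradient design can be illustrated with a minimal corner-cutting scheme (Chaikin's quadratic B-spline subdivision, a simpler stand-in for the cubic scheme in the paper). The control points and solvent percentages below are assumptions for illustration only.

```python
import numpy as np

def chaikin(points, rounds=3):
    """Corner-cutting subdivision (quadratic B-spline limit curve):
    a simple stand-in for the cubic-spline subdivision used in the paper."""
    pts = np.asarray(points, dtype=float)
    for _ in range(rounds):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point near segment start
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point near segment end
        pts = np.empty((2 * len(q), pts.shape[1]))
        pts[0::2], pts[1::2] = q, r            # interleave the new points
    return pts

# Coarse scale: a few (time, %organic solvent) control points; each refinement
# roughly doubles the resolution, mimicking the multi-scale ladder N_k ~ 2^k.
coarse = [(0, 5), (5, 20), (10, 60), (15, 95)]
gradient = chaikin(coarse, rounds=3)
# The refined polygonal stays non-decreasing in solvent fraction because
# corner cutting only takes convex averages of an increasing control polygon.
```

Each refinement keeps the "flat or increasing" restriction automatically when the control polygon itself is increasing, which is one reason subdivision schemes pair well with constrained gradient design.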
Artificial neural network modeling of dissolved oxygen in reservoir.
Chen, Wei-Bo; Liu, Wen-Cheng
2014-02-01
Water quality is one of the key factors in the operation and management of reservoirs. Dissolved oxygen (DO) in the water column is essential for microorganisms and a significant indicator of the state of aquatic ecosystems. In this study, two artificial neural network (ANN) approaches, a back-propagation neural network (BPNN) and an adaptive neural-based fuzzy inference system (ANFIS), and a multilinear regression (MLR) model were developed to estimate the DO concentration in the Feitsui Reservoir of northern Taiwan. The input variables of the neural networks were water temperature, pH, conductivity, turbidity, suspended solids, total hardness, total alkalinity, and ammonium nitrogen. The performance of the ANN models and the MLR model was assessed through the mean absolute error, root mean square error, and correlation coefficient computed from the measured and model-simulated DO values. The results reveal that the ANN estimation performances were superior to those of the MLR. Comparing the BPNN and ANFIS models using these performance criteria, the ANFIS model is better than the BPNN model for predicting DO values. The study results show that a neural network, particularly the ANFIS model, is able to predict DO concentrations with reasonable accuracy, suggesting that neural networks are a valuable tool for reservoir management in Taiwan.
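The three performance criteria used to compare the models (mean absolute error, root mean square error, and correlation coefficient) can be sketched with a small helper; the DO values below are hypothetical, not the Feitsui Reservoir measurements.

```python
import numpy as np

def performance(measured, simulated):
    """MAE, RMSE, and Pearson r: the three criteria used to compare the
    MLR, BPNN, and ANFIS models (illustrative helper, not the study's code)."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    mae = np.mean(np.abs(m - s))
    rmse = np.sqrt(np.mean((m - s) ** 2))
    r = np.corrcoef(m, s)[0, 1]
    return mae, rmse, r

# Hypothetical DO concentrations (mg/L): observed vs. model output.
obs = [8.1, 7.6, 6.9, 7.3, 8.4, 6.5]
sim = [8.0, 7.8, 7.1, 7.0, 8.2, 6.8]
mae, rmse, r = performance(obs, sim)
```

Note that RMSE is always at least as large as MAE, so a model can only "look better" on RMSE relative to MAE when its errors are uniform in magnitude.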
Bertelkamp, C; Verliefde, A R D; Reynisson, J; Singhal, N; Cabo, A J; de Jonge, M; van der Hoek, J P
2016-03-05
This study investigated relationships between OMP biodegradation rates and the functional groups present in the chemical structure of a mixture of 31 OMPs. OMP biodegradation rates were determined from lab-scale columns filled with soil from RBF site Engelse Werk of the drinking water company Vitens in The Netherlands. A statistically significant relationship was found between OMP biodegradation rates and the functional groups of the molecular structures of OMPs in the mixture. The OMP biodegradation rate increased in the presence of carboxylic acids, hydroxyl groups, and carbonyl groups, but decreased in the presence of ethers, halogens, aliphatic ethers, methyl groups and ring structures in the chemical structure of the OMPs. The predictive model obtained from the lab-scale soil column experiment gave an accurate qualitative prediction of biodegradability for approximately 70% of the OMPs monitored in the field (80% excluding the glymes). The model was found to be less reliable for the more persistent OMPs (OMPs with predicted biodegradation rates lower or around the standard error=0.77d(-1)) and OMPs containing amide or amine groups. These OMPs should be carefully monitored in the field to determine their removal during RBF. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, S.; Tian, H.; Wang, X.; Li, H.; He, Y.
2018-04-01
Vegetation plays a leading role in ecosystems: plant communities are their main components, and green plants are the primary producers, providing the living organic matter on which other organisms depend. The dynamics of most landscapes are driven by both natural processes and human activities. In this study, growing-season GIMMS NDVI3g and climatic data were used to analyse vegetation trends and their drivers in the Beijing-Tianjin-Hebei region from 1982 to 2013. The results show that vegetation in the Beijing-Tianjin-Hebei region exhibits an overall restoration trend with partial degradation: significantly restored areas account for 61.5% of the region, while significantly degraded areas account for 2.1%. The dominant climatic factors for the time-series NDVI were analysed using a multi-linear regression model. Vegetation growth in 17.9% of the region is dominated by temperature, 35.5% by precipitation, and 11.68% by solar radiance. Human activities play an important role in vegetation restoration in the Beijing-Tianjin-Hebei region, where large-scale forest restoration programs, such as the Three-North Shelterbelt construction project, the Beijing-Tianjin-Hebei sandstorm source control project, and the Grain for Green project, are the main human activities.
Hyperspectral scattering profiles for prediction of the microbial spoilage of beef
NASA Astrophysics Data System (ADS)
Peng, Yankun; Zhang, Jing; Wu, Jianhu; Hang, Hui
2009-05-01
Spoilage in beef is the result of decomposition and the formation of metabolites caused by the growth and enzymatic activity of microorganisms. There is still no technology for the rapid, accurate, and non-destructive detection of bacterially spoiled or contaminated beef. In this study, a hyperspectral imaging technique was exploited to measure biochemical changes within fresh beef. Fresh beef rump steaks were purchased from a commercial plant and left to spoil in a refrigerator at 8°C. Every 12 hours, hyperspectral scattering profiles over the spectral region between 400 nm and 1100 nm were collected directly from the sample surface in reflection mode in order to develop an optimal model for prediction of beef spoilage; in parallel, the total viable count (TVC) per gram of beef was obtained by classical microbiological plating methods. The spectral scattering profiles at individual wavelengths were fitted accurately by a two-parameter Lorentzian distribution function. TVC prediction models relating individual Lorentzian parameters and their combinations at different wavelengths to the log10(TVC) value were developed using multi-linear regression. The best predictions were obtained with r2 = 0.96 and SEP = 0.23 for log10(TVC). The research demonstrated that hyperspectral imaging is a valid tool for the real-time and non-destructive detection of bacterial spoilage in beef.
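A two-parameter Lorentzian fit of a scattering profile can be sketched without any curve-fitting library: because the peak value enters linearly, it has a closed-form least-squares solution for each candidate width, so a simple grid search over the width suffices. The functional form and parameter names below are assumptions, not necessarily those of the paper.

```python
import numpy as np

# Assumed two-parameter Lorentzian profile R(x) = a / (1 + (x/b)^2),
# with a the peak value and b the half-width.
def lorentzian(x, a, b):
    return a / (1.0 + (x / b) ** 2)

# Synthetic scattering profile with a little noise.
rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 201)
y = lorentzian(x, a=2.5, b=3.0) + rng.normal(scale=0.02, size=x.size)

# Fit: grid-search the width b; for each b the best peak value a is a
# one-dimensional linear least-squares solution.
best = (np.inf, None, None)
for b in np.linspace(0.5, 8.0, 300):
    basis = 1.0 / (1.0 + (x / b) ** 2)
    a = np.dot(basis, y) / np.dot(basis, basis)
    err = np.sum((y - a * basis) ** 2)
    if err < best[0]:
        best = (err, a, b)
_, a_fit, b_fit = best
```

The fitted Lorentzian parameters (here `a_fit`, `b_fit`) are the kind of per-wavelength features the study then feeds into the multi-linear regression against log10(TVC).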
Dong, Pei-Pei; Ge, Guang-Bo; Zhang, Yan-Yan; Ai, Chun-Zhi; Li, Guo-Hui; Zhu, Liang-Liang; Luan, Hong-Wei; Liu, Xing-Bao; Yang, Ling
2009-10-16
Seven pairs of epimers and one pair of isomeric metabolites of taxanes, each pair of which have similar structures but different retention behaviors, together with an additional 13 taxanes with different substitutions, were chosen to investigate the quantitative structure-retention relationship (QSRR) of taxanes in ultra-fast liquid chromatography (UFLC). A Monte Carlo variable selection (MCVS) method was adopted to choose descriptors. The four selected descriptors were used to build QSRR models with multi-linear regression (MLR) and artificial neural network (ANN) modeling techniques. Both the linear and nonlinear models show good predictive ability, of which the ANN model was better, with determination coefficients R2 for the training, validation, and test sets of 0.9892, 0.9747, and 0.9840, respectively. The results of 100 rounds of leave-12-out cross-validation showed the robustness of this model, and all the isomers can be correctly differentiated by it. According to the selected descriptors, three-dimensional structural information was critical for the recognition of epimers. Hydrophobic interaction was the uppermost factor for retention in UFLC; the molecules' polarizability and polarity properties were also closely correlated with retention behaviors. This QSRR model will be useful for the separation and identification of taxanes, including epimers and metabolites, from botanical or biological samples.
Kuehnbaum, Naomi L; Gillen, Jenna B; Gibala, Martin J; Britz-McKibbin, Philip
2014-08-28
High-intensity interval training (HIIT) offers a practical approach for enhancing cardiorespiratory fitness; however, its role in improving glucose regulation among sedentary yet normoglycemic women remains unclear. Herein, multi-segment injection capillary electrophoresis-mass spectrometry is used as a high-throughput platform in metabolomics to assess dynamic responses of overweight/obese women (BMI > 25, n = 11) to standardized oral glucose tolerance tests (OGTTs) performed before and after a 6-week HIIT intervention. Various statistical methods were used to classify plasma metabolic signatures associated with post-prandial glucose and/or training status when using a repeated-measures/cross-over study design. Branched-chain/aromatic amino acids and other intermediates of urea cycle and carnitine metabolism decreased over time in plasma after oral glucose loading. Adaptive exercise-induced changes to plasma thiol redox and ornithine status were measured for trained subjects while at rest in a fasting state. A multi-linear regression model was developed to predict changes in glucose tolerance based on a panel of plasma metabolites measured for naïve subjects in their untrained state. Since treatment outcomes of physical activity vary between subjects, prognostic markers offer a novel approach to screen for potential negative responders while designing lifestyle modifications that maximize the salutary benefits of exercise for diabetes prevention on an individual level.
Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A
2013-01-15
This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economical/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM(10) emission up to two years can be made successfully and accurately. The mean absolute error for two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables. Copyright © 2012 Elsevier B.V. All rights reserved.
Rutherford, Julienne N.; McDade, Thom W.; Feranil, Alan; Adair, Linda; Kuzawa, Christopher
2011-01-01
Cardiovascular disease (CVD) is a leading cause of death in the Philippines, although few studies there have examined the lipid profiles underlying disease risk. The isolated low high-density lipoprotein cholesterol (HDL-c) phenotype has been implicated as a CVD risk factor, and its prevalence exhibits significant variation across populations. To assess population variation in individual lipid components and their associations with diet and anthropometric characteristics, we compared lipid profiles in a population of adult Filipino women (n = 1877) to U.S. women participating in the National Health and Nutrition Examination Survey (NHANES, n = 477). We fit multilinear regression models to assess the relationship between lipid components and BMI and dietary variables in the two populations, measured the prevalence of lipid phenotypes, and used logistic regression models to determine the predictors of the isolated low HDL-c phenotype. HDL-c was lower in the Philippines (40.8 ± 0.2 mg/dL) than in NHANES (60.7 ± 0.7 mg/dL). The prevalence of the isolated low HDL-c phenotype was 28.8%, compared to 2.10% in NHANES. The high prevalence among Filipinos was relatively invariant across all levels of BMI, whereas in NHANES prevalence was strongly inversely related to BMI and emerged only above the BMI > 25 kg/m2 threshold. Diet did not predict the low-HDL phenotype in Filipinos. Filipino women exhibit a high prevalence of the isolated low HDL-c phenotype that is largely decoupled from anthropometric factors. The relationship of CVD to population variation in dyslipidemia and body composition needs further study, particularly in populations where the burden of cardiovascular and metabolic disease is rapidly increasing. PMID:20199988
NASA Astrophysics Data System (ADS)
Kumar, V.; Melet, A.; Meyssignac, B.; Ganachaud, A.; Kessler, W. S.; Singh, A.; Aucan, J.
2018-02-01
Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years has been up to 3 times the global average. In this study, we aim at reconstructing sea levels at selected sites in the region (Suva, Lautoka—Fiji, and Nouméa—New Caledonia) as a multilinear regression (MLR) of atmospheric and oceanic variables. We focus on sea level variability at interannual-to-interdecadal time scales, and trend over the 1988-2014 period. Local sea levels are first expressed as a sum of steric and mass changes. Then a dynamical approach is used based on wind stress curl as a proxy for the thermosteric component, as wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. Statistically significant predictors among wind stress curl, halosteric sea level, zonal/meridional wind stress components, and sea surface temperature are used to construct a MLR model simulating local sea levels. Although we are focusing on the local scale, the global mean sea level needs to be adjusted for. Our reconstructions provide insights on key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to a given extent, most of the variance is driven by regional factors. On average, the MLR model is able to reproduce 82% of the variance in island sea level, and could be used to derive local sea level projections via downscaling of climate models.
Analysis of weather condition influencing fire regime in Italy
NASA Astrophysics Data System (ADS)
Bacciu, Valentina; Masala, Francesco; Salis, Michele; Sirca, Costantino; Spano, Donatella
2014-05-01
Fires play a crucial role in Mediterranean ecosystems, with both negative and positive impacts on all biosphere components and with repercussions at different scales. Fire shapes landscape structure and plant composition, but it is also the cause of enormous economic and ecological damage, besides the loss of human life. In addition, several authors agree that fire patterns have changed during the past decades, especially in terms of the expansion of fire-prone areas and the lengthening of the fire season. Climate and weather are two of the main direct and indirect controlling agents of fire regime, influencing vegetation productivity, causing water stress, igniting fires through lightning, and modulating fire behavior through wind. On the other hand, these relationships may not hold in areas where most ignitions are caused by people (Moreno et al. 2009). Specific analyses of the driving forces of fire regime across countries and scales are thus still required in order to better anticipate fire seasons and to advance our knowledge of future fire regimes. The objective of this work was to improve our knowledge of the relative effects of several weather variables on forest fires in Italy for the period 1985-2008. Meteorological data were obtained from the MARS (Monitoring Agricultural Resources) database, interpolated on a 25x25 km grid. Fire data were provided by the JRC (Joint Research Centre) and the CFVA (Corpo Forestale e di Vigilanza Ambientale, Sardinia). A hierarchical cluster analysis, based on fire and weather data, allowed the identification of six areas that are homogeneous in terms of fire occurrence and climate (pyro-climatic areas). Two statistical techniques (linear and non-parametric models) were applied to assess whether inter-annual variability in weather patterns and fire events had a significant trend.
Then, through correlation analysis and multi-linear regression modeling, we investigated the influence of weather variables on fire activity across a range of time and spatial scales. The analysis revealed a general decrease in both the number of fires and the burned area, although not everywhere with the same magnitude. Overall, the regression models were highly significant (p < 0.001), and the explained variance ranged from 36% to 80% for fire number and from 37% to 76% for burned area, depending on the pyro-climatic area. Moreover, our results contributed to determining the relative importance of climate variables acting at different timescales as controls on intrinsic (i.e., flammability and moisture) and extrinsic (i.e., fuel amount and structure) characteristics of vegetation, which strongly influence fire occurrence. The good performance of our models, especially in the most fire-affected pyro-climatic areas of Italy, and the better understanding of the main drivers of fire variability gained through this work could be of great help for fire management across the different pyro-climatic areas.
Zarr, Robert R; Heckert, N Alan; Leigh, Stefan D
2014-01-01
Thermal conductivity data acquired previously for the establishment of Standard Reference Material (SRM) 1450, Fibrous Glass Board, as well as subsequent renewals 1450a, 1450b, 1450c, and 1450d, are re-analyzed collectively and as individual data sets. Additional data sets for proto-1450 material lots are also included in the analysis. The data cover 36 years of activity by the National Institute of Standards and Technology (NIST) in developing and providing thermal insulation SRMs, specifically high-density molded fibrous-glass board, to the public. Collectively, the data sets cover two nominal thicknesses of 13 mm and 25 mm, bulk densities from 60 kg·m−3 to 180 kg·m−3, and mean temperatures from 100 K to 340 K. The analysis repetitively fits six models to the individual data sets. The most general form of the nested set of multilinear models used is given in the following equation: λ(ρ,T)=a0+a1ρ+a2T+a3T3+a4e−(T−a5a6)2where λ(ρ,T) is the predicted thermal conductivity (W·m−1·K−1), ρ is the bulk density (kg·m−3), T is the mean temperature (K) and ai (for i = 1, 2, … 6) are the regression coefficients. The least squares fit results for each model across all data sets are analyzed using both graphical and analytic techniques. The prevailing generic model for the majority of data sets is the bilinear model in ρ and T. λ(ρ,T)=a0+a1ρ+a2T One data set supports the inclusion of a cubic temperature term and two data sets with low-temperature data support the inclusion of an exponential term in T to improve the model predictions. Physical interpretations of the model function terms are described. Recommendations for future renewals of SRM 1450 are provided. An Addendum provides historical background on the origin of this SRM and the influence of the SRM on external measurement programs. PMID:26601034
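The prevailing bilinear model λ(ρ,T) = a0 + a1ρ + a2T can be fit by ordinary least squares with a design matrix of [1, ρ, T] columns. The density/temperature points and coefficient values below are made up for illustration, not the SRM 1450 data.

```python
import numpy as np

# Sketch of fitting lambda = a0 + a1*rho + a2*T by least squares.
# Densities (kg/m^3) and temperatures (K) span the ranges quoted above;
# the coefficients generating lam are illustrative assumptions.
rho = np.array([60., 90., 120., 150., 180., 60., 120., 180.])
T = np.array([100., 200., 300., 340., 280., 320., 150., 220.])
lam = 0.010 + 1.0e-5 * rho + 8.0e-5 * T        # W/(m*K), noise-free

A = np.column_stack([np.ones_like(rho), rho, T])
a, *_ = np.linalg.lstsq(A, lam, rcond=None)    # a = [a0, a1, a2]
```

With noise-free synthetic data the solver recovers the generating coefficients essentially exactly; on real measurements the residuals are what motivate the cubic and exponential extra terms in the nested model family.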
NASA Astrophysics Data System (ADS)
Minguillon, M. C.; Querol, X.; Monfort, E.; Alastuey, A.; Escrig, A.; Celades, I.; Miro, J. V.
2009-04-01
The relationship between specific particulate emission control and ambient levels of some PM10 components (Zn, As, Pb, Cs, Tl) was evaluated. To this end, the industrial area of Castellón (Eastern Spain) was selected, where around 40% of the EU glazed ceramic tiles and a high proportion of EU ceramic frits (middle product for the manufacture of ceramic glaze) are produced. The PM10 emissions from the ceramic processes were calculated over the period 2000 to 2007 taking into account the degree of implementation of corrective measures throughout the study period. Abatement systems (mainly bag filters) were implemented in the majority of the fusion kilns for frit manufacture in the area as a result of the application of the Directive 1996/61/CE, leading to a marked decrease in PM10 emissions. On the other hand, ambient PM10 sampling was carried out from April 2002 to July 2008 at three urban sites and one suburban site of the area and a complete chemical analysis was made for about 35 % of the collected samples, by means of different techniques (ICP-AES, ICP-MS, Ion Chromatography, selective electrode and elemental analyser). The series of chemical composition of PM10 allowed us to apply a source contribution model (Principal Component Analysis), followed by a multilinear regression analysis, so that PM10 sources were identified and their contribution to bulk ambient PM10 was quantified on a daily basis, as well as the contribution to bulk ambient concentrations of the identified key components (Zn, As, Pb, Cs, Tl). The contribution of the sources identified as the manufacture and use of ceramic glaze components, including the manufacture of ceramic frits, accounted for more than 65, 75, 58, 53, and 53% of ambient Zn, As, Pb, Cs and Tl levels, respectively (with the exception of Tl contribution at one of the sites). 
The substantial emission reductions from these sources during the study period had an impact on the ambient levels of the key components, with a high correlation between the PM10 emissions from these sources and the ambient key-component levels (R2 = 0.61-0.98).
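The PCA-plus-multilinear-regression apportionment described above can be sketched end to end: extract principal component scores from a standardized species matrix, then regress bulk PM10 on those scores. The data below are synthetic (two hidden sources with random fingerprints), not the Castellón speciation series.

```python
import numpy as np

# Synthetic stand-in for a daily PM10 chemical speciation matrix.
rng = np.random.default_rng(2)
n_days, n_species = 200, 8
sources = rng.gamma(2.0, 1.0, size=(n_days, 2))         # two hidden sources
profiles = rng.uniform(0.1, 1.0, size=(2, n_species))   # species fingerprints
X = sources @ profiles + rng.normal(scale=0.05, size=(n_days, n_species))
pm10 = sources @ np.array([5.0, 3.0]) + rng.normal(scale=0.5, size=n_days)

# PCA via SVD of the standardized species matrix; keep two components.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * S[:2]

# Multilinear regression of bulk PM10 on the component scores yields the
# daily contribution attributable to each identified source.
A = np.column_stack([np.ones(n_days), scores])
coef, *_ = np.linalg.lstsq(A, pm10, rcond=None)
r2 = 1 - np.sum((pm10 - A @ coef) ** 2) / np.sum((pm10 - pm10.mean()) ** 2)
```

Since the two latent sources drive both the speciation matrix and PM10, the regression on PC scores explains nearly all of the PM10 variance here; real data leave a larger unexplained residual.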
Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.
Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej
2015-09-01
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. Existing CP algorithms require the tensor rank to be manually specified; however, determining the rank remains a challenging problem, especially for the CP rank. In addition, existing approaches do not take into account uncertainty information of the latent factors or of the missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm that scales linearly with data size. Our method is characterized as a tuning-parameter-free approach that can effectively infer the underlying multilinear factors with a low-rank constraint while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large number of entries are missing. Moreover, results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
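For contrast with the Bayesian approach, the classical fixed-rank baseline it improves upon (CP alternating least squares, where the rank must be supplied by hand) can be written compactly for a fully observed 3-way tensor. This is a minimal sketch, not the paper's algorithm.

```python
import numpy as np

def cp_als(T, rank, iters=500, seed=0):
    """Minimal CP (CANDECOMP/PARAFAC) alternating least squares for a
    3-way tensor. The rank is fixed in advance, unlike the Bayesian
    method above, which determines it automatically."""
    rng = np.random.default_rng(seed)
    dims = T.shape
    A = [rng.normal(size=(d, rank)) for d in dims]
    for _ in range(iters):
        for n in range(3):
            # Khatri-Rao product of the other two factor matrices.
            others = [A[m] for m in range(3) if m != n]
            kr = (others[0][:, None, :] * others[1][None, :, :]).reshape(-1, rank)
            # Unfold T along mode n and solve the least-squares subproblem.
            Tn = np.moveaxis(T, n, 0).reshape(dims[n], -1)
            A[n] = Tn @ kr @ np.linalg.pinv(kr.T @ kr)
    return A

# Build an exact rank-2 ground-truth tensor, then recover it.
rng = np.random.default_rng(3)
U, V, W = (rng.normal(size=(d, 2)) for d in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', U, V, W)
A = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', *A)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

On an exact low-rank tensor, ALS drives the relative reconstruction error to near zero; the difficulty the paper addresses is that real data are incomplete and the right `rank` argument is unknown.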
Influence of stromal refractive index and hydration on corneal laser refractive surgery.
de Ortueta, Diego; von Rüden, Dennis; Magnago, Thomas; Arba Mosquera, Samuel
2014-06-01
To evaluate the influence of the stromal refractive index and hydration on postoperative outcomes in eyes that had corneal laser refractive surgery using the Amaris laser system. Augenzentrum Recklinghausen, Recklinghausen, Germany. Comparative case series. At the 6-month follow-up, right eyes were retrospectively analyzed. The effect of the stromal refractive index and hydration on refractive outcomes was assessed using univariate linear and multilinear correlations. Sixty eyes were analyzed. Univariate linear analyses showed that the stromal refractive index and hydration were correlated with the thickness of the preoperative exposed stroma and were statistically different for laser in situ keratomileusis and laser-assisted subepithelial keratectomy treatments. Multilinear analyses showed that the spherical equivalent (SE) was correlated with the attempted SE and the stromal refractive index (or hydration). The analyses suggest overcorrections for higher stromal refractive index values and for lower hydration values. The stromal refractive index and hydration affected postoperative outcomes in a subtle yet significant manner. An adjustment toward greater attempted correction in highly hydrated corneas and less intended correction in weakly hydrated corneas might help optimize refractive outcomes. Mr. Magnago and Dr. Arba-Mosquera are employees of, and Dr. Diego de Ortueta is a consultant to, Schwind eye-tech-solutions GmbH & Co. KG. Mr. von Rüden has no financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Hydrostatic Stress Effect on the Yield Behavior of Inconel 100
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wilson, Christopher D.
2003-01-01
Classical metal plasticity theory assumes that hydrostatic stress has a negligible effect on the yield and post-yield behavior of metals. Recent reexaminations of classical theory have revealed a significant effect of hydrostatic stress on the yield behavior of various geometries. Fatigue tests and nonlinear finite element analyses (FEA) of Inconel 100 (IN100) equal-arm bend specimens, and new monotonic tests and nonlinear finite element analyses of IN100 smooth tension, smooth compression, and double-edge notch tension (DENT) test specimens, have revealed the effect of internal hydrostatic tensile stresses on yielding. Nonlinear FEA using the von Mises yield function (yielding independent of hydrostatic stress) and the Drucker-Prager yield function (yielding linearly dependent on hydrostatic stress) were performed. A new FEA constitutive model was developed that incorporates a pressure-dependent yield function with combined multilinear kinematic and multilinear isotropic hardening using the ABAQUS user subroutine (UMAT) utility. In all monotonic tensile test cases, the von Mises constitutive model overestimated the load for a given displacement or strain. Considering the failure displacements or strains for the DENT specimen, the Drucker-Prager FEMs predicted loads that were approximately 3% lower than the von Mises values. For the failure loads, the Drucker-Prager FEMs predicted strains that were up to 35% greater than the von Mises values. Both the Drucker-Prager and von Mises models performed equally well in simulating the equal-arm bend fatigue test.
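The distinction between the two yield functions compared above is that von Mises depends only on the deviatoric stress, while Drucker-Prager adds a term linear in the hydrostatic (mean) stress. A minimal sketch, with illustrative material constants rather than IN100 values:

```python
import numpy as np

def von_mises(stress):
    """Equivalent (von Mises) stress of a 3x3 stress tensor: computed from
    the deviatoric part only, hence independent of hydrostatic stress."""
    s = stress - np.trace(stress) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

def drucker_prager(stress, alpha=0.1, k=300.0):
    """Drucker-Prager yield function f = q + 3*alpha*p - k, where p is the
    hydrostatic stress: yielding depends linearly on p (alpha, k assumed)."""
    p = np.trace(stress) / 3.0
    q = von_mises(stress)
    return q + 3.0 * alpha * p - k

# Uniaxial tension at 250 MPa: q = 250 and p = 250/3, so the hydrostatic
# term shifts the Drucker-Prager yield check by alpha * 250 = 25 MPa.
sig = np.diag([250.0, 0.0, 0.0])
q = von_mises(sig)
f = drucker_prager(sig)   # negative: below yield for these constants
```

The same stress state can therefore sit below yield for one criterion and at or above it for the other, which is exactly the sensitivity the DENT specimens expose.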
Peleato, Nicolás M; Andrews, Robert C
2015-01-01
This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures, dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance, as well as by principal component analysis, peak picking, and parallel factor analysis of the fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error on cross-folded test sets (THMs: 43.7 (μg/L)2, HAAs: 233.3 (μg/L)2). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to those of traditional NOM surrogates and fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix principal component analysis as a suitable NOM indicator for predicting the formation of THMs and HAAs in the water sources studied. Copyright © 2014. Published by Elsevier B.V.
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
2013-08-16
problem size n from 10 to 30 with increment 1, and the observation ratio ρ from 0.01 to 0.2 with increment 0.01. For each (ρ, n)-pair, we simulate 5 test ...
Maternity leave in the ninth month of pregnancy and birth outcomes among working women.
Guendelman, Sylvia; Pearl, Michelle; Graham, Steve; Hubbard, Alan; Hosang, Nap; Kharrazi, Martin
2009-01-01
The health effects of antenatal maternity leave have been scarcely evaluated. In California, women are eligible for paid benefits up to 4 weeks before delivery. We explored whether leave at ≥36 weeks gestation increases gestation and birthweight, and reduces primary cesarean deliveries among full-time working women. Drawing from a 2002-2003 nested case-control study of preterm birth and low birthweight among working women in Southern California, we compared a cohort of women who took leave (n = 62) or worked until delivery (n = 385). Models weighted for probability of sampling were used to calculate hazards ratios for gestational age, odds ratios (OR) for primary cesarean delivery, and multilinear regression coefficients for birthweight. Leave-takers were similar to non-leave-takers on demographic and health characteristics, except that more clerical workers took leave (p = .02). Compared with non-leave-takers, leave-takers had about one-quarter the odds of cesarean delivery after adjusting for covariates (OR, 0.27; 95% confidence interval [CI], 0.08-0.94). Overall, there were no marked differences in length of gestation or mean birthweight. However, in a subgroup of women whose efforts outstripped their occupational rewards, gestation was prolonged (hazard ratio for delivery each day between 36 and 41 weeks, 0.56; 95% CI, 0.34-0.93). Maternity leave in late pregnancy shows promise for reducing cesarean deliveries and prolonging gestation in occupationally strained women.
Gyrokinetic modeling of impurity peaking in JET H-mode plasmas
NASA Astrophysics Data System (ADS)
Manas, P.; Camenen, Y.; Benkadda, S.; Weisen, H.; Angioni, C.; Casson, F. J.; Giroud, C.; Gelfusa, M.; Maslov, M.
2017-06-01
Quantitative comparisons are presented between gyrokinetic simulations and experimental values of the carbon impurity peaking factor in a database of JET H-modes during the carbon wall era. These plasmas feature strong NBI heating and hence high values of toroidal rotation and corresponding gradient. Furthermore, the carbon profiles present particularly interesting shapes for fusion devices, i.e., hollow in the core and peaked near the edge. Dependencies of the experimental carbon peaking factor (R/LnC) on plasma parameters are investigated via multilinear regressions. A marked correlation between R/LnC and the normalised toroidal rotation gradient is observed in the core, which suggests an important role of the rotation in establishing hollow carbon profiles. The carbon peaking factor is then computed with the gyrokinetic code GKW, using a quasi-linear approach, supported by a few non-linear simulations. The comparison of the quasi-linear predictions to the experimental values at mid-radius reveals two main regimes. At low normalised collisionality ν* and Te/Ti < 1, the gyrokinetic simulations quantitatively recover experimental carbon density profiles, provided that rotodiffusion is taken into account. In contrast, at higher ν* and Te/Ti > 1, the very hollow experimental carbon density profiles are never predicted by the simulations and the carbon density peaking is systematically overestimated. This points to a possible missing ingredient in this regime.
Estimation of the barrier layer thickness in the Indian Ocean using Aquarius Salinity
NASA Astrophysics Data System (ADS)
Felton, Clifford S.; Subrahmanyam, Bulusu; Murty, V. S. N.; Shriver, Jay F.
2014-07-01
Monthly barrier layer thickness (BLT) estimates are derived from satellite measurements using a multilinear regression model (MRM) within the Indian Ocean. Sea surface salinity (SSS) from the recently launched Soil Moisture and Ocean Salinity (SMOS) and Aquarius SAC-D salinity missions are utilized to estimate the BLT. The MRM relates BLT to sea surface salinity (SSS), sea surface temperature (SST), and sea surface height anomalies (SSHA). Three regions where the BLT variability is most pronounced are selected to evaluate the performance of the MRM for 2012: the Southeast Arabian Sea (SEAS), Bay of Bengal (BoB), and Eastern Equatorial Indian Ocean (EEIO). The MRM derived BLT estimates are compared to gridded Argo and Hybrid Coordinate Ocean Model (HYCOM) BLTs. It is shown that different mechanisms are important for sustaining the BLT variability in each of the selected regions. Sensitivity tests show that SSS is the primary driver of the BLT within the MRM. Results suggest that salinity measurements obtained from Aquarius and SMOS can be useful for tracking and predicting the BLT in the Indian Ocean. Largest MRM errors occur along coastlines and near islands where land contamination skews the satellite SSS retrievals. The BLT evolution during 2012, as well as the advantages and disadvantages of the current model, are discussed. BLT estimations using HYCOM simulations display large errors that are related to model layer structure and the selected BLT methodology.
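A minimal sketch of such an MRM, fitting BLT against SSS, SST, and SSHA by ordinary least squares (all values are synthetic; the study's actual regression coefficients and satellite data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly surface fields for one region (synthetic values; the
# real study uses SMOS/Aquarius SSS, satellite SST, and altimetric SSHA).
n = 48
sss = 34.5 + rng.normal(scale=0.4, size=n)    # practical salinity
sst = 28.0 + rng.normal(scale=1.0, size=n)    # deg C
ssha = rng.normal(scale=0.08, size=n)         # m
blt = (120.0 - 25.0 * (sss - 34.5) + 2.0 * (sst - 28.0)
       + 60.0 * ssha + rng.normal(scale=3.0, size=n))   # m

# Fit BLT = b0 + b1*SSS + b2*SST + b3*SSHA by least squares
A = np.column_stack([np.ones(n), sss, sst, ssha])
coef, *_ = np.linalg.lstsq(A, blt, rcond=None)
r2 = 1.0 - np.sum((blt - A @ coef) ** 2) / np.sum((blt - blt.mean()) ** 2)
```

In this toy setup the fitted SSS coefficient is negative and dominates the fit, mirroring the finding that SSS is the primary driver of BLT within the MRM.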
NASA Astrophysics Data System (ADS)
Verney-Carron, A.; Dutot, A. L.; Lombardo, T.; Chabas, A.
2012-07-01
Soiling results from the deposition of pollutants on materials. On glass, it leads to an alteration of its intrinsic optical properties. The nature and intensity of this phenomenon mirror the pollution of an environment. This paper proposes a new statistical model in order to predict the evolution of haze (H) (i.e., diffuse/direct transmitted light ratio) as a function of time and major pollutant concentrations in the atmosphere (SO2, NO2, and PM10 (Particulate Matter < 10 μm)). The model was parameterized by using a large set of data collected in European cities (especially Paris and its suburbs, Athens, Krakow, Prague, and Rome) during field exposure campaigns (French, European, and international programs). This statistical model, called NEUROPT-Glass, is based on an artificial neural network with two hidden layers and uses a non-linear parametric regression known as a Multilayer Perceptron (MLP). The model achieves a high determination coefficient (R2 = 0.88) between the measured and the predicted hazes and reduces the dispersion of the data compared to existing multilinear dose-response functions. Therefore, this model can be used with great confidence in order to predict the soiling of glass as a function of time in world cities with different levels of pollution, or to assess the effect of pollution reduction policies on glass soiling problems in urban environments.
Thresholds, injury, and loss relationships for thrips in Phleum pratense (Poales: Poaceae).
Reisig, Dominic D; Godfrey, Larry D; Marcum, Daniel B
2009-12-01
Timothy (Phleum pratense L.) is an important forage crop in many Western U.S. states. Marketing of timothy hay is primarily based on esthetics, and green color is an important attribute. The objective of these studies was to determine a relationship between arthropod populations, yield, and esthetic injury in timothy. Economic injury levels (EILs) and economic thresholds were calculated based on these relationships. Thrips (Thripidae) numbers were manipulated with insecticides in small plot studies in 2006, 2007, and 2008, although tetranychid mite levels were incidentally flared by cyfluthrin in some experiments. Arthropod population densities were determined weekly, and yield and esthetic injury were measured at each harvest. Effects of arthropods on timothy were assessed using multilinear regression. Producers were also surveyed to relate economic loss from leaf color to the injury ratings for use in establishing EILs. Thrips population levels were significantly related to yield loss in only one of nine experiments. Thrips population levels were significantly related to injury once before the first annual harvest and twice before the second. Thrips were the most important pest in these experiments, and they were more often related to esthetic injury rather than yield loss. EILs and economic thresholds for thrips population levels were established using esthetic injury data. These results document the first example of a significant relationship between arthropod pest population levels and economic yield and quality losses in timothy.
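The classical economic injury level formulation from the entomology literature (an assumption here; the abstract does not give the authors' exact loss functions) sets the EIL at the pest density where the cost of control equals the value of the damage prevented:

```python
def economic_injury_level(control_cost, crop_value, injury_per_pest,
                          damage_per_injury, control_efficacy):
    """Pest density at which the cost of control equals the value of the
    damage it prevents: EIL = C / (V * I * D * K)."""
    return control_cost / (crop_value * injury_per_pest
                           * damage_per_injury * control_efficacy)

# Illustrative numbers only, not from the study (cost and value per unit area)
eil = economic_injury_level(control_cost=25.0, crop_value=180.0,
                            injury_per_pest=0.002, damage_per_injury=0.5,
                            control_efficacy=0.8)
# The economic threshold is conventionally set below the EIL so that action
# can be taken before the EIL is reached, e.g. at 80% of it:
et = 0.8 * eil
```

For an esthetics-driven crop like timothy, the "damage" term would be built from the injury-rating-to-price-loss relationship obtained from the producer survey rather than from yield loss alone.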
Kuehnbaum, Naomi L.; Gillen, Jenna B.; Gibala, Martin J.; Britz-McKibbin, Philip
2014-01-01
High-intensity interval training (HIIT) offers a practical approach for enhancing cardiorespiratory fitness; however, its role in improving glucose regulation among sedentary yet normoglycemic women remains unclear. Herein, multi-segment injection capillary electrophoresis-mass spectrometry is used as a high-throughput platform in metabolomics to assess dynamic responses of overweight/obese women (BMI > 25, n = 11) to standardized oral glucose tolerance tests (OGTTs) performed before and after a 6-week HIIT intervention. Various statistical methods were used to classify plasma metabolic signatures associated with post-prandial glucose and/or training status when using a repeated measures/cross-over study design. Branched-chain/aromatic amino acids and other intermediates of urea cycle and carnitine metabolism decreased over time in plasma after oral glucose loading. Adaptive exercise-induced changes to plasma thiol redox and ornithine status were measured for trained subjects while at rest in a fasting state. A multi-linear regression model was developed to predict changes in glucose tolerance based on a panel of plasma metabolites measured for naïve subjects in their untrained state. Since treatment outcomes to physical activity are variable between subjects, prognostic markers offer a novel approach to screen for potential negative responders while designing lifestyle modifications that maximize the salutary benefits of exercise for diabetes prevention on an individual level. PMID:25164777
Schwaederle, Maria; Wei, Caimiao; Lee, J. Jack; Hong, David S.; Eggermont, Alexander M.; Schilsky, Richard L.; Mendelsohn, John; Lazar, Vladimir
2015-01-01
Background: In order to ascertain the impact of a biomarker-based (personalized) strategy, we compared outcomes between US Food and Drug Administration (FDA)–approved cancer treatments that were studied with and without such a selection rationale. Methods: Anticancer agents newly approved (September 1998 to June 2013) were identified at the Drugs@FDA website. Efficacy, treatment-related mortality, and hazard ratios (HRs) for time-to-event endpoints were analyzed and compared in registration trials for these agents. All statistical tests were two-sided. Results: Fifty-eight drugs were included (leading to 57 randomized [32% personalized] and 55 nonrandomized trials [47% personalized], n = 38 104 patients). Trials adopting a personalized strategy more often included targeted (100% vs 65%, P < .001), oral (68% vs 35%, P = .001), and single agents (89% vs 71%, P = .04) and more frequently permitted crossover to experimental treatment (67% vs 28%, P = .009). In randomized registration trials (using a random-effects meta-analysis), personalized therapy arms were associated with higher relative response rate ratios (RRRs, compared with their corresponding control arms) (RRRs = 3.82, 95% confidence interval [CI] = 2.51 to 5.82, vs RRRs = 2.08, 95% CI = 1.76 to 2.47, adjusted P = .03), longer PFS (hazard ratio [HR] = 0.41, 95% CI = 0.33 to 0.51, vs HR = 0.59, 95% CI = 0.53 to 0.65, adjusted P < .001) and a non-statistically significantly longer OS (HR = 0.71, 95% CI = 0.61 to 0.83, vs HR = 0.81, 95% CI = 0.77 to 0.85, adjusted P = .07) compared with nonpersonalized trials. 
Analysis of experimental arms in all 112 registration trials (randomized and nonrandomized) demonstrated that personalized therapy was associated with higher response rate (48%, 95% CI = 42% to 55%, vs 23%, 95% CI = 20% to 27%, P < .001) and longer PFS (median = 8.3, interquartile range [IQR] = 5 vs 5.5 months, IQR = 5, adjusted P = .002) and OS (median = 19.3, IQR = 17 vs 13.5 months, IQR = 8, adjusted P = .04). A personalized strategy was an independent predictor of better RR, PFS, and OS, as demonstrated by multilinear regression analysis. Treatment-related mortality rate was similar for personalized and nonpersonalized trials. Conclusions: A biomarker-based approach was safe and associated with improved efficacy outcomes in FDA-approved anticancer agents. PMID:26378224
NASA Astrophysics Data System (ADS)
Zempila, Melina-Maria; Taylor, Michael; Bais, Alkiviadis; Kazadzis, Stelios
2016-10-01
We report on the construction of generic models to calculate photosynthetically active radiation (PAR) from global horizontal irradiance (GHI), and vice versa. Our study took place at stations of the Greek UV network (UVNET) and the Hellenic solar energy network (HNSE) with measurements from NILU-UV multi-filter radiometers and CM pyranometers, chosen due to their long (≈1 M record/site) high temporal resolution (≈1 min) record that captures a broad range of atmospheric environments and cloudiness conditions. The uncertainty of the PAR measurements is quantified to be ±6.5% while the uncertainty involved in GHI measurements is up to ≈±7% according to the manufacturer. We show how multi-linear regression and nonlinear neural network (NN) models, trained at a calibration site (Thessaloniki) can be made generic provided that the input-output time series are processed with multi-channel singular spectrum analysis (M-SSA). Without M-SSA, both linear and nonlinear models perform well only locally. M-SSA with 50 time-lags is found to be sufficient for identification of trend, periodic and noise components in aerosol, cloud parameters and irradiance, and to construct regularized noise models of PAR from GHI irradiances. Reconstructed PAR and GHI time series capture ≈95% of the variance of the cross-validated target measurements and have median absolute percentage errors <2%. The intra-site median absolute error of M-SSA processed models were ≈8.2±1.7 W/m2 for PAR and ≈9.2±4.2 W/m2 for GHI. When applying the models trained at Thessaloniki to other stations, the average absolute mean bias between the model estimates and measured values was found to be ≈1.2 W/m2 for PAR and ≈0.8 W/m2 for GHI. For the models, percentage errors are well within the uncertainty of the measurements at all sites. Generic NN models were found to perform marginally better than their linear counterparts.
Cheung, Felix; Loeb, Charles A; Croglio, Michael P; Waltzer, Wayne C; Weissbart, Steven J
2017-09-01
Determining whether bacterial presence in urine microscopy represents infection is important, as ureteral stent placement is indicated in patients with obstructing urolithiasis and infection. We aim to investigate whether the presence of bacteria on urine microscopy is associated with other markers of infection in patients with obstructing urolithiasis presenting to the emergency room. We performed a cross-sectional study of 199 patients with obstructing urolithiasis and divided patients into two groups according to the presence of bacteria on urine microscopy. The primary outcome was serum white blood cell count and secondary outcomes were objective fever, subjective fever, tachycardia, pyuria, and final urine culture. Univariate and multivariate analyses were used to assess whether the presence of bacteria on microscopy was associated with other markers of infection. The study included 72 patients in the bacteriuria group and 127 without bacteriuria. On univariate analysis, the presence of bacteria was not associated with leukocytosis, objective fever, or subjective fever, but it was associated with gender (p < 0.001), pyuria (p < 0.001), positive nitrites (p = 0.001), positive leukocyte esterase (p < 0.001), and squamous epithelial cells (p = 0.002). In a multilinear regression model including the presence of squamous cells, age, and sex, the presence of bacteriuria was not related to serum white blood cell count (coefficient -0.47; 95% confidence interval [CI] -1.1, 0.2; p = 0.17), heart rate (coefficient 0.85; 95% CI -2.5, 4.2; p = 0.62), presence of subjective or objective fever (odds ratio [OR] 1.5; 95% CI 0.8, 3.1; p = 0.18), or the presence of squamous epithelial cells (coefficient -4.4; 95% CI -10, 1.2; p = 0.12). However, the presence of bacteriuria was related only to the degree of pyuria (coefficient 16.4; 95% CI 9.6, 23.3; p < 0.001). 
Bacteria on urine microscopy is not associated with other markers of systemic infection and may largely represent a contaminant. Renal colic may be a risk factor for providing a contaminated urine specimen.
NASA Astrophysics Data System (ADS)
Merlin, O.; Stefan, V. G.; Amazirh, A.; Chanzy, A.; Ceschia, E.; Er-Raki, S.; Gentine, P.; Tallec, T.; Ezzahar, J.; Bircher, S.; Beringer, J.; Khabba, S.
2016-05-01
A meta-analysis data-driven approach is developed to represent the soil evaporative efficiency (SEE), defined as the ratio of actual to potential soil evaporation. The new model is tested across a bare soil database composed of more than 30 sites around the world, a clay fraction range of 0.02-0.56, a sand fraction range of 0.05-0.92, and about 30,000 acquisition times. SEE is modeled using a soil resistance (rss) formulation based on surface soil moisture (θ) and two resistance parameters rss,ref and θefolding. The data-driven approach aims to express both parameters as a function of observable data including meteorological forcing, the cut-off soil moisture value θ1/2 at which SEE = 0.5, and the first derivative of SEE at θ1/2, named Δθ1/2-1. An analytical relationship between (rss,ref; θefolding) and (θ1/2; Δθ1/2-1) is first built by running a soil energy balance model for two extreme conditions, rss = 0 and rss ˜ ∞, using meteorological forcing solely, and by approaching the middle point from the two (wet and dry) reference points. Two different methods are then investigated to estimate the pair (θ1/2; Δθ1/2-1), either from the time series of SEE and θ observations for a given site, or using the soil texture information for all sites. The first method is based on an algorithm specifically designed to accommodate strongly nonlinear SEE(θ) relationships and potentially large random deviations of observed SEE from the mean observed SEE(θ). The second method parameterizes θ1/2 as a multi-linear regression of clay and sand percentages, and sets Δθ1/2-1 to a constant mean value for all sites. The new model significantly outperformed the evaporation modules of ISBA (Interaction Sol-Biosphère-Atmosphère), H-TESSEL (Hydrology-Tiled ECMWF Scheme for Surface Exchange over Land), and CLM (Community Land Model). It has potential for integration in various land-surface schemes, and real calibration capabilities using combined thermal and microwave remote sensing data.
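One common way to express SEE through a soil resistance is sketched below. This is an illustrative resistance-competition form with made-up parameter values, not necessarily the paper's exact rss parameterization:

```python
import math

def soil_resistance(theta, rss_ref, theta_efold):
    """Soil resistance (s/m) decaying exponentially as surface moisture rises."""
    return rss_ref * math.exp(-theta / theta_efold)

def see(theta, rss_ref, theta_efold, ra=50.0):
    """SEE as a competition between an aerodynamic resistance ra and the
    moisture-dependent soil resistance rss."""
    return ra / (ra + soil_resistance(theta, rss_ref, theta_efold))

# SEE rises monotonically from ~0 (dry soil, rss >> ra) toward 1 (wet soil)
vals = [see(t, rss_ref=5000.0, theta_efold=0.07) for t in (0.05, 0.15, 0.30)]
```

In this form the cut-off moisture θ1/2 is simply the point where rss equals ra, so SEE = 0.5, which is the middle point the paper's analytical construction approaches from the wet and dry reference cases.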
Wang, Jinxu; Tong, Xin; Li, Peibo; Liu, Menghua; Peng, Wei; Cao, Hui; Su, Weiwei
2014-08-08
Shenqi Fuzheng Injection (SFI) is an injectable traditional Chinese herbal formula composed of two Chinese herbs, Radix codonopsis and Radix astragali, which have been commonly used to improve immune function against chronic diseases in an integrative and holistic way in China and other East Asian countries for thousands of years. The present study was designed to explore the bioactive components underlying the immuno-enhancement effects of SFI using relevance analysis between chemical fingerprints and biological effects in vivo. According to a four-factor, nine-level uniform design, SFI samples were prepared with different proportions of the four portions separated from SFI via high-speed counter-current chromatography (HSCCC). SFI samples were assessed with high performance liquid chromatography (HPLC) for 23 identified components. For the immunosuppressed murine experiments, biological effects in vivo were evaluated on spleen index (E1), peripheral white blood cell counts (E2), bone marrow cell counts (E3), splenic lymphocyte proliferation (E4), splenic natural killer cell activity (E5), peritoneal macrophage phagocytosis (E6), and the amount of interleukin-2 (E7). Based on the hypothesis that biological effects in vivo vary with differences in components, multivariate relevance analyses, including gray relational analysis (GRA), multi-linear regression analysis (MLRA), and principal component analysis (PCA), were performed to evaluate the contribution of each identified component. 
The results indicated that the bioactive components of SFI on immuno-enhancement activities were calycosin-7-O-β-d-glucopyranoside (P9), isomucronulatol-7,2'-di-O-glucoside (P11), biochanin-7-glucoside (P12), 9,10-dimethoxypterocarpan-3-O-xylosylglucoside (P15) and astragaloside IV (P20), which might have positive effects on spleen index (E1), splenic lymphocyte proliferation (E4), splenic natural killer cell activity (E5), peritoneal macrophage phagocytosis (E6) and the amount of interleukin-2 (E7), while 5-hydroxymethyl-furaldehyde (P5) and lobetyolin (P13) might have negative effects on E1, E4, E5, E6 and E7. Finally, the bioactive HPLC fingerprint of SFI based on its bioactive components on immuno-enhancement effects was established for quality control of SFI. In summary, this study provided a perspective to explore the bioactive components in a traditional Chinese herbal formula with a series of HPLC and animal experiments, which would be helpful to improve quality control and inspire further clinical studies of traditional Chinese medicines. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
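Of the three relevance methods, gray relational analysis is the least widely known; a minimal sketch (toy data with synthetic values, not the study's components or effects) computes a relational grade between each candidate component series and a biological-effect series:

```python
import numpy as np

def gray_relational_grades(X, y, zeta=0.5):
    """Gray relational grade of each candidate series (columns of X) against
    the reference series y, after min-max normalization; zeta is the
    distinguishing coefficient."""
    norm = lambda v: (v - v.min()) / (v.max() - v.min())
    Xn = np.column_stack([norm(X[:, j]) for j in range(X.shape[1])])
    D = np.abs(Xn - norm(y)[:, None])              # deviation sequences
    dmin, dmax = D.min(), D.max()                  # global extremes
    xi = (dmin + zeta * dmax) / (D + zeta * dmax)  # gray relational coefficients
    return xi.mean(axis=0)                         # one grade per column

# Toy data over a nine-level design: the first "component" tracks the effect
# closely, the second varies inversely (all values invented).
effect = np.linspace(1.0, 2.0, 9)                  # e.g. spleen index E1
rng = np.random.default_rng(2)
comps = np.column_stack([effect + rng.normal(scale=0.02, size=9),
                         effect[::-1]])
g = gray_relational_grades(comps, effect)
```

A grade near 1 flags a component whose variation across the uniform-design samples parallels the biological effect, which is how positively contributing components can be separated from negatively contributing ones.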
Fei, Xunchang; Zekkos, Dimitrios; Raskin, Lutgarde
2016-09-01
The energy conversion potential of municipal solid waste (MSW) disposed of in landfills remains largely untapped because of the slow and variable rate of biogas generation, delayed and inefficient biogas collection, leakage of biogas, and landfill practices and infrastructure that are not geared toward energy recovery. A database was developed consisting of methane (CH4) generation data, the major constituent of biogas, from 49 laboratory experiments and field monitoring data from 57 landfills. Three CH4 generation parameters, i.e., waste decay rate (k), CH4 generation potential (L0), and time until maximum CH4 generation rate (tmax), were calculated for each dataset using U.S. EPA's Landfill Gas Emission Model (LandGEM). Factors influencing the derived parameters in laboratory experiments and landfills were investigated using multi-linear regression analysis. Total weight of waste (W) was correlated with biodegradation conditions through a ranked classification scheme. k increased with increasing percentage of readily biodegradable waste (Br0 (%)) and waste temperature, and decreased with increasing W, an indicator of less favorable biodegradation conditions. The values of k obtained in the laboratory were commonly significantly higher than those in landfills and those recommended by LandGEM. The mean value of L0 was 98 and 88 L CH4/kg waste for laboratory and field studies, respectively, but was significantly affected by waste composition, with a range from 10 to 300 L CH4/kg. tmax increased with increasing percentage of biodegradable waste (B0) and W. The values of tmax in landfills were higher than those in laboratory experiments or those based on LandGEM's recommended parameters. Enhancing biodegradation conditions in landfill cells has a greater impact on improving k and tmax than increasing B0. Optimizing the B0 and Br0 values of landfilled waste increases L0 and reduces tmax. Copyright © 2015 Elsevier Ltd. All rights reserved.
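LandGEM is built on first-order decay of the biodegradable fraction; the full model sums contributions over yearly waste placements, but for a single batch it collapses to the sketch below (parameter values are illustrative, chosen inside the ranges the study reports):

```python
import math

def ch4_rate(t, k, L0, W):
    """CH4 generation rate (L/yr) from a single batch of waste W (kg) placed
    at t = 0, under first-order decay: Q(t) = k * L0 * W * exp(-k*t)."""
    return k * L0 * W * math.exp(-k * t)

def ch4_cumulative(t, k, L0, W):
    """CH4 generated up to time t (L); approaches the potential L0*W."""
    return L0 * W * (1.0 - math.exp(-k * t))

# Illustrative values: decay rate in 1/yr, potential in L CH4/kg, mass in kg
k, L0, W = 0.05, 100.0, 1000.0
peak_rate = ch4_rate(0.0, k, L0, W)   # a single batch peaks at placement
half_life = math.log(2) / k           # ~13.9 yr to release half of L0*W
total_200yr = ch4_cumulative(200.0, k, L0, W)
```

The study's findings map directly onto these parameters: a higher k shortens the half-life (faster, earlier generation), while L0 scales the total recoverable volume; with multiple yearly placements and a lag phase, the rate maximum shifts to the tmax the study derives.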
Ensemble tropical-extratropical cyclone coastal flood hazard assessment with climate change
NASA Astrophysics Data System (ADS)
Orton, P. M.; Lin, N.; Colle, B.
2016-12-01
A challenge with quantifying future changes in coastal flooding for the U.S. East Coast is that climate change has varying effects on different types of storms, in addition to raising mean sea levels. Moreover, future flood hazard uncertainties are large and come from many sources. Here, a new coastal flood hazard assessment approach is demonstrated that separately evaluates and then combines probabilities of storm tide generated from tropical cyclones (TCs) and extratropical cyclones (ETCs). The separation enables us to incorporate climate change impacts on both types of storms. The assessment accounts for epistemic storm tide uncertainty using an ensemble of different prior studies and methods of assessment, merged with uncertainty in climate change effects on storm tides and sea levels. The assessment is applied for New York Harbor, under the auspices of the New York City Panel on Climate Change (NPCC). In the New York Bight region and much of the U.S. East Coast, differing flood exceedance curve slopes for TCs and ETCs arise due to their differing physics. It is demonstrated how errors can arise for this region from mixing together storm types in an extreme value statistical analysis, a common practice when using observations. The effects of climate change on TC and ETC flooding have recently been assessed for this region, for TCs using a Global Climate Model (GCM) driven hurricane model with hydrodynamic modeling, and for ETCs using a GCM-driven multilinear regression-based storm surge model. The results of these prior studies are applied to our central estimates of the flood exceedance curve probabilities, transforming them for climate change effects. The results are useful for decision-makers because they highlight the large uncertainty in present-day and future flood risk, and also for scientists because they identify the areas where further research is most needed.
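Once TC and ETC storm tide probabilities are evaluated separately, they can be combined, under an independence assumption, by multiplying non-exceedance probabilities; a minimal sketch with made-up exceedance values:

```python
def combined_exceedance(p_tc, p_etc):
    """Annual exceedance probability of a flood level when tropical and
    extratropical cyclone hazards are treated as independent."""
    return 1.0 - (1.0 - p_tc) * (1.0 - p_etc)

# Made-up example: a storm tide level with a 1%/yr TC exceedance
# probability and a 0.5%/yr ETC exceedance probability
p = combined_exceedance(0.01, 0.005)
return_period = 1.0 / p   # years
```

Keeping the two storm populations separate like this, rather than fitting one extreme value distribution to a mixed record, avoids the bias that arises because TC and ETC flood exceedance curves have different slopes in this region.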
Yang, Shengnan; Hsue, Cunyi; Lou, Qingqing
2015-05-01
Patient empowerment is playing an increasingly important role in diabetes and related disorders. This study evaluated the correlations among patient empowerment, self-care behavior, and glycemic control among patients with type 2 diabetes in mainland China. We conducted a multicenter cross-sectional study. Eight hundred eighty-five patients who sought care at hospitals in Nanjing, Changsha, Yunnan, and Chongqing, China, were enrolled. Structured questionnaires and medical records provided the data. The instruments included a demographic and clinical questionnaire, the Diabetes Empowerment Scale-Short Form, and the Chinese version of the Summary of Diabetes Self-Care Activities Scale. Glycosylated hemoglobin (HbA1c) was used as a measure of glycemic control. The data analyses are presented as proportions, means (±SD), β, and 95% confidence intervals (CIs). Multilinear regressions were used to examine the correlations among the scores of patient empowerment, self-care behavior, and HbA1c values. Linear regression revealed that patient empowerment was a statistically significant predictor of patients' self-care behavior even after controlling for age, gender, marital status, educational level, and diabetes duration: diet (β=0.449; 95% CI, 0.370, 0.528), exercise (β=0.222; 95% CI, 0.164, 0.279), blood glucose testing (β=0.152; 95% CI, 0.106, 0.199), medication taking (β=0.062; 95% CI, 0.030, 0.095), and foot care (β=0.279; 95% CI, 0.217, 0.342). Additionally, patient empowerment was a statistically significant predictor of HbA1c (β=-0.094; 95% CI, -0.123, -0.065). Our study indicated that perceived diabetes empowerment is a predictor of self-care behavior and HbA1c in Chinese patients with type 2 diabetes. Therefore, interventions to enhance and promote patient empowerment should be essential components of diabetes education programs to improve self-care behavior and glycemic control.
NASA Astrophysics Data System (ADS)
Boudhina, Nissaf; Zitouna-Chebbi, Rim; Mekki, Insaf; Jacob, Frédéric; Ben Mechlia, Nétij; Masmoudi, Moncef; Prévot, Laurent
2018-06-01
Estimating evapotranspiration in hilly watersheds is paramount for managing water resources, especially in semiarid/subhumid regions. The eddy covariance (EC) technique allows continuous measurements of latent heat flux (LE). However, time series of EC measurements often experience large portions of missing data because of instrumental malfunctions or quality filtering. Existing gap-filling methods are questionable over hilly crop fields because of changes in airflow inclination and subsequent aerodynamic properties. We evaluated the performances of different gap-filling methods before and after tailoring to conditions of hilly crop fields. The tailoring consisted of splitting the LE time series beforehand on the basis of upslope and downslope winds. The experiment was set up within an agricultural hilly watershed in northeastern Tunisia. EC measurements were collected throughout the growth cycle of three wheat crops, two of them located in adjacent fields on opposite hillslopes, and the third one located in a flat field. We considered four gap-filling methods: the REddyProc method, the linear regression between LE and net radiation (Rn), the multi-linear regression of LE against the other energy fluxes, and the use of evaporative fraction (EF). Regardless of the method, the splitting of the LE time series did not impact the gap-filling rate, and it might improve the accuracies on LE retrievals in some cases. Regardless of the method, the obtained accuracies on LE estimates after gap filling were close to instrumental accuracies, and they were comparable to those reported in previous studies over flat and mountainous terrains. Overall, REddyProc was the most appropriate method, for both gap-filling rate and retrieval accuracy. Thus, it seems possible to conduct gap filling for LE time series collected over hilly crop fields, provided the LE time series are split beforehand on the basis of upslope-downslope winds. 
Future works should address consecutive vegetation growth cycles for a larger panel of conditions in terms of climate, vegetation, and water status.
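The simplest of the four gap-filling methods, linear regression of LE against Rn, combined with the upslope/downslope splitting, can be sketched as follows (synthetic fluxes; the slope contrast between wind classes is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic half-hourly records: net radiation Rn and latent heat flux LE,
# with a different LE/Rn slope for upslope vs downslope winds.
n = 500
rn = rng.uniform(0.0, 600.0, size=n)                 # W/m2
upslope = rng.random(n) < 0.5
le = np.where(upslope, 0.55, 0.35) * rn + rng.normal(scale=15.0, size=n)
gap = rng.random(n) < 0.3                            # 30% of records missing

def fill_by_regression(rn, le, gap, mask):
    """Fit LE = a*Rn + b on the valid records of one wind class and
    predict LE at that class's gaps."""
    valid = mask & ~gap
    a, b = np.polyfit(rn[valid], le[valid], 1)
    filled = le.copy()
    filled[mask & gap] = a * rn[mask & gap] + b
    return filled

# Split the series by wind class before gap filling
filled = fill_by_regression(rn, le, gap, upslope)
filled = fill_by_regression(rn, filled, gap, ~upslope)
rmse_split = float(np.sqrt(np.mean((filled[gap] - le[gap]) ** 2)))

# Pooled fit for comparison (no upslope/downslope splitting)
a, b = np.polyfit(rn[~gap], le[~gap], 1)
pooled = le.copy()
pooled[gap] = a * rn[gap] + b
rmse_pooled = float(np.sqrt(np.mean((pooled[gap] - le[gap]) ** 2)))
```

When the two wind classes really do have different LE/Rn relationships, the split fits recover each slope and the pooled fit averages them, which is the error mode the pre-splitting is designed to avoid.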
Hyperspectral sensing of heavy metals in soil and vegetation: Feasibility and challenges
NASA Astrophysics Data System (ADS)
Wang, Fenghe; Gao, Jay; Zha, Yong
2018-02-01
Remote sensing of heavy metal contamination of soils has been widely studied. These studies concentrate heavily on the hyperspectral reflectance of typical metals in soils and in plants measured either in situ or in the laboratory. The most frequently used wavebands lie within the visible-near infrared portion of the spectrum, especially the red edge. In comparison, mid- and far-infrared wavelengths are used far less frequently. Hyperspectral data are optimized to suppress noise and enhance the signal of the targeted metals through spectral derivatives and vegetation indexing. It is found that only subtle disparity exists in spectral responses for some metals at a sufficiently high content level. Not all metals have their own unique spectral response. Their detection has to rely on their co-variation with the spectrally responsive metals or organic matter in the soils. The closeness of the correlation dictates the accuracy of prediction. Without any theoretical grounding, this correlation is site-specific. Various analytical methods, including stepwise multi-linear regression, partial least squares regression, and neural networks, have been used to model metal content level from the identified spectrally sensitive bands and/or their transformed indices. Both the model and the explanatory variables vary with the metal under detection and the area from which in situ samples are collected. Despite the amply demonstrated feasibility of estimating several metals by a large number of authors, only a few have succeeded in mapping the spatial distribution of metals from HyMAP, HJ-1A and Hyperion images to a satisfactory accuracy using complex algorithms and after taking environmental variables into account. The large number of reported failures testifies to the difficulty of detecting heavy metals in soils and plants, especially when their concentration level is low. 
The reasons or factors responsible for the success or failure have not been systematically analyzed, including the minimal spectral resolution required.
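As a hedged illustration of the multilinear regression step this abstract describes, the sketch below fits metal content to a few spectral features. All band features, coefficients, and data are synthetic assumptions, not the authors' values.

```python
import numpy as np

# Hypothetical spectral features as predictors of soil metal content.
rng = np.random.default_rng(0)
n = 50
red_edge_slope = rng.uniform(0.1, 0.9, n)   # first-derivative feature near the red edge
ndvi = rng.uniform(0.2, 0.8, n)             # a vegetation index
organic_matter = rng.uniform(1.0, 5.0, n)   # co-varying soil property (%)

# Synthetic site-specific relation: metal co-varies with spectrally responsive proxies.
metal = 10.0 + 4.0 * red_edge_slope - 2.5 * ndvi + 1.2 * organic_matter

# Multilinear regression: solve for the intercept and per-feature coefficients.
X = np.column_stack([np.ones(n), red_edge_slope, ndvi, organic_matter])
coef, *_ = np.linalg.lstsq(X, metal, rcond=None)
```

With noiseless synthetic data the coefficients are recovered exactly; with field data, stepwise selection and validation would be needed, consistent with the site-specificity caveat above.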
Contribution of bacteria-like particles to PM2.5 aerosol in urban and rural environments
NASA Astrophysics Data System (ADS)
Wolf, R.; El-Haddad, I.; Slowik, J. G.; Dällenbach, K.; Bruns, E.; Vasilescu, J.; Baltensperger, U.; Prévôt, A. S. H.
2017-07-01
We report highly time-resolved estimates of airborne bacteria-like particle concentrations in ambient aerosol using an Aerodyne aerosol mass spectrometer (AMS). AMS measurements with a newly developed PM2.5 aerodynamic lens and the standard PM1 lens were performed at an urban background site (Zurich) and at a rural site (Payerne) in Switzerland. Positive matrix factorization using the multilinear engine (ME-2) implementation was used to estimate the contribution of bacteria-like particles to non-refractory organic aerosol. The success of the method was evaluated by a size-resolved analysis of the organic mass and the analysis of single-particle mass spectra, which were detected with a light scattering system integrated into the AMS. Use of the PM2.5 aerodynamic lens increased measured bacteria-like concentrations, supporting the analysis method. However, at all sites, the low concentrations of this component suggest that airborne bacteria constitute a minor fraction of non-refractory PM2.5 organic aerosol mass. Estimated average mass concentrations were below 0.1 μg/m3 and relative contributions were lower than 2% at both sites. During rainfall periods, concentrations of the bacteria-like component increased considerably, reaching a short-term maximum of approximately 2 μg/m3 at the Payerne site in summer.
Adams, Joost; Verbeek, Hilde; Zwakhalen, Sandra M G
2017-01-01
The shift in nursing home care for patients with dementia from traditional task-driven environments towards patient-centered small-scale environments has implications for nursing practice. Information about its implications for nursing staff is lacking, and only a few studies have addressed staff perceptions. We sought to explore staff perceptions of required skills and to determine differences in job satisfaction, motivation, and job characteristics of staff working in both care settings. A secondary data analysis was conducted. The data source used was drawn from a larger study testing the effects of small-scale living (Verbeek et al., 2009). Nursing staff working on a permanent basis and who were directly involved in care were eligible to participate in the study. Data on job satisfaction, motivation, and job characteristics of nursing staff working in typical small-scale and traditional care environments were derived using a questionnaire. Data were analyzed using descriptive statistics. Differences between nursing staff job satisfaction, motivation, and job characteristics were tested using multilinear regression analysis. In total, 138 staff members were included (81 staff members working in traditional nursing home wards and 57 staff members working in small-scale nursing home wards). The findings showed that in typical small-scale nursing homes, job satisfaction and job motivation were significantly higher compared to those in typical traditional nursing homes. Job autonomy and social support were also significantly higher, while job demands were significantly lower in these small-scale nursing homes. Social support was found to be the most significant predictor of job motivation and job satisfaction in both types of typical nursing homes. Nursing staff working in traditional care environments more often expressed the intention to switch to small-scale environments. 
Based on the findings of this study, it can be concluded that nursing home environments differ substantially in experienced job satisfaction and job motivation. To enable a balanced work environment for nursing staff, a clear understanding of the relation between living environments and experienced job satisfaction among nursing staff is required. Since social support seems to be one of the key contributors to a supportive, beneficial work climate, managers should focus on enabling it in daily nursing home care. © 2016 Sigma Theta Tau International.
Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P
2015-05-22
A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available; (b) the retention times estimated using the Nikitas-Pappa approach; (c) the stepwise approximation; and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented in two different platforms: a fitting platform and an optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that, using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
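A minimal numerical sketch of method (d), the trapezoid-rule integration of the fundamental gradient-elution equation, assuming a linear-solvent-strength retention model and hypothetical parameter values that are not from the paper:

```python
import math

# Assumed LSS model (an assumption, not the paper's): ln k = ln k0 - S * phi.
t0 = 1.0                  # column dead time (min)
k0, S = 50.0, 10.0        # hypothetical retention parameters
phi0, slope = 0.2, 0.02   # linear gradient: phi(t) = phi0 + slope * t

def k(t):
    """Retention factor under the mobile-phase composition at time t."""
    return k0 * math.exp(-S * (phi0 + slope * t))

def retention_time(dt=1e-4):
    """Trapezoid-rule solution of: integral_0^(tR - t0) dt / k(t) = t0."""
    acc, t = 0.0, 0.0
    while acc < t0:
        acc += dt * 0.5 * (1.0 / k(t) + 1.0 / k(t + dt))  # trapezoid step
        t += dt
    return t + t0  # add the dead time for the final transit

tr = retention_time()
```

For these parameters the analytical LSS solution gives tR ≈ 5.28 min, which the trapezoid approximation reproduces to within the step size.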
Ying, Qi; Feng, Miao; Song, Danlin; Wu, Li; Hu, Jianlin; Zhang, Hongliang; Kleeman, Michael J; Li, Xinghua
2018-05-15
Contributions to 15 trace elements in airborne particulate matter with aerodynamic diameters <2.5 μm (PM2.5) in China from five major source sectors (industrial sources, residential sources, transportation, power generation and windblown dust) were determined using a source-oriented Community Multiscale Air Quality (CMAQ) model. Using emission factors from the composite speciation profiles in US EPA's SPECIATE database for the five sources leads to relatively poor model performance at an urban site in Beijing. Improved predictions of the trace elements are obtained by using adjusted emission factors derived from a robust multilinear regression of the CMAQ-predicted primary source contributions against observations at the urban site. Good correlations between predictions and observations are obtained for most elements studied, with R>0.5, except for the crustal elements Al, Si and Ca, particularly in spring. Predicted annual and seasonal average concentrations of Mn, Fe, Zn and Pb in Nanjing and Chengdu are also consistently improved using the adjusted emission factors. The annual average concentration of Fe is as high as 2.0 μg/m3, with large contributions from power generation and transportation. The annual average concentration of Pb reaches 300-500 ng/m3 in vast areas, mainly from residential activities, transportation and power generation. The impact of high concentrations of Fe on secondary sulfate formation, and of Pb on human health, should be evaluated carefully in future studies. Copyright © 2017 Elsevier B.V. All rights reserved.
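The emission-factor adjustment step can be sketched as a regression of observations on predicted source contributions. Here ordinary least squares stands in for the robust multilinear regression used in the study, and all sources and numbers are synthetic:

```python
import numpy as np

# Synthetic predicted daily source contributions (5 sources) and observations.
rng = np.random.default_rng(1)
n_days, n_sources = 120, 5
C = rng.uniform(0.5, 5.0, (n_days, n_sources))     # model-predicted contributions
true_adjust = np.array([1.5, 0.8, 1.0, 2.0, 0.5])  # "true" emission-factor scalings
obs = C @ true_adjust                              # observed concentrations (noiseless)

# Regress observations on predicted contributions to recover adjustment factors.
adjust, *_ = np.linalg.lstsq(C, obs, rcond=None)
```

Each fitted coefficient rescales one sector's emission factor; a robust estimator would additionally down-weight outlier days.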
Hammer, Jort; Haftka, Joris J-H; Scherpenisse, Peter; Hermens, Joop L M; de Voogt, Pim W P
2017-02-01
To predict the fate and potential effects of organic contaminants, information about their hydrophobicity is required. However, common parameters to describe the hydrophobicity of organic compounds (e.g., the octanol-water partition constant [KOW]) proved to be inadequate for ionic and nonionic surfactants because of their surface-active properties. As an alternative approach to determine their hydrophobicity, the aim of the present study was therefore to measure the retention of a wide range of surfactants on a C18 stationary phase. Capacity factors in pure water (k'0) increased linearly with increasing number of carbon atoms in the surfactant structure. Fragment contribution values were determined for each structural unit with multilinear regression, and the results were consistent with the expected influence of these fragments on the hydrophobicity of surfactants. Capacity factors of reference compounds and log KOW values from the literature were used to estimate log KOW values for surfactants (log KOW,HPLC). These log KOW,HPLC values were also compared to log KOW values calculated with 4 computational programs: KOWWIN, Marvin calculator, SPARC, and COSMOThermX. In conclusion, capacity factors from a C18 stationary phase are found to better reflect the hydrophobicity of surfactants than their KOW values. Environ Toxicol Chem 2017;36:329-336. © 2016 The Authors. Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc. on behalf of SETAC.
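A hedged sketch of the fragment-contribution fit: log capacity factors are modeled as sums of per-fragment increments and solved by multilinear regression. The fragment set and increment values below are hypothetical, not the paper's:

```python
import numpy as np

# Rows: hypothetical surfactants; columns: counts of CH2 units, ethoxylate (EO)
# units, and a sulfate head group (0/1). All values are made up for illustration.
fragments = np.array([
    [10, 0, 1],
    [12, 0, 1],
    [12, 4, 0],
    [14, 6, 0],
    [16, 8, 0],
    [14, 0, 1],
], dtype=float)
increments_true = np.array([0.5, -0.1, -1.2])  # hypothetical log k'0 increments
log_k0 = fragments @ increments_true           # synthetic log capacity factors

# Multilinear regression recovers the per-fragment contribution values.
increments, *_ = np.linalg.lstsq(fragments, log_k0, rcond=None)
```

The positive CH2 increment mirrors the reported linear increase of k'0 with carbon number; hydrophilic fragments carry negative increments.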
Protein intake and lean tissue mass retention following bariatric surgery.
Moizé, Violeta; Andreu, Alba; Rodríguez, Lucía; Flores, Lilliam; Ibarzabal, Ainitze; Lacy, Antonio; Jiménez, Amanda; Vidal, Josep
2013-08-01
Since current protein intake (PI) recommendations for the bariatric surgery (BS) patient are not supported by conclusive evidence, we aimed to evaluate the relationship between PI and lean tissue mass (LTM) loss following BS. Observational study including patients undergoing gastric bypass (GBP; n = 25) or sleeve gastrectomy (SG; n = 25). Dietary advice and daily PI were assessed prior to, and at 2 and 6 weeks and 4, 8, and 12 months after surgery. Body composition was assessed by dual energy X-ray absorptiometry (DXA). LTM loss as a percent of weight loss (%LTM loss) at 4 and 12 months after surgery were the main outcome variables. A PI ≥ 60 g/d was associated with lower %LTM loss at 4 months (p = 0.030) and 12 months (p = 0.013). Similar results were obtained when a PI ≥ 1.1 g/kg of ideal body weight (IBW)/d was considered. Multilinear regression showed that the only independent predictor of %LTM loss at 4 months was PI (expressed as g/kg IBW/d) (OR: -0.376, p = 0.017), whereas PI (OR: -0.468, p = 0.001) and surgical technique (OR: 0.399, p = 0.006) predicted 12-month %LTM loss. Our data provide supportive evidence for the PI goals of >60 g/d or 1.1 g/kg IBW/d as being associated with better LTM preservation in the BS patient. Copyright © 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Changyu; Chen, Haishan; Sun, Shanlei
2018-04-01
Soil enthalpy (H) contains the combined effects of both soil moisture (w) and soil temperature (T) in the land surface hydrothermal process. In this study, the sensitivities of H to w and T are investigated using the multi-linear regression method. Results indicate that T generally makes positive contributions to H, while w exhibits different (positive or negative) impacts due to soil ice effects. For example, w negatively contributes to H if soil contains more ice; however, after soil ice melts, w exerts positive contributions. In particular, due to lower w interannual variabilities in the deep soil layer (i.e., the fifth layer), H is more sensitive to T than to w. Moreover, to compare the potential capabilities of H, w and T in precipitation (P) prediction, the Huanghe-Huaihe Basin (HHB) and Southeast China (SEC), with similar sensitivities of H to w and T, are selected. Analyses show that, despite similar spatial distributions of H-P and T-P correlation coefficients, the former values are always higher than the latter ones. Furthermore, H provides the most effective signals for P prediction over HHB and SEC, i.e., a significant leading correlation between May H and early summer (June) P. In summary, H, which integrates the effects of T and w as an independent variable, has greater capabilities in monitoring land surface heating and improving seasonal P prediction relative to individual land surface factors (e.g., T and w).
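In the same spirit, the sensitivity comparison can be sketched with standardized multilinear regression coefficients. The linear enthalpy relation and all numbers below are synthetic assumptions, not the study's data:

```python
import numpy as np

# Synthetic ice-free soil layer: H increases with both w and T.
rng = np.random.default_rng(2)
n = 200
w = rng.uniform(0.1, 0.4, n)            # volumetric soil moisture
T = rng.uniform(270.0, 300.0, n)        # soil temperature (K)
H = 2.0e6 * w + 1.5e5 * (T - 273.15)    # assumed enthalpy relation (J m-3)

# Multi-linear regression of H on w and T.
X = np.column_stack([np.ones(n), w, T])
b, *_ = np.linalg.lstsq(X, H, rcond=None)

# Standardized (beta) coefficients compare sensitivities on a common scale.
beta_w = b[1] * w.std() / H.std()
beta_T = b[2] * T.std() / H.std()
```

Comparing beta_w and beta_T (rather than the raw coefficients) is what makes the "more sensitive to T than to w" statement meaningful across variables with different units and variances.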
Robust control of systems with real parameter uncertainty and unmodelled dynamics
NASA Technical Reports Server (NTRS)
Chang, Bor-Chin; Fischl, Robert
1991-01-01
During this research period we made significant progress in the four proposed areas: (1) design of robust controllers via H∞ optimization; (2) design of robust controllers via mixed H2/H∞ optimization; (3) M-delta structure and robust stability analysis for structured uncertainties; and (4) a study on the controllability and observability of the perturbed plant. It is now well known that the two-Riccati-equation solution to the H∞ control problem can be used to characterize all possible stabilizing optimal or suboptimal H∞ controllers if the optimal H∞ norm, or gamma (an upper bound of a suboptimal H∞ norm), is given. In this research, we discovered some useful properties of these H∞ Riccati solutions. Among them, the most prominent is that the spectral radius of the product of the two Riccati solutions is a continuous, nonincreasing, convex function of gamma in the domain of interest. Based on these properties, quadratically convergent algorithms were developed to compute the optimal H∞ norm. We also set out a detailed procedure for applying the H∞ theory to robust control systems design. The desire to design controllers with H∞ robustness but H2 performance has recently led to the mixed H2/H∞ control problem formulation. The mixed H2/H∞ problem has drawn the attention of many investigators; however, solutions are available only for special cases of this problem. We formulated a relatively realistic control problem with an H2 performance index and an H∞ robustness constraint as a more general mixed H2/H∞ problem. No optimal solution is yet available for this more general mixed H2/H∞ problem.
Although the optimal solution for this mixed H2/H∞ control problem has not yet been found, we proposed a design approach which, through proper choice of the available design parameters, can be used to influence both robustness and performance. For a large class of linear time-invariant systems with real parametric perturbations, the coefficient vector of the characteristic polynomial is a multilinear function of the real parameter vector. Based on this multilinear mapping relationship, together with recent developments for polytopic polynomials and the parameter-domain partition technique, we proposed an iterative algorithm for computing the real structured singular value.
Shi, Guo-Liang; Liu, Gui-Rong; Tian, Ying-Ze; Zhou, Xiao-Yu; Peng, Xing; Feng, Yin-Chang
2014-06-01
PM10 and PM2.5 samples were simultaneously collected during a period which covered the Chinese New Year's (CNY) Festival. The concentrations of particulate matter (PM) and 16 polycyclic aromatic hydrocarbons (PAHs) were measured. The possible source contributions and toxicity risks were estimated for Festival and non-Festival periods. According to the diagnostic ratios and Multilinear Engine 2 (ME2), three sources were identified and their contributions were calculated: vehicle emission (48.97% for PM10, 53.56% for PM2.5), biomass & coal combustion (36.83% for PM10, 28.76% for PM2.5), and cook emission (22.29% for PM10, 27.23% for PM2.5). An interesting result was found: although the PAHs are not directly from the fireworks display, they were still indirectly influenced by the biomass combustion associated with the fireworks display. Additionally, toxicity risks of different sources were estimated by Multilinear Engine 2-BaP equivalent (ME2-BaPE): vehicle emission (54.01% for PM10, 55.42% for PM2.5), cook emission (25.59% for PM10, 29.05% for PM2.5), and biomass & coal combustion source (20.90% for PM10, 14.28% for PM2.5). It is worth noting that the toxicity contribution of cook emission was considerable in the Festival period. The findings provide useful information for protecting urban human health, as well as for developing effective air control strategies during special short-term anthropogenic activity events. Copyright © 2014 Elsevier B.V. All rights reserved.
Liu, Gui-Rong; Shi, Guo-Liang; Tian, Ying-Ze; Wang, Yi-Nan; Zhang, Cai-Yan; Feng, Yin-Chang
2015-01-01
An improved physically constrained source apportionment (PCSA) technology using the Multilinear Engine 2-species ratios (ME2-SR) method was proposed and applied to quantify the sources of PM10- and PM2.5-associated polycyclic aromatic hydrocarbons (PAHs) from Chengdu in winter time. Sixteen priority PAH compounds were detected, with mean ΣPAH concentrations (sum of 16 PAHs) ranging from 70.65 ng/m3 to 209.58 ng/m3 and from 59.17 ng/m3 to 170.64 ng/m3 for the PM10 and PM2.5 samples, respectively. The ME2-SR and positive matrix factorization (PMF) models were employed to estimate the source contributions of PAHs, and these estimates agreed with the experimental results. For the PMF model, the highest contributor to the ΣPAHs was vehicular emission (81.69% for PM10, 82.06% for PM2.5), followed by coal combustion (12.68%, 12.11%), wood combustion (5.65%, 4.45%) and oil combustion (0.72%, 0.88%). For the ME2-SR method, the highest contributions were from diesel (43.19% for PM10, 47.17% for PM2.5) and gasoline exhaust (34.94%, 32.44%), followed by wood combustion (8.79%, 6.37%), coal combustion (12.46%, 12.37%) and oil combustion (0.80%, 1.22%). However, the PAH ratios calculated for the factors extracted by ME2-SR were closer to the values from actual source profiles, implying that the results obtained from ME2-SR might be physically constrained and satisfactory. Copyright © 2014 Elsevier B.V. All rights reserved.
García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto
2014-07-05
The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on the two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms. Thus, it is unique software that computes these tensor-based indices. These descriptors establish relations for two, three, and four atoms by using several (dis-)similarity metrics or multimetrics, matrix transformations, cutoffs, local calculations and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of the chemical structures and the calculation of the atomic properties. This software is composed of a user-friendly desktop interface and an Application Programming Interface (API) library. The former was created to simplify the configuration of the different options of the MDs, whereas the library was designed to allow its easy integration into other software for chemoinformatics applications. This program provides functionalities for data cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs through the use of all available processors in current computers. Complexity analyses of the main algorithms demonstrate that they were implemented efficiently with respect to their trivial implementations. Lastly, the performance tests reveal that this software behaves suitably as the number of processors is increased. Therefore, the QuBiLS-MIDAS software constitutes a useful application for the computation of the molecular indices based on N-linear algebraic maps, and it can be used freely to perform chemoinformatics studies. Copyright © 2014 Wiley Periodicals, Inc.
Nakatani, S; Garcia, M J; Firstenberg, M S; Rodriguez, L; Grimm, R A; Greenberg, N L; McCarthy, P M; Vandervoort, P M; Thomas, J D
1999-09-01
The study assessed whether hemodynamic parameters of left atrial (LA) systolic function could be estimated noninvasively using Doppler echocardiography. Left atrial systolic function is an important aspect of cardiac function. Doppler echocardiography can measure changes in LA volume, but has not been shown to relate to hemodynamic parameters such as the maximal value of the first derivative of the pressure (LA dP/dt(max)). Eighteen patients in sinus rhythm were studied immediately before and after open heart surgery using simultaneous LA pressure measurements and intraoperative transesophageal echocardiography. Left atrial pressure was measured with a micromanometer catheter, and LA dP/dt(max) during atrial contraction was obtained. Transmitral and pulmonary venous flow were recorded by pulsed Doppler echocardiography. Peak velocity, mean acceleration and deceleration, and the time-velocity integral of each flow during atrial contraction were measured. The initial eight patients served as the study group to derive a multilinear regression equation to estimate LA dP/dt(max) from Doppler parameters, and the latter 10 patients served as the test group to validate the equation. A previously validated numeric model was used to confirm these results. In the study group, LA dP/dt(max) showed a linear relation with LA pressure before atrial contraction (r = 0.80, p < 0.005), confirming the presence of the Frank-Starling mechanism in the LA. Among transmitral flow parameters, mean acceleration showed the strongest correlation with LA dP/dt(max) (r = 0.78, p < 0.001). Among pulmonary venous flow parameters, no single parameter was sufficient to estimate LA dP/dt(max) with an r2 > 0.30.
By stepwise and multiple linear regression analysis, LA dP/dt(max) was best described as follows: LA dP/dt(max) = 0.1 M-AC + 1.8 P-V - 4.1; r = 0.88, p < 0.0001, where M-AC is the mean acceleration of transmitral flow and P-V is the peak velocity of pulmonary venous flow during atrial contraction. This equation was tested in the latter 10 patients of the test group. Predicted and measured LA dP/dt(max) correlated well (r = 0.90, p < 0.0001). Numerical simulation verified that this relationship held across a wide range of atrial elastance, ventricular relaxation and systolic function, with LA dP/dt(max) predicted by the above equation with r = 0.94. A combination of transmitral and pulmonary venous flow parameters can provide a hemodynamic assessment of LA systolic function.
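The reported study-group equation can be applied directly; note that the sign printed as "+/-" in the abstract is read here as "+", and the input values below are hypothetical Doppler readings, not patient data:

```python
def la_dpdt_max(mean_accel_transmitral, peak_vel_pulm_venous):
    """Estimate LA dP/dt(max) from the abstract's regression equation:
    0.1 * M-AC + 1.8 * P-V - 4.1, where M-AC is the mean transmitral
    acceleration and P-V the peak pulmonary venous velocity during
    atrial contraction."""
    return 0.1 * mean_accel_transmitral + 1.8 * peak_vel_pulm_venous - 4.1

est = la_dpdt_max(300.0, 25.0)  # hypothetical Doppler readings
```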
NASA Technical Reports Server (NTRS)
Nakatani, S.; Garcia, M. J.; Firstenberg, M. S.; Rodriguez, L.; Grimm, R. A.; Greenberg, N. L.; McCarthy, P. M.; Vandervoort, P. M.; Thomas, J. D.
1999-01-01
OBJECTIVES: The study assessed whether hemodynamic parameters of left atrial (LA) systolic function could be estimated noninvasively using Doppler echocardiography. BACKGROUND: Left atrial systolic function is an important aspect of cardiac function. Doppler echocardiography can measure changes in LA volume, but has not been shown to relate to hemodynamic parameters such as the maximal value of the first derivative of the pressure (LA dP/dt(max)). METHODS: Eighteen patients in sinus rhythm were studied immediately before and after open heart surgery using simultaneous LA pressure measurements and intraoperative transesophageal echocardiography. Left atrial pressure was measured with a micromanometer catheter, and LA dP/dt(max) during atrial contraction was obtained. Transmitral and pulmonary venous flow were recorded by pulsed Doppler echocardiography. Peak velocity, mean acceleration and deceleration, and the time-velocity integral of each flow during atrial contraction were measured. The initial eight patients served as the study group to derive a multilinear regression equation to estimate LA dP/dt(max) from Doppler parameters, and the latter 10 patients served as the test group to validate the equation. A previously validated numeric model was used to confirm these results. RESULTS: In the study group, LA dP/dt(max) showed a linear relation with LA pressure before atrial contraction (r = 0.80, p < 0.005), confirming the presence of the Frank-Starling mechanism in the LA. Among transmitral flow parameters, mean acceleration showed the strongest correlation with LA dP/dt(max) (r = 0.78, p < 0.001). Among pulmonary venous flow parameters, no single parameter was sufficient to estimate LA dP/dt(max) with an r2 > 0.30.
By stepwise and multiple linear regression analysis, LA dP/dt(max) was best described as follows: LA dP/dt(max) = 0.1 M-AC + 1.8 P-V - 4.1; r = 0.88, p < 0.0001, where M-AC is the mean acceleration of transmitral flow and P-V is the peak velocity of pulmonary venous flow during atrial contraction. This equation was tested in the latter 10 patients of the test group. Predicted and measured LA dP/dt(max) correlated well (r = 0.90, p < 0.0001). Numerical simulation verified that this relationship held across a wide range of atrial elastance, ventricular relaxation and systolic function, with LA dP/dt(max) predicted by the above equation with r = 0.94. CONCLUSIONS: A combination of transmitral and pulmonary venous flow parameters can provide a hemodynamic assessment of LA systolic function.
NASA Astrophysics Data System (ADS)
Piedrahita, Ricardo A.
The Denver Aerosol Sources and Health study (DASH) was a long-term study of the relationship between the variability in fine particulate mass and chemical constituents (PM2.5, particulate matter less than 2.5 μm) and adverse health effects such as cardio-respiratory illnesses and mortality. Daily filter samples were chemically analyzed for multiple species. We present findings based on 2.8 years of DASH data, from 2003 to 2005. Multilinear Engine 2 (ME-2), a receptor-based source apportionment model, was applied to the data to estimate source contributions to PM2.5 mass concentrations. This study relied on two different ME-2 models: (1) a 2-way model that closely reflects PMF-2; and (2) an enhanced model that used additional temporal and meteorological factors. The Coarse Rural Urban Sources and Health study (CRUSH) is a long-term study of the relationship between the variability in coarse particulate mass (PMcoarse, particulate matter between 2.5 and 10 μm) and adverse health effects such as cardio-respiratory illnesses, pre-term births, and mortality. Hourly mass concentrations of PMcoarse and fine particulate matter (PM2.5) are measured using tapered element oscillating microbalances (TEOMs) with Filter Dynamics Measurement Systems (FDMS), at two rural and two urban sites. We present findings based on nine months of mass concentration data, including temporal trends and non-parametric regression (NPR) results, which were used to characterize the wind speed and wind direction relationships that might point to sources. As part of CRUSH, a 1-year coarse- and fine-mode particulate matter filter sampling network will allow us to characterize the chemical composition of the particulate matter collected and perform spatial comparisons. This work describes the construction and validation testing of four dichotomous filter samplers for this purpose.
The use of dichotomous splitters with an approximate 2.5 μm cut point, coupled with a 10 μm cut-diameter inlet head, allows us to collect the separated size fractions that the collocated TEOMs collect continuously. Chemical analysis of the filters will include inorganic ions, organic compounds, EC, OC, and biological analyses. Side-by-side testing showed the cut diameters were in agreement with each other, and with a well-characterized virtual impactor lent to the group by the University of Southern California. Error propagation was performed and uncertainty results were similar to the observed standard deviations.
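The error-propagation step mentioned above can be sketched for a gravimetric mass concentration C = (m_f - m_i)/V, combining independent uncertainties in quadrature; all values are illustrative, not from the study:

```python
import math

# Illustrative filter weighing and volume values (not from the study).
m_i, m_f = 150.000, 150.250   # filter mass before/after sampling (mg)
V = 24.0                      # sampled air volume (m^3)
s_m = 0.005                   # balance uncertainty per weighing (mg)
s_V = 0.5                     # flow/volume uncertainty (m^3)

C = (m_f - m_i) / V           # mass concentration (mg/m^3)
# Two independent weighings contribute to the mass difference; relative
# uncertainties add in quadrature for a product/quotient expression.
s_C = C * math.sqrt(2 * (s_m / (m_f - m_i))**2 + (s_V / V)**2)
```

Comparing s_C against the standard deviation of replicate side-by-side samplers is the kind of check the abstract reports.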
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth (RAW) is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed over the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. Relevant literature has shown that inclusion of epistemic uncertainty i) induces notable variability in the point-estimate ranking and ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
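The point-estimate RAW defined above (risk metric with a component assumed failed, divided by the base-case risk) can be sketched for a toy risk model; the structure and numbers are illustrative only:

```python
# Toy risk metric: the system fails if A fails, or if B and C both fail.
def risk(pA, pB, pC):
    return 1.0 - (1.0 - pA) * (1.0 - pB * pC)

base = risk(0.01, 0.1, 0.2)          # base-case point estimate
raw_A = risk(1.0, 0.1, 0.2) / base   # RAW: component A assumed failed
raw_B = risk(0.01, 1.0, 0.2) / base  # RAW: component B assumed failed
```

The single point of failure (A) receives the largest RAW. Introducing epistemic uncertainty means the failure probabilities become distributions rather than points, which is the setting the ERAW extension addresses.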
Finite Element Study on Continuous Rotating versus Reciprocating Nickel-Titanium Instruments.
El-Anwar, Mohamed I; Yousief, Salah A; Kataia, Engy M; El-Wahab, Tarek M Abd
2016-01-01
In the present study, GTX and ProTaper, as continuous rotating endodontic files, were numerically compared with the WaveOne reciprocating file using finite element analysis, aiming at a low-cost, accurate, and trustworthy comparison, as well as at determining the effect of instrument design and manufacturing material on lifespan. Two 3D finite element models were especially prepared for this comparison. A commercial engineering CAD/CAM package was used to model the full detailed flute geometries of the instruments. Multi-linear materials were defined in the analysis by using real stress-strain data of NiTi and M-Wire. Non-linear static analysis was performed to simulate the instrument inside a root canal at a 45° angle in the apical portion, subjected to 0.3 N·cm torsion. The three simulations in this study showed that M-Wire is slightly more resistant to failure than conventional NiTi, while the two materials behave fairly similarly under severe locking conditions. For the same instrument geometry, M-Wire instruments may therefore have a longer lifespan than conventional NiTi ones, but under severe locking conditions both materials will fail similarly. Larger cross-sectional areas (a function of instrument taper) resisted failure better than smaller ones, while the cross-sectional shape and its cutting angles could affect instrument cutting efficiency.
Topology driven modeling: the IS metaphor.
Merelli, Emanuela; Pettini, Marco; Rasetti, Mario
In order to define a new method for analyzing the immune system within the realm of Big Data, we draw on the metaphor provided by an extension of Parisi's model, based on a mean field approach. The novelty is the multilinearity of the couplings in the configurational variables. This peculiarity allows us to compare the partition function [Formula: see text] with a particular functor of topological field theory (the generating function of the Betti numbers of the state manifold of the system), which contains the same global information about the system configurations and the data set representing them. The comparison between the Betti numbers of the model and the real Betti numbers obtained from the topological analysis of phenomenological data is expected to discover hidden n-ary relations among idiotypes and anti-idiotypes. The topological analysis of the data will select global features, reducible neither to a mere subgraph nor to a metric or vector space. How the immune system reacts, how it evolves, and how it responds to stimuli is the result of an interaction that took place among many entities constrained in specific configurations which are relational. Within this metaphor, the proposed method turns out to be a global topological application of the S[B] paradigm for modeling complex systems.
NASA Astrophysics Data System (ADS)
Hu, Leqian; Ma, Shuai; Yin, Chunling
2018-03-01
In this work, fluorescence spectroscopy combined with multi-way pattern recognition techniques was developed to determine the geographical origin of kudzu root and to detect and quantify adulterants in kudzu root. Excitation-emission matrix (EEM) spectra were obtained for 150 pure kudzu root samples of different geographical origins and 150 fake kudzu roots with different adulteration proportions by recording emission from 330 to 570 nm with excitation in the range of 320-480 nm. Multi-way principal components analysis (M-PCA) and multilinear partial least squares discriminant analysis (N-PLS-DA) methods were used to decompose the excitation-emission matrix datasets. The 150 pure kudzu root samples could be differentiated exactly according to their geographical origins by the M-PCA and N-PLS-DA models. For the adulterated kudzu root samples, N-PLS-DA gave better and more reliable classification results than the M-PCA model. The results obtained in this study indicate that EEM spectroscopy coupled with multi-way pattern recognition can be used as an easy, rapid and novel tool to distinguish the geographical origin of kudzu root and to detect adulterated kudzu root. This method is also suitable for determining the geographic origin, and detecting the adulteration, of other foodstuffs that produce fluorescence.
Multispectral Resource Sampler Workshop
NASA Technical Reports Server (NTRS)
1979-01-01
The utility of the multispectral resource sampler (MRS) was examined by users in the following disciplines: agriculture, atmospheric studies, engineering, forestry, geology, hydrology/oceanography, land use, and rangelands/soils. Modifications to the sensor design were recommended and the desired types of products and number of scenes required per month were indicated. The history, design, capabilities, and limitations of the MRS are discussed as well as the multilinear spectral array technology which it uses. Designed for small area inventory, the MRS can provide increased temporal, spectral, and spatial resolution, facilitate polarization measurement and atmospheric correction, and test onboard data compression techniques. The advantages of using it along with the thematic mapper are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvitis, Leonid
2009-01-01
An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ1, ..., λn) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.
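As an illustrative aside (our own toy example, not the paper's proof), such a simplex maximization is numerically easy whenever log(p) is concave. The polynomial below, the elementary symmetric polynomial e2, is multilinear with non-negative coefficients and H-stable, so log(e2) is concave on the positive orthant and, by symmetry, its maximum on the simplex sits at the barycenter:

```python
import numpy as np
from scipy.optimize import minimize

def e2(x):
    """Elementary symmetric polynomial of degree 2: multilinear, non-negative coefficients."""
    return x[0] * x[1] + x[1] * x[2] + x[0] * x[2]

# Maximize log(p) over the simplex {x >= 0, sum(x) = 1}. Because log(e2) is
# concave there, any local optimum found by SLSQP is the global maximum.
res = minimize(
    lambda x: -np.log(e2(x)),
    x0=np.array([0.5, 0.3, 0.2]),
    bounds=[(1e-9, 1.0)] * 3,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
    method="SLSQP",
)

# By symmetry the maximum is at (1/3, 1/3, 1/3), where e2 = 1/3.
print(res.x, e2(res.x))
```

For a general multilinear p with non-negative coefficients, log-concavity is exactly what the conjecture (proved via H-stability) supplies; without it a local optimizer gives no global guarantee.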
Cesari, Daniela; Amato, F; Pandolfi, M; Alastuey, A; Querol, X; Contini, D
2016-08-01
Source apportionment of aerosol is an important approach to investigate aerosol formation and transformation processes, as well as to assess appropriate mitigation strategies and to investigate causes of non-compliance with air quality standards (Directive 2008/50/CE). Receptor models (RMs) based on the chemical composition of aerosol measured at specific sites are a useful, and widely used, tool to perform source apportionment. However, an analysis of available studies in the scientific literature reveals heterogeneities in the approaches used, in terms of "working variables" such as the number of samples in the dataset and the number of chemical species used, as well as in the modeling tools used. In this work, an inter-comparison of PM10 source apportionment results obtained at three European measurement sites is presented, using two receptor models: principal component analysis coupled with multi-linear regression analysis (PCA-MLRA) and positive matrix factorization (PMF). The inter-comparison focuses on source identification, quantification of source contributions to PM10, robustness of the results, and how these are influenced by the number of chemical species available in the datasets. Results show very similar component/factor profiles identified by PCA and PMF, with some discrepancies in the number of factors. The PMF model appears to be more suitable than PCA for separating secondary sulfate and secondary nitrate, at least in the datasets analyzed. Further, some difficulties were observed with PCA in separating industrial and heavy oil combustion contributions. At all sites, the crustal contributions found with PCA were larger than those found with PMF, and the secondary inorganic aerosol contributions found by PCA were lower than those found by PMF. Site-dependent differences were also observed for traffic and marine contributions.
The inter-comparison of source apportionment performed on complete datasets (using the full range of available chemical species) and incomplete datasets (with a reduced number of chemical species) made it possible to investigate the sensitivity of source apportionment (SA) results to the working variables used in the RMs. Results show that the profiles and the contributions of the different sources calculated with PMF are comparable within the estimated uncertainties, indicating good stability and robustness of the PMF results. In contrast, PCA outputs are more sensitive to the chemical species present in the datasets: in PCA, the crustal contributions are higher, and the traffic contributions significantly lower, for the incomplete datasets.
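A minimal sketch of the PCA-MLRA idea on synthetic data (toy source profiles and concentrations of our own making, not the study's measurement datasets): PCA on the standardized species concentrations identifies the factors, and a multilinear regression of PM10 on the factor scores apportions the mass.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Two emission sources, each with a fixed chemical profile over 8 species,
# mixed in random daily amounts (all values synthetic).
profiles = rng.uniform(0.1, 1.0, size=(2, 8))           # source profiles
activity = rng.uniform(0.0, 10.0, size=(200, 2))        # daily source strengths
species = activity @ profiles + rng.normal(0, 0.05, (200, 8))
pm10 = activity.sum(axis=1) + rng.normal(0, 0.1, 200)   # total mass

# PCA on standardized species concentrations identifies the factors ...
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(species))

# ... and multilinear regression of PM10 on the factor scores apportions the mass.
mlra = LinearRegression().fit(scores, pm10)
r2 = mlra.score(scores, pm10)
print(f"R^2 of PM10 apportionment: {r2:.3f}")
```

The sensitivity discussed above corresponds to how `profiles` recovered this way shift when columns of `species` are dropped; PMF differs in constraining factors and contributions to be non-negative.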
2012-01-01
Background: Angiotensin receptor blockers (ARBs) are reported to provide direct protection to many organs by controlling inflammation and decreasing oxidant stress in patients without arteriosclerosis. This study aimed to evaluate (1) whether an ARB (candesartan) decreases values for inflammatory parameters in hypertensive patients with type 2 diabetes mellitus of long duration accompanied by arteriosclerosis and (2) whether there are any predictors of which patients would receive the benefits of organ protection by candesartan. Methods: We administered candesartan therapy (12 mg daily) for 6 months and evaluated whether there was improvement in the serum inflammatory parameters high-molecular-weight adiponectin (HMW-ADN), plasminogen activator inhibitor-1 (PAI-1), highly sensitive C-reactive protein (Hs-CRP), and vascular cell adhesion molecule-1 (VCAM-1), and in urinary 8-hydroxydeoxyguanosine (U-8-OHdG). We then analyzed the relationship between the degree of lowering of blood pressure and inflammatory factors, and the relationship between pulse pressure and inflammatory factors. Finally, we analyzed predictive factors in patients who received the protective benefit of candesartan. Results: After 6 months of treatment, significant improvements from baseline values were observed in all patients in HMW-ADN and PAI-1 but not in Hs-CRP, VCAM-1, and U-8-OHdG. Multilinear regression analysis was performed to determine which factors could best predict changes in HMW-ADN and PAI-1. Changes in blood pressure were not significant predictors of changes in metabolic factors in all patients. We found that the group with baseline pulse pressure <60 mmHg had improved HMW-ADN and PAI-1 values compared with the group with baseline pulse pressure ≥60 mmHg. These results suggest that pulse pressure at baseline could be predictive of changes in HMW-ADN and PAI-1.
Conclusions: Candesartan improved inflammatory parameters (HMW-ADN and PAI-1) in hypertensive patients with type 2 diabetes mellitus of long duration, independent of blood pressure changes. Patients with pulse pressure <60 mmHg might receive protective benefits from candesartan. Trial registration: UMIN000007921 PMID:23034088
NASA Astrophysics Data System (ADS)
Salam El Vilaly, Mohamed Abd; El Vilaly, Audra; Mahe, Gil
2017-04-01
Formerly a country of nomadism par excellence, Mauritania has experienced since its independence in 1960 a spectacular sedentarisation of its nomadic population. In fact, nomads decreased from 75% of the total population in 1965 to 12% in 1988, and to just 6% in 2000. This rapid and unprecedented sedentarisation, particularly in Southern Mauritania, can be explained by several factors, including the devastating droughts of the 1970s and 1980s, as well as the turbulent transformation of Mauritania's political economy. Together, these factors have destabilized rural livelihoods and accelerated land degradation, livestock loss, urbanization, and conflict between farmers and herders over natural resources and water access across the area, resulting in unprecedented inter-regional migration. The aim of this 40-year study is not to review in detail all the factors driving inter-regional migration in Southern Mauritania, but instead to scrutinize the relationship between vegetation productivity, land cover changes, rainfall trends, and dynamic spatial demographic shifts from 1971 to 2015. In this regard, we propose an advanced assessment approach that integrates demographic information, climatological data, and multi-sensor Normalized Difference Vegetation Index (NDVI) time series data from 1981 to 2015 at 5.6 km resolution to characterize the inter-regional migration movements in Southern Mauritania. A multi-linear regression analysis was conducted to examine the extent to which the inter-regional migration movements are controlled by both climate and environmental changes. The demographic data show that Southern Mauritania's population grew less rapidly between 1977 and 1988 than between 1988 and 2000. The annual growth rate recorded in 2000 was 2.9%, compared to 2.5% in 1988 and 2.29% in 1960. Moreover, the population sedentarized dramatically, reaching a rate of 95.2% in 2000 compared to 84.4% in 1988.
The results also show distinctive interactions between vegetation dynamics, rainfall variations, and inter-regional migration during the last four decades: between 1977 and 1988, changes in rainfall had the greatest impact on migration. Keywords: migration, climate change, environmental migrants
Zhang, Guang-Hui; Lu, Ye; Ji, Bu-Qiang; Ren, Jing-Chao; Sun, Pin; Ding, Shibin; Liao, Xiaoling; Liao, Kaiju; Liu, Jinyi; Cao, Jia; Lan, Qing; Rothman, Nathaniel; Xia, Zhao-Lin
2017-12-01
Global DNA hypomethylation is commonly observed in benzene-exposed workers, but the underlying mechanisms remain unclear. We sought to discover the relationships among reduced white blood cell (WBC) counts, micronuclear (MN) frequency, and global DNA methylation to determine whether there were associations with mutations in DNMT3A/3B. Therefore, we recruited 410 shoe factory workers and 102 controls from Wenzhou in Zhenjiang Province. A Methylated DNA Quantification Kit was used to quantify global DNA methylation, and single nucleotide polymorphisms (SNPs) in DNMT3A (rs36012910, rs1550117, and R882) and DNMT3B (rs1569686, rs2424909, and rs2424913) were identified using the restriction fragment length polymorphism method. A multilinear regression analysis demonstrated that the benzene-exposed workers experienced significant global DNA hypomethylation compared with the controls (β = -0.51, 95% CI: -0.69 to -0.32, P < 0.001). The DNMT3A R882 mutant allele (R882H and R882C) (β = -0.25, 95% CI: -0.54 to 0.04, P = 0.094) and the DNMT3B rs2424909 GG allele (β = -0.37, 95% CI: -0.70 to -0.03, P = 0.031) were significantly associated with global DNA hypomethylation compared with the wild-type genotype after adjusting for confounding factors. Furthermore, the MN frequency in the R882 mutant allele (R882H and R882C) (FR = 1.18, 95% CI: 0.99 to 1.40, P = 0.054) was higher than that of the wild-type. The results imply that hypomethylation occurs due to benzene exposure and that mutations in DNMTs are significantly associated with global DNA methylation, which might have influenced the induction of MN following exposure to benzene. Environ. Mol. Mutagen. 58:678-687, 2017. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Pianalto, Frederick S.
Coccidioidomycosis (Valley Fever) is an environmentally-mediated respiratory disease caused by the inhalation of airborne spores from the fungi Coccidioides spp. The fungi reside in arid and semi-arid soils of the Americas. The disease has increased epidemically in Arizona and other areas within the last two decades. Despite this increase, the ecology of the fungi remains obscure, and environmental antecedents of the disease are largely unstudied. Two sources of soil disturbance, hypothesized to affect soil ecology and initiate spore dissemination, are investigated. Nocturnal desert rodents interact substantially with the soil substrate. Rodents are hypothesized to act as a reservoir of coccidioidomycosis, a mediator of soil properties, and a disseminator of fungal spores. Rodent distributions are poorly mapped for the study area. We build automated multi-linear regression models and decision tree models for ten rodent species using rodent trapping data from the Organ Pipe Cactus National Monument (ORPI) in southwest Arizona with a combination of surface temperature, a vegetation index and its texture, and a suite of topographic rasters. Surface temperature, derived from Landsat TM thermal images, is the most widely selected predictive variable in both automated methods. Construction-related soil disturbance (e.g. road construction, trenching, land stripping, and earthmoving) is a significant source of fugitive dust, which decreases air quality and may carry soil pathogens. Annual differencing of Landsat Thematic Mapper (TM) mid-infrared images is used to create change images, and thresholded change areas are associated with coordinates of local dust inspections. The output metric identifies source areas of soil disturbance, and it estimates the annual amount of dust-producing surface area for eastern Pima County spanning 1994 through 2009. 
Spatially explicit construction-related soil disturbance and rodent abundance data are compared with coccidioidomycosis incidence data using rank order correlation and regression methods. Construction-related soil disturbance correlates strongly with annual county-wide incidence. It also correlates with Tucson periphery incidence aggregated to zip codes. Abundance values for the desert pocket mouse (Chaetodipus penicillatus), derived from a soil-adjusted vegetation index, aspect (northing) and thermal radiance, correlate with total study period incidence aggregated to zip code.
Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!
NASA Astrophysics Data System (ADS)
Nutku, Yavuz
2003-07-01
Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.
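For context, Nambu's original trilinear bracket on a three-dimensional phase space (a standard definition, restated here rather than quoted from this abstract) is the Jacobian determinant

```latex
\{f_1, f_2, f_3\} \;=\; \frac{\partial(f_1, f_2, f_3)}{\partial(x, y, z)}
\;=\; \epsilon_{ijk}\,\partial_i f_1\,\partial_j f_2\,\partial_k f_3 ,
```

with time evolution generated by two Hamiltonians, \(\dot{f} = \{f, H_1, H_2\}\). Holding one Hamiltonian fixed induces an ordinary Poisson bracket in which that Hamiltonian is a Casimir, which is the sense in which these brackets are maximally degenerate.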
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
MLEP-Fail calibration for 1/8 inch thick cast plate of 17-4 steel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo
The purpose of the work presented in this memo was to calibrate the Sierra material model Multilinear Elastic-Plastic Hardening Model with Failure (MLEP-Fail) for 1/8 inch thick cast plate of 17-4 steel. The calibration approach is essentially the same as that recently used in a previous memo, using data from smooth and notched tensile specimens. The notched specimens were manufactured with three notch radii, R = 1/8, 1/32, and 1/64 inches. The dimensions of the smooth and notched specimens are given in the prints in Appendix A. Two cast plates, Plate 3 and Plate 4, with nominally identical properties were considered.
Investigating accident causation through information network modelling.
Griffin, T G C; Young, M S; Stanton, N A
2010-02-01
Management of risk in complex domains such as aviation relies heavily on post-event investigations, requiring complex approaches to fully understand the integration of multi-causal, multi-agent and multi-linear accident sequences. The Event Analysis of Systemic Teamwork methodology (EAST; Stanton et al. 2008) offers such an approach based on network models. In this paper, we apply EAST to a well-known aviation accident case study, highlighting communication between agents as a central theme and investigating the potential for finding agents who were key to the accident. Ultimately, this work aims to develop a new model based on distributed situation awareness (DSA) to demonstrate that the risk inherent in a complex system is dependent on the information flowing within it. By identifying key agents and information elements, we can propose proactive design strategies to optimize the flow of information and help work towards avoiding aviation accidents. Statement of Relevance: This paper introduces a novel application of a holistic methodology for understanding aviation accidents. Furthermore, it introduces an ongoing project developing a nonlinear and prospective method that centralises distributed situation awareness and communication as themes. The relevance of the findings is discussed in the context of current ergonomic and aviation issues of design, training and human-system interaction.
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
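The core idea can be sketched with a truncated higher-order SVD in plain NumPy (a serial toy version, not the distributed implementation described above):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibers of T become columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: one factor matrix per mode, plus a small core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    core = T
    for n, Un in enumerate(U):
        # Contract mode n of the core with Un^T.
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

# Build a tensor with exact multilinear rank (2, 2, 2) so the truncated
# decomposition is lossless up to round-off.
rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(10, 2)) for _ in range(3))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

core, U = hosvd(T, (2, 2, 2))
That = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

ratio = T.size / (core.size + sum(u.size for u in U))
err = np.linalg.norm(T - That) / np.linalg.norm(T)
print(f"compression ratio {ratio:.1f}x, relative error {err:.2e}")
```

On real simulation data the multilinear rank is only approximately low, so the ranks are chosen to meet an error tolerance, which is where the large compression ratios quoted above come from.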
Three-dimensional earthquake analysis of roller-compacted concrete dams
NASA Astrophysics Data System (ADS)
Kartal, M. E.
2012-07-01
The effect of ground motion on a roller-compacted concrete (RCC) dam in an earthquake zone should be taken into account for the most critical conditions. This study presents the three-dimensional earthquake response of an RCC dam considering geometrical non-linearity. Material and connection non-linearity are also taken into consideration in the time-history analyses. Bilinear and multilinear kinematic hardening material models are utilized in the materially non-linear analyses for concrete and foundation rock, respectively. The contraction joints inside the dam blocks and the dam-foundation-reservoir interaction are modeled by contact elements. The hydrostatic and hydrodynamic pressures of the reservoir water are modeled with fluid finite elements based on the Lagrangian approach. The gravity and hydrostatic pressure effects are applied as initial conditions before the strong ground motion. In the earthquake analyses, viscous dampers are defined in the finite element model to represent infinite boundary conditions. According to the numerical solutions, horizontal displacements increase under hydrodynamic pressure, and they increase further in the materially non-linear analyses of the dam. In addition, while the principal stress components increase with the hydrodynamic pressure effect of the reservoir water, they decrease in the materially non-linear time-history analyses.
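A multilinear hardening model represents the uniaxial stress-strain curve as a sequence of linear segments; a minimal sketch of evaluating such a curve (with made-up values, not the paper's calibration for concrete or rock) is:

```python
import numpy as np

# Illustrative multilinear stress-strain curve (invented numbers):
# strain [-] vs stress [MPa], defining three linear segments.
strain_pts = np.array([0.0, 0.001, 0.002, 0.004])
stress_pts = np.array([0.0, 25.0, 32.0, 35.0])

def multilinear_stress(strain):
    """Piecewise-linear hardening response. Beyond the last point the
    material is treated as perfectly plastic (stress held constant),
    which is what np.interp's endpoint clamping gives us."""
    return np.interp(strain, strain_pts, stress_pts)

print(multilinear_stress(0.0015))  # midway along the second segment -> 28.5
```

A bilinear model is the special case with a single elastic and a single hardening segment; FE codes use the segment slopes as tangent moduli during the non-linear iterations.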
Real-time display of flow-pressure-volume loops.
Morozoff, P E; Evans, R W
1992-01-01
Graphic display of respiratory waveforms can be valuable for monitoring the progress of ventilated patients. A system has been developed that can display flow-pressure-volume loops as derived from a patient's respiratory circuit in real time. It can also display, store, print, and retrieve ventilatory waveforms. Five loops can be displayed at once: current, previous, reference, "ideal," and previously saved. Two components, the data-display device (DDD) and the data-collection device (DCD), comprise the system. An IBM 286/386 computer with a graphics card (VGA) and bidirectional parallel port is used for the DDD; an eight-bit microprocessor card and an A/D converter card make up the DCD. A real-time multitasking operating system was written to control the DDD, while the DCD operates from in-line assembly code. The DCD samples the pressure and flow sensors at 100 Hz and looks for a complete flow waveform pattern based on flow slope. These waveforms are then passed to the DDD via the mutual parallel port. Within the DDD a process integrates the flow to create a volume signal and performs a multilinear regression on the pressure, flow, and volume data to calculate the elastance, resistance, pressure offset, and coefficient of determination. Elastance, resistance, and offset are used to calculate Pr and Pc, where Pr[k] = P[k] - offset - (elastance·V[k]) and Pc[k] = P[k] - offset - (resistance·F[k]). Volume vs. Pc and flow vs. Pr can be displayed in real time. Patient data from previous clinical tests were loaded into the device to verify the software calculations. An analog waveform generator was used to simulate flow and pressure waveforms that validated the system. (ABSTRACT TRUNCATED AT 250 WORDS)
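The regression step described above can be sketched as follows, using synthetic single-breath signals of our own making in place of sensor data: pressure is modeled as P = offset + elastance·V + resistance·F, solved by least squares, after which Pr and Pc follow from the definitions in the text.

```python
import numpy as np

# Synthetic single-breath signals (illustrative values, not patient data).
t = np.linspace(0, 1, 100)                 # one breath sampled at 100 Hz
F = np.sin(np.pi * t)                      # flow [L/s]
V = np.cumsum(F) * (t[1] - t[0])           # volume = integral of flow [L]
E_true, R_true, offset_true = 20.0, 5.0, 2.0
P = offset_true + E_true * V + R_true * F  # pressure [cmH2O]

# Multilinear regression: P = offset + elastance*V + resistance*F
X = np.column_stack([np.ones_like(t), V, F])
offset, elastance, resistance = np.linalg.lstsq(X, P, rcond=None)[0]

# Residual pressures as defined in the text.
Pr = P - offset - elastance * V            # resistive component
Pc = P - offset - resistance * F           # elastic (compliance) component
print(elastance, resistance, offset)
```

With noise-free synthetic signals the regression recovers the true elastance, resistance, and offset exactly; on real data the coefficient of determination indicates how well the single-compartment model fits the breath.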
Hashimoto, Ken; Zúniga, Concepción; Romero, Eduardo; Morales, Zoraida; Maguire, James H
2015-01-01
Central American countries face a major challenge in the control of Triatoma dimidiata, a widespread vector of Chagas disease that cannot be eliminated. The key to maintaining the risk of transmission of Trypanosoma cruzi at lowest levels is to sustain surveillance throughout endemic areas. Guatemala, El Salvador, and Honduras integrated community-based vector surveillance into local health systems. Community participation was effective in detection of the vector, but some health services had difficulty sustaining their response to reports of vectors from the population. To date, no research has investigated how best to maintain and reinforce health service responsiveness, especially in resource-limited settings. We reviewed surveillance and response records of 12 health centers in Guatemala, El Salvador, and Honduras from 2008 to 2012 and analyzed the data in relation to the volume of reports of vector infestation, local geography, demography, human resources, managerial approach, and results of interviews with health workers. Health service responsiveness was defined as the percentage of households that reported vector infestation for which the local health service provided indoor residual spraying of insecticide or educational advice. Eight potential determinants of responsiveness were evaluated by linear and mixed-effects multi-linear regression. Health service responsiveness (overall 77.4%) was significantly associated with quarterly monitoring by departmental health offices. Other potential determinants of responsiveness were not found to be significant, partly because of short- and long-term strategies, such as temporary adjustments in manpower and redistribution of tasks among local participants in the effort. Consistent monitoring within the local health system contributes to sustainability of health service responsiveness in community-based vector surveillance of Chagas disease. 
Even with limited resources, countries can improve health service responsiveness with thoughtful strategies and management practices in the local health systems.
Effect of the Barrier Layer on the Upper Ocean Response to MJO Forcing
NASA Astrophysics Data System (ADS)
Bulusu, S.
2014-12-01
Recently, attention has been given to an upper-ocean feature known as the Barrier Layer, which has been shown to impact meteorological phenomena from ENSO to tropical cyclones by suppressing vertical mixing, which reduces sea surface cooling and enhances surface heat fluxes. The Barrier Layer is defined as the difference between the Isothermal Layer Depth (ILD) and the Mixed Layer Depth (MLD). Proper representation of these features relies on precise observations of sea surface salinity (SSS) to attain accurate measurements of the MLD and, subsequently, the barrier layer thickness (BLT). Compared to the many available in situ SSS measurements, the NASA Aquarius salinity mission currently obtains the closest observations to the true SSS. The role of subsurface features will be better understood through increased accuracy of SSS measurements. In this study, BLT estimates are derived from satellite measurements using a multilinear regression model (MRM) in the Indian Ocean. The MRM relates BLT to satellite-derived SSS, sea surface temperature (SST), and sea surface height anomalies (SSHA). Besides being a variable that responds passively to atmospheric conditions, SSS significantly controls upper-ocean density and therefore the MLD. The formation of a Barrier Layer can lead to possible feedbacks that impact the atmospheric component of the Madden-Julian Oscillation (MJO), as stated in one of the three major hypotheses of the DYNAMO field campaign. This layer produces a stable stratification, reducing vertical mixing, which influences surface heat fluxes and thus could possibly impact atmospheric conditions during the MJO. Establishing the magnitude and extent of SSS variations during the MJO will be a useful tool for data assimilation into models to correctly represent both oceanic thermodynamic characteristics and atmospheric processes during intraseasonal variations.
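A sketch of such an MRM fit on synthetic co-located data (the coefficients, ranges, and noise level below are our own illustrative assumptions, not the study's regression):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set standing in for co-located in situ BLT with satellite
# SSS, SST, and SSHA (synthetic numbers, not Aquarius/DYNAMO data).
n = 500
SSS  = rng.uniform(33.0, 36.0, n)      # practical salinity units
SST  = rng.uniform(27.0, 31.0, n)      # deg C
SSHA = rng.uniform(-0.2, 0.2, n)       # m
BLT  = (80.0 - 15.0 * (SSS - 34.5) + 2.0 * (SST - 29.0)
        + 60.0 * SSHA + rng.normal(0, 2.0, n))  # m

# Fit the multilinear regression model (MRM) by least squares.
X = np.column_stack([np.ones(n), SSS, SST, SSHA])
coef, *_ = np.linalg.lstsq(X, BLT, rcond=None)

# The fitted coefficients can then be applied to full satellite fields to
# map BLT; here we only check the fit quality on the training data.
r2 = 1 - np.sum((BLT - X @ coef) ** 2) / np.sum((BLT - BLT.mean()) ** 2)
print(f"MRM R^2 = {r2:.3f}")
```

Once trained, the same coefficient vector is applied gridpoint-by-gridpoint to the satellite SSS, SST, and SSHA fields to produce BLT maps.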
A study on atmospheric and oceanic processes in the north Indian Ocean
NASA Astrophysics Data System (ADS)
Felton, Clifford S.
Studies on oceanic and atmospheric processes in the Indian Ocean are an active and important area of scientific research. Understanding how intraseasonal and interannual variations impact both the ocean and atmosphere will aid in delineating potential feedback mechanisms and global teleconnections. Thanks to recent efforts focused on expanding observational capabilities and developing models for this region, researchers have been able to begin investigating atmospheric and oceanic processes in the Indian Ocean. This study focuses on the impact of the El Nino Southern Oscillation (ENSO) on tropical cyclone activity over the Bay of Bengal (BoB) and on developing a method for estimating the barrier layer thickness (BLT) in the Indian Ocean from satellite observations. National Center for Environmental Prediction (NCEP-2) and Simple Ocean Data Assimilation (SODA) reanalysis data are used to investigate the alterations in atmospheric and oceanic conditions that impact tropical cyclones during ENSO events over a 33-year time frame (1979-2011). Atmospheric conditions are shown to be more favorable for tropical cyclone development during La Nina over the BoB due to the favorable alteration of large-scale wind, moisture, and vorticity distributions. By combining multiple satellite observations, including the recently launched Soil Moisture and Ocean Salinity (SMOS) and Aquarius SAC-D salinity missions, BLT estimates for the Indian Ocean are generated with the use of a multilinear regression model (MRM). The performance of the MRM is evaluated for the Southeast Arabian Sea (SEAS), Bay of Bengal (BoB), and Eastern Equatorial Indian Ocean (EEIO) where barrier layer formation is most rigorous. Results from the MRM suggest that salinity measurements obtained from Aquarius and SMOS can be useful for tracking and predicting the BLT in the Indian Ocean.
NASA Technical Reports Server (NTRS)
Righter, K.; Leeman, W. P.; Hervig, R. L.
2006-01-01
Partitioning of Ni, Co and V between Cr-rich spinels and basaltic melt has been studied experimentally between 1150 and 1325 °C, and at controlled oxygen fugacity from the Co-CoO buffer to slightly above the hematite-magnetite buffer. These new results, together with new Ni, Co and V analyses of experimental run products from Leeman [Leeman, W.P., 1974. Experimental determination of the partitioning of divalent cations between olivine and basaltic liquid, Pt. II. PhD thesis, Univ. Oregon, 231-337.], show that experimentally determined spinel-melt partition coefficients (D) are dependent upon temperature (T), oxygen fugacity (fO2) and spinel composition. In particular, partition coefficients determined on doped systems are higher than those in natural (undoped) systems, perhaps due to changing activity coefficients over the composition range defined by the experimental data. Using our new results and published runs (n = 85), we obtain a multilinear regression equation that predicts experimental D(V) values as a function of T, fO2, concentration of V in the melt, and spinel composition. This equation allows prediction of D(V) spinel/melt values for natural mafic liquids at relevant crystallization conditions. Similarly, D(Ni) and D(Co) values can be inferred from our experiments at redox conditions approaching the QFM buffer, temperatures of 1150 to 1250 °C, and spinel compositions (early Cr-bearing and later Ti-magnetite) appropriate for basic magma differentiation. When coupled with major element modelling of liquid lines of descent, these values (D(Ni) sp/melt = 10 and D(Co) sp/melt = 5) closely reproduce the compositional variation observed in komatiite, mid-ocean ridge basalt (MORB), ocean island basalt (OIB) and basalt-to-rhyolite suites.
Developmental Outcomes of Late Preterm Infants From Infancy to Kindergarten
Kaciroti, Niko; Richards, Blair; Oh, Wonjung; Lumeng, Julie C.
2016-01-01
OBJECTIVE: To compare developmental outcomes of late preterm infants (34–36 weeks’ gestation) with infants born at early term (37–38 weeks’ gestation) and term (39–41 weeks’ gestation), from infancy through kindergarten. METHODS: Sample included 1000 late preterm, 1800 early term, and 3200 term infants ascertained from the Early Childhood Longitudinal Study, Birth Cohort. Direct assessments of development were performed at 9 and 24 months by using the Bayley Short Form–Research Edition T-scores and at preschool and kindergarten using the Early Childhood Longitudinal Study, Birth Cohort reading and mathematics θ scores. Maternal and infant characteristics were obtained from birth certificate data and parent questionnaires. After controlling for covariates, we compared mean developmental outcomes between late preterm and full-term groups in serial cross-sectional analyses at each timepoint using multilinear regression, with pairwise comparisons testing for group differences by gestational age categories. RESULTS: With covariates controlled at all timepoints, at 9 months late preterm infants demonstrated less optimal developmental outcomes (T = 47.31) compared with infants born early term (T = 49.12) and term (T = 50.09) (P < .0001). This association was not seen at 24 months, (P = .66) but reemerged at preschool. Late preterm infants demonstrated less optimal scores in preschool reading (P = .0006), preschool mathematics (P = .0014), and kindergarten reading (P = .0007) compared with infants born at term gestation. CONCLUSIONS: Although late preterm infants demonstrate comparable developmental outcomes to full-term infants (early term and full-term gestation) at 24 months, they demonstrate less optimal reading outcomes at preschool and kindergarten timepoints. Ongoing developmental surveillance for late preterm infants is warranted into preschool and kindergarten. PMID:27456513
A global ocean climatology of preindustrial and modern ocean δ13C
NASA Astrophysics Data System (ADS)
Eide, Marie; Olsen, Are; Ninnemann, Ulysses S.; Johannessen, Truls
2017-03-01
We present a global ocean climatology of dissolved inorganic carbon δ13C (‰) corrected for the 13C-Suess effect, preindustrial δ13C. This was constructed by first using Olsen and Ninnemann's (2010) back-calculation method on data from 25 World Ocean Circulation Experiment cruises to reconstruct the preindustrial δ13C on sections spanning all major oceans. Next, we developed five multilinear regression equations, one for each major ocean basin, which were applied to the World Ocean Atlas data to construct the climatology. This reveals the natural δ13C distribution in the global ocean. Compared to the modern distribution, the preindustrial δ13C spans a larger range of values. The maxima, of up to 1.8‰, occur in the subtropical gyres of all basins, in the upper and intermediate waters of the North Atlantic, as well as in mode waters with a Southern Ocean origin. Particularly strong gradients occur at intermediate depths, revealing a strong potential for using δ13C as a tracer for changes in water mass geometry at these levels. Further, we identify a much tighter relationship between δ13C and apparent oxygen utilization (AOU) than between δ13C and phosphate. This arises because, in contrast to phosphate, AOU and δ13C are both partly reset when waters are ventilated in the Southern Ocean; this underscores that δ13C is a highly robust proxy for past changes in ocean oxygen content and ocean ventilation. Our global preindustrial δ13C climatology is openly accessible and can be used, for example, for improved model evaluation and interpretation of sediment δ13C records.
NASA Astrophysics Data System (ADS)
Datteri, Ryan; Pallavaram, Srivatsan; Konrad, Peter E.; Neimat, Joseph S.; D'Haese, Pierre-François; Dawant, Benoit M.
2011-03-01
A number of groups have reported on the occurrence of intra-operative brain shift during deep brain stimulation (DBS) surgery. This has a number of implications for the procedure including an increased chance of intra-cranial bleeding and complications due to the need for more exploratory electrodes to account for the brain shift. It has been reported that the amount of pneumocephalus or air invasion into the cranial cavity due to the opening of the dura correlates with intraoperative brain shift. Therefore, pre-operatively predicting the amount of pneumocephalus expected during surgery is of interest toward accounting for brain shift. In this study, we used 64 DBS patients who received bilateral electrode implantations and had a post-operative CT scan acquired immediately after surgery (CT-PI). For each patient, the volumes of the pneumocephalus, left ventricle, right ventricle, third ventricle, white matter, grey matter, and cerebral spinal fluid were calculated. The pneumocephalus was calculated from the CT-PI utilizing a region growing technique that was initialized with an atlas-based image registration method. A multi-atlas-based image segmentation method was used to segment out the ventricles of each patient. The Statistical Parametric Mapping (SPM) software package was utilized to calculate the volumes of the cerebral spinal fluid (CSF), white matter and grey matter. The volume of individual structures had a moderate correlation with pneumocephalus. Utilizing a multi-linear regression between the volume of the pneumocephalus and the statistically relevant individual structures a Pearson's coefficient of r = 0.4123 (p = 0.0103) was found. This study shows preliminary results that could be used to develop a method to predict the amount of pneumocephalus ahead of the surgery.
Subhan, Fatheema Begum; Colman, Ian; McCargar, Linda; Bell, Rhonda C
2017-06-01
Objective To describe the effects of maternal pre-pregnancy body mass index (BMI) and gestational weight gain (GWG) on infant anthropometrics at birth and 3 months and infant growth rates between birth and 3 months. Methods Body weight prior to and during pregnancy and infant weight and length at birth and 3 months were collected from 600 mother-infant pairs. Adherence to GWG was based on IOM recommendations. Age and sex specific z-scores were calculated for infant weight and length at birth and 3 months. Rapid postnatal growth was defined as a difference of >0.67 in weight-for-age z-score between birth and 3 months. Relationships between maternal and infant characteristics were analysed using multilinear regression. Results Most women (65%) had a normal pre-pregnancy BMI and 57% gained above GWG recommendations. Infants were 39.3 ± 1.2 weeks and 3431 ± 447.9 g at birth. At 3 months postpartum 60% were exclusively breast fed while 38% received breast milk and formula. Having a pre-pregnancy BMI >25 kg/m 2 was associated with higher z-scores for birth weight and weight-for-age at 3 months. Gaining above recommendations was associated with higher z-scores for birth weight, weight-for-age and BMI. Infants who experienced rapid postnatal growth had higher odds of being born to women who gained above recommendations. Conclusion for Practice Excessive GWG is associated with higher birth weight and rapid weight gain in infants. Interventions that optimize GWG should explore effects on total and rates of early infant growth.
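The rapid-postnatal-growth criterion above (a gain of more than 0.67 in weight-for-age z-score between birth and 3 months) can be sketched as follows; the reference means and SDs are assumed placeholders, not the growth-standard values the study used:

```python
# assumed reference distributions (g); illustrative, not WHO/study values
BIRTH_REF_MEAN, BIRTH_REF_SD = 3400.0, 450.0
M3_REF_MEAN, M3_REF_SD = 6100.0, 700.0

def weight_for_age_z(weight, ref_mean, ref_sd):
    """z-score of a weight against a reference distribution."""
    return (weight - ref_mean) / ref_sd

def rapid_growth(birth_weight, weight_3mo):
    """True if the weight-for-age z-score gain from birth to 3 months > 0.67."""
    dz = (weight_for_age_z(weight_3mo, M3_REF_MEAN, M3_REF_SD)
          - weight_for_age_z(birth_weight, BIRTH_REF_MEAN, BIRTH_REF_SD))
    return dz > 0.67

flag = rapid_growth(3000.0, 6800.0)  # small at birth, heavy at 3 months
```

Real z-scores are age- and sex-specific, so the reference parameters would be looked up per infant rather than fixed constants.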
Sachindra, D. A.; Perera, B. J. C.
2016-01-01
This paper presents a novel approach to incorporate the non-stationarities characterised in the GCM outputs, into the Predictor-Predictand Relationships (PPRs) in statistical downscaling models. In this approach, a series of 42 PPRs based on multi-linear regression (MLR) technique were determined for each calendar month using a 20-year moving window moved at a 1-year time step on the predictor data obtained from the NCEP/NCAR reanalysis data archive and observations of precipitation at 3 stations located in Victoria, Australia, for the period 1950–2010. Then the relationships between the constants and coefficients in the PPRs and the statistics of reanalysis data of predictors were determined for the period 1950–2010, for each calendar month. Thereafter, using these relationships with the statistics of the past data of HadCM3 GCM pertaining to the predictors, new PPRs were derived for the periods 1950–69, 1970–89 and 1990–99 for each station. This process yielded a non-stationary downscaling model consisting of a PPR per calendar month for each of the above three periods for each station. The non-stationarities in the climate are characterised by the long-term changes in the statistics of the climate variables and above process enabled relating the non-stationarities in the climate to the PPRs. These new PPRs were then used with the past data of HadCM3, to reproduce the observed precipitation. It was found that the non-stationary MLR based downscaling model was able to produce more accurate simulations of observed precipitation more often than conventional stationary downscaling models developed with MLR and Genetic Programming (GP). PMID:27997609
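The moving-window idea above (refitting a multi-linear predictor-predictand relationship on a 20-year window advanced in 1-year steps, so the coefficients themselves become a time series) can be sketched with synthetic data; real inputs would be NCEP/NCAR predictor fields and station precipitation:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
n = years.size
predictors = rng.normal(size=(n, 2))        # e.g. two reanalysis predictors

# a slowly drifting relationship stands in for non-stationarity
drift = 0.02 * (years - 1950)
precip = (1.0 + (1.5 + drift) * predictors[:, 0]
          + 0.5 * predictors[:, 1] + rng.normal(0, 0.1, n))

window = 20
coefs = []
for start in range(n - window + 1):
    sl = slice(start, start + window)
    X = np.column_stack([np.ones(window), predictors[sl]])
    b, *_ = np.linalg.lstsq(X, precip[sl], rcond=None)
    coefs.append(b)
coefs = np.array(coefs)   # one (intercept, b1, b2) row per window
```

For 1950-2010 a 20-year window stepped yearly yields 42 windows, matching the 42 PPRs in the abstract; the drifting slope shows up as a trend in the coefficient series, which is what the authors then relate to predictor statistics.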
Determining Methane Budgets with Eddy Covariance Data ascertained in a heterogeneous Footprint
NASA Astrophysics Data System (ADS)
Rößger, N.; Wille, C.; Kutzbach, L.
2016-12-01
Amplified climate change in the Arctic may cause methane emissions to increase considerably due to more suitable production conditions. With a focus on methane, we studied the carbon turnover on the modern flood plain of Samoylov Island situated in the Lena River Delta (72°22'N, 126°28'E) using the eddy covariance data. In contrast to the ice-wedge polygonal tundra on the delta's river terraces, the flood plains have to date received little attention. During the warm season in 2014 and 2015, the mean methane flux amounted to 0.012 μmol m-2 s-1. This average is the result of a large variability in methane fluxes which is attributed to the complexity of the footprint where methane sources are unevenly distributed. Explaining this variability is based on three modelling approaches: a deterministic model using exponential relationships for flux drivers, a multilinear model created through stepwise regression and a neural network which relies on machine learning techniques. A substantial boost in model performance was achieved through inputting footprint information in the form of the contribution of vegetation classes; this indicates the vegetation is serving as an integrated proxy for potential methane flux drivers. The neural network performed best; however, a robust validation revealed that the deterministic model best captured ecosystem-intrinsic features. Furthermore, the deterministic model allowed a downscaling of the net flux by allocating fractions to three vegetation classes which in turn form the basis for upscaling methane fluxes in order to obtain the budget for the entire flood plain. Arctic methane emissions occur in a spatio-temporally complex pattern and employing fine-scale information is crucial to understanding the flux dynamics.
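The downscaling/upscaling step described above can be sketched as a footprint-weighted allocation: the tower-measured net flux is a contribution-weighted mix of class-specific fluxes, and the same class fluxes, weighted by areal cover, give the flood-plain budget. Class names and flux values are illustrative assumptions, not the study's estimates:

```python
# assumed class-specific methane fluxes (umol m-2 s-1); illustrative only
CLASS_FLUX = {"wet sedge": 0.030, "moist grass": 0.008, "sparse": 0.002}

def net_flux(footprint_fractions):
    """Footprint-weighted net flux seen by the eddy covariance tower."""
    return sum(CLASS_FLUX[c] * f for c, f in footprint_fractions.items())

def budget(area_m2):
    """Upscale class fluxes by their areal cover (m2) to a total flux."""
    return sum(CLASS_FLUX[c] * a for c, a in area_m2.items())

# a half-hour where the footprint is 20% wet sedge, 50% moist grass,
# 30% sparsely vegetated ground
f = net_flux({"wet sedge": 0.2, "moist grass": 0.5, "sparse": 0.3})
```

In the study the inversion runs the other way: class fluxes are fitted from many half-hours with varying footprint composition, which is why footprint information boosts model performance.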
NASA Astrophysics Data System (ADS)
El-Vilaly, Mohamed Abd Salam; Didan, Kamel; Marsh, Stuart E.; van Leeuwen, Willem J. D.; Crimmins, Michael A.; Munoz, Armando Barreto
2018-03-01
For more than a decade, the Four Corners Region has faced extensive and persistent drought conditions that have impacted vegetation communities and local water resources while exacerbating soil erosion. These persistent droughts threaten ecosystem services, agriculture, and livestock activities, and expose the hypersensitivity of this region to inter-annual climate variability and change. Much of the intermountain Western United States has sparse climate and vegetation monitoring stations, making fine-scale drought assessments difficult. Remote sensing data offer the opportunity to assess the impacts of the recent droughts on vegetation productivity across these areas. Here, we propose a drought assessment approach that integrates climate and topographical data with remote sensing vegetation index time series. Multisensor Normalized Difference Vegetation Index (NDVI) time series data from 1989 to 2010 at 5.6 km resolution were analyzed to characterize vegetation productivity changes and responses to the ongoing drought. A multi-linear regression was applied to metrics of vegetation productivity derived from the NDVI time series to detect changes in vegetation productivity, an ecosystem service proxy. The results show that around 60.13% of the study area exhibits a general decline in greenness (p < 0.05), while 3.87% shows an unexpected green-up, with the remaining areas showing no consistent change. Vegetation in the area shows a significant positive correlation with elevation and precipitation gradients. These results, while confirming the region's vegetation decline due to drought, shed further light on the future directions of and challenges to the region's already stressed ecosystems. While the results provide additional insights into this isolated and vulnerable region, the drought assessment approach used in this study may be adapted for application in other regions where the surface-based climate and vegetation monitoring record is spatially and temporally limited.
Behera, Sailesh N; Betha, Raghu; Liu, Ping; Balasubramanian, Rajasekhar
2013-05-01
Aerosol acidity is one of the most important parameters that can influence atmospheric visibility, climate change and human health. Based on continuous field measurements of inorganic aerosol species and their thermodynamic modeling at a time resolution of 1 h, this study investigated the acidic properties of PM2.5 and their relation to the formation of secondary inorganic aerosols (SIA). The study was conducted by taking into account the prevailing ambient temperature (T) and relative humidity (RH) in a tropical urban atmosphere. The in-situ aerosol pH (pH(IS)) on a 12 h basis ranged from -0.20 to 1.46 during daytime with an average value of 0.48, and from 0.23 to 1.53 during nighttime with an average value of 0.72. These diurnal variations suggest that the daytime aerosol was more acidic than the nighttime aerosol. The hourly values of pH(IS) showed a reverse trend compared to that of in-situ aerosol acidity ([H(+)]Ins). The pH(IS) had its maximum values at 3:00 and at 20:00 and its minimum during 11:00 to 12:00. Correlation analyses revealed that the molar concentration ratio of ammonium to sulfate (R(N/S)), the equivalent concentration ratio of cations to anions (R(C/A)), T and RH can be used as independent variables for the prediction of pH(IS). A multi-linear regression model consisting of R(N/S), R(C/A), T and RH was developed to estimate aerosol pH(IS). Copyright © 2013 Elsevier B.V. All rights reserved.
One-Shot Decoupling and Page Curves from a Dynamical Model for Black Hole Evaporation.
Brádler, Kamil; Adami, Christoph
2016-03-11
One-shot decoupling is a powerful primitive in quantum information theory and was hypothesized to play a role in the black hole information paradox. We study black hole dynamics modeled by a trilinear Hamiltonian whose semiclassical limit gives rise to Hawking radiation. An explicit numerical calculation of the discretized path integral of the S matrix shows that decoupling is exact in the continuous limit, implying that quantum information is perfectly transferred from the black hole to radiation. A striking consequence of decoupling is the emergence of an output radiation entropy profile that follows Page's prediction. We argue that information transfer and the emergence of Page curves is a robust feature of any multilinear interaction Hamiltonian with a bounded spectrum.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
NASA Astrophysics Data System (ADS)
Galfi, H.; Österlund, H.; Marsalek, J.; Viklander, M.
2016-08-01
Four indicator bacteria were measured in association with physico-chemical constituents and selected inorganics during rainfall, baseflow and snowmelt periods in storm sewers of four urban catchments in a northern Swedish city. The variation patterns of coliforms, Escherichia coli, enterococci and Clostridium perfringens concentrations were assessed in manually collected grab samples together with those of phosphorus, nitrogen, solids, and readings of pH, turbidity, water conductivity, temperature and flow rates to examine whether these constituents could serve as potential indicators of bacteria sources. A similar analysis was applied to variation patterns of eight selected inorganics typical for baseflow and stormwater runoff to test the feasibility of using these inorganics to distinguish between natural and anthropogenic sources of inflow into storm sewers. The monitored catchments varied in size, the degree of development, and land use. Catchment- and season-specific (i.e., rainy or snowmelt periods) variations were investigated for sets of individual stormwater samples by principal component analysis (PCA) to identify the constituents with variation patterns similar to those of indicator bacteria, and to exclude the constituents with less similarity. In the reduced data set, the similarities were quantified by clustering correlation analysis. Finally, the positive/negative relationships found between indicator bacteria and the identified associated constituent groups were described by multilinear regressions. Coliforms, E. coli and enterococci, in order of decreasing concentration, were found in the highest mean concentrations during both rainfall- and snowmelt-generated runoff. Compared to dry-weather baseflow, concentrations of these three indicators in stormwater were 10 (snowmelt runoff) to 10² (rain runoff) times higher. C. perfringens mean concentrations were practically constant regardless of the season and catchment.
The type and number of variables associated with bacteria depended on the degree of catchment development and the inherent complexity of bacteria sources. The list of variables associated with bacteria included the flow rate, solids with associated inorganics (Fe and Al) and phosphorus, indicating similar sources of constituents regardless of the season. On the other hand, bacteria were associated with water temperature only during rain periods, and somewhat important associations of bacteria with nitrogen and pH were found during the periods of snowmelt. Most of the associated constituents were positively correlated with bacteria responses, but conductivity, with two associated inorganics (Si and Sr), was mostly negatively correlated in all the catchments. Although the study findings do not indicate any distinct surrogates to indicator bacteria, the inclusion of the above identified constituents (flow rate, solids and total phosphorus for all seasons, water temperature for rainfall runoff, and total nitrogen and pH for snowmelt only) in sanitary surveys of northern climate urban catchments would provide additional insight into indicator bacteria sources and their modeling.
Bresch, A; Rullmann, M; Luthardt, J; Becker, G A; Patt, M; Ding, Y-S; Hilbert, A; Sabri, O; Hesse, S
2017-10-01
The relationship between food-intake related behaviours measured by the Three-Factor Eating Questionnaire (TFEQ) and in vivo norepinephrine transporter (NET) availability has not yet been explored. We investigated ten obese individuals (body mass index (BMI) 42.4 ± 3.7 kg/m²) and ten normal-weight healthy controls (HC, BMI 23.9 ± 2.5 kg/m²) with (S,S)-[11C]-O-methylreboxetine ([11C]MRB) positron emission tomography (PET). All participants completed the TFEQ, which measures cognitive restraint, disinhibition and hunger. Image analysis required magnetic resonance imaging data sets onto which volumes of interest were drawn. Tissue time activity curves (TACs) were obtained from the dynamic PET data followed by kinetic modeling of these regional brain TACs applying the multilinear reference tissue model (2 parameters) with the occipital cortex as reference region. Obese individuals scored significantly higher on the hunger subscale of the TFEQ. Correlative data analysis showed that a higher degree of hunger correlated negatively with the NET availability of the insular cortex in both obese individuals and HC; however, this finding was more pronounced in obesity. Further, for obese individuals, a negative correlation between disinhibition and NET BPND of the locus coeruleus was detected. In conclusion, these initial data provide in vivo imaging support for the involvement of the central NE system in maladaptive eating behaviors such as susceptibility to hunger. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
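The Monte Carlo baseline that the closed-form approximation is compared against can be sketched for a toy tree; the structure (top event = OR of two AND pairs, evaluated under a rare-event approximation) and all lognormal parameters below are illustrative assumptions, not the article's models:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# four basic events with lognormally distributed failure probabilities
mu = np.log(np.array([1e-3, 2e-3, 5e-4, 1e-3]))   # log-medians, assumed
sigma = np.array([0.5, 0.5, 0.7, 0.6])            # log-SDs, assumed
p = np.exp(mu + sigma * rng.standard_normal((n, 4)))

# top event: (e1 AND e2) OR (e3 AND e4); products for ANDs, sum as a
# rare-event approximation of the OR gate
top = p[:, 0] * p[:, 1] + p[:, 2] * p[:, 3]

median = np.median(top)
p95 = np.percentile(top, 95)
```

Each AND term is itself exactly lognormal (a product of lognormals), which is the multilinear structure the article's closed-form approximation exploits; the sampling here is only the expensive reference the approximation avoids.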
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara Gibson
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
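The two operators can be sketched in numpy. The sketch below checks the standard identity linking them: applying the Tucker operator to a superdiagonal core reproduces the Kruskal operator (i.e., a PARAFAC model is a Tucker model with an identity core). The implementation details are illustrative, not the report's notation:

```python
import numpy as np

def n_mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    shp = Tm.shape
    out = M @ Tm.reshape(shp[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + shp[1:]), 0, mode)

def tucker_operator(G, factors):
    """Core tensor G multiplied by a factor matrix in every mode."""
    T = G
    for mode, M in enumerate(factors):
        T = n_mode_product(T, M, mode)
    return T

def kruskal_operator(factors):
    """Sum of outer products of corresponding columns of the factors."""
    R = factors[0].shape[1]
    T = np.zeros(tuple(M.shape[0] for M in factors))
    for r in range(R):
        outer = factors[0][:, r]
        for M in factors[1:]:
            outer = np.multiply.outer(outer, M[:, r])
        T += outer
    return T

rng = np.random.default_rng(3)
A, B, C = (rng.normal(size=(d, 2)) for d in (3, 4, 5))
G = np.zeros((2, 2, 2))
G[0, 0, 0] = G[1, 1, 1] = 1.0          # superdiagonal (identity) core
T_tucker = tucker_operator(G, [A, B, C])
T_kruskal = kruskal_operator([A, B, C])
```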
Simultaneous Tensor Decomposition and Completion Using Factor Priors.
Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark
2013-08-27
Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
Park, I; Pasquetti, T; Malheiros, R D; Ferket, P R; Kim, S W
2018-01-01
This study was conducted to test the effects of dietary supplementation of feed grade L-Met on growth performance and redox status of turkey poults compared with the use of conventional DL-Met. Three hundred and eighty five newly hatched turkey poults were weighed and allotted to 5 treatments in a completely randomized design and the birds were fed dietary treatments for 28 d, including a basal diet (BD), the BD + 0.17 or 0.33% DL-Met or L-Met (representing 60, 75, and 90% of the requirement by National Research Council (NRC) for S containing AA, respectively). Increasing Met supplementation from 0 to 0.33% increased (P < 0.05) weight gain (690 to 746 g) and feed intake (1,123 to 1,248 g) of turkey poults. Supplementing L-Met tended (P = 0.053) to reduce feed to gain ratio (1.70 to 1.63) compared with DL-Met. The relative bioavailability of L-Met to DL-Met was 160% based on a multilinear regression analysis of weight gain. Supplementing Met regardless of its sources decreased (P < 0.05) malondialdehyde (3.29 to 2.47 nmol/mg protein) in duodenal mucosa compared with birds in the BD. Supplementing L-Met tended (P = 0.094) to decrease malondialdehyde (1.27 to 1.16 nmol/mg protein) and increase glutathione (3.21 to 3.45 nmol/mg protein) in the liver compared with DL-Met. Total antioxidant capacity, protein carbonyl, and morphology of duodenum and jejunum were not affected by Met sources. In conclusion, dietary supplementation of 0.33% Met to a diet with S containing AA meeting 60% of the NRC requirement enhanced weight gain, feed intake, and redox status by reducing oxidative stress in the gut and liver of turkey poults during the first 28 d of age. Use of L-Met tended to enhance feed efficiency and was more effective in reducing oxidative stress and increasing glutathione in the liver compared with the use of DL-Met. The use of L-Met as a source of Met replacing DL-Met seems to be beneficial to turkey poults during the first 28 d of age. © The Author 2017. 
Published by Oxford University Press on behalf of Poultry Science Association.
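A relative bioavailability estimate of the kind reported above is commonly obtained from a slope-ratio fit: weight gain is regressed on the supplemental dose of each Met source in one multi-linear model, and RBV is the ratio of the two slopes. The doses mirror the abstract's 0.17/0.33% levels, but the gains are synthetic, constructed so the ratio comes out at 1.6 (160%) purely for illustration:

```python
import numpy as np

#            dose_DL, dose_L  (% of diet supplemented)
X = np.array([
    [0.00, 0.00],
    [0.17, 0.00],
    [0.33, 0.00],
    [0.00, 0.17],
    [0.00, 0.33],
])
# synthetic weight gains (g) consistent with slopes 150 (DL) and 240 (L)
gain = np.array([690.0, 715.5, 739.5, 730.8, 769.2])

# one multi-linear model: gain = b0 + b_dl * dose_DL + b_l * dose_L
A = np.column_stack([np.ones(len(X)), X])
b0, b_dl, b_l = np.linalg.lstsq(A, gain, rcond=None)[0]
rbv = b_l / b_dl   # relative bioavailability of L-Met vs DL-Met
```

The shared intercept (gain on the basal diet) is what makes the two slopes directly comparable.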
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices used in multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
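Outside SPSS, the same principal component regression can be sketched directly: standardize the predictors, regress the response on the leading component scores, then map the coefficients back to the original variables. The data are synthetic and deliberately collinear; everything here is an illustrative assumption, not the paper's worked example:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, n)      # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 1.0 * x3 + rng.normal(0, 0.1, n)

# standardize predictors
Z = (X - X.mean(0)) / X.std(0)

# principal components via SVD of the standardized matrix
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                  # drop the weakest (collinear) component
scores = Z @ Vt[:k].T

# regress y on the retained component scores (with intercept)
g, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), scores]), y, rcond=None)

# back-transform to coefficients on the standardized predictors
beta_std = Vt[:k].T @ g[1:]
```

Because the near-duplicate direction (x1 minus x2) is dropped, the unstable split of x1's effect between the two collinear columns is replaced by a stable, shared coefficient, which is exactly how PCR overcomes multicollinearity.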
NASA Astrophysics Data System (ADS)
Amil, N.; Latif, M. T.; Khan, M. F.; Mohamad, M.
2015-09-01
This study attempts to investigate the fine particulate matter (PM2.5) variability in the Klang Valley urban-industrial environment. In total, 94 daily PM2.5 samples were collected during a one-year campaign from August 2011 to July 2012, covering all four seasons. The samples were analysed for various inorganic components and black carbon. The chemical compositions were statistically analysed and the aerosol pattern was characterised using descriptive analysis, correlation matrices, enrichment factors (EF), stoichiometric analysis and chemical mass closure (CMC). For source apportionment purposes, a combination of positive matrix factorisation (PMF) and multi-linear regression (MLR) was employed. Further, meteorological-gaseous parameters were incorporated into each analysis for improved assessment. The results showed that PM2.5 mass averaged at 28 ± 18 μg m-3, 2.8 fold higher than the World Health Organisation (WHO) annual guideline. On a daily basis, the PM2.5 mass ranged between 6 and 118 μg m-3 with 43 % exceedance of the daily WHO guideline. The North-East monsoon (NE) was the only season with < 50 % sample exceedance of the daily WHO guideline. On an annual scale, PM2.5 mass correlated positively with temperature (T) and wind speed (WS) but negatively with relative humidity (RH). With the exception of NOx, the gases analysed (CO, NO2, NO and SO2) were found to significantly influence the PM2.5 mass. Seasonal variability unexpectedly showed that rainfall, WS and wind direction (WD) did not significantly correlate with PM2.5 mass. Further analysis on the PM2.5 / PM10, PM2.5 / TSP and PM10 / TSP ratios reveal that meteorological parameters only greatly influenced the coarse particles (PM > 2.5μm) and less so the fine particles at the site. Chemical composition showed that both primary and secondary pollutants of PM2.5 are equally important, albeit with seasonal variability. 
The CMC components identified were: black carbon (BC) > secondary inorganic aerosols (SIA) > dust > trace elements (TE) > sea salt > K+. The EF analysis distinguished two groups of trace elements: those with anthropogenic sources (Pb, Se, Zn, Cd, As, Bi, Ba, Cu, Rb, V and Ni) and those with a crustal source (Sr, Mn, Co and Li). The five identified factors resulting from PMF 5.0 were: (1) combustion of engine oil; (2) mineral dust; (3) mixed SIA and biomass burning; (4) mixed traffic and industrial; and (5) sea salt. Each of these sources had an annual mean contribution of 17, 14, 42, 10 and 17 %, respectively. The dominance of each identified source largely varied with changing season and a few factors were in agreement with the CMC, EF and stoichiometric analysis, accordingly. In relation to meteorological-gaseous parameters, PM2.5 sources were influenced by different parameters during different seasons. In addition, two air pollution episodes (HAZE) revealed the influence of local and/or regional sources. Overall, our study clearly suggests that the chemical constituents and sources of PM2.5 were greatly influenced and characterised by meteorological and gaseous parameters which largely vary with season.
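The enrichment factor analysis used above to split trace elements into anthropogenic and crustal groups follows a simple double ratio, EF = (X/ref)_aerosol / (X/ref)_crust, with a crustal reference element such as Al. Concentrations and crustal abundances below are illustrative placeholders, not the study's measurements:

```python
# assumed upper-crust abundances (ppm) and aerosol concentrations (ng m-3)
CRUST = {"Al": 8.2e4, "Pb": 17.0, "Mn": 1000.0}
AEROSOL = {"Al": 500.0, "Pb": 20.0, "Mn": 8.0}

def enrichment_factor(element, ref="Al"):
    """Double ratio of element-to-reference in aerosol vs. crust."""
    sample_ratio = AEROSOL[element] / AEROSOL[ref]
    crust_ratio = CRUST[element] / CRUST[ref]
    return sample_ratio / crust_ratio

ef_pb = enrichment_factor("Pb")   # EF >> 10 suggests anthropogenic origin
ef_mn = enrichment_factor("Mn")   # EF near 1 suggests crustal origin
```

An EF near unity means the element occurs in roughly crustal proportion to the reference, while strong enrichment points to non-crustal (e.g. traffic or industrial) sources.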
Evaluating Soil Moisture Retrievals from ESA's SMOS and NASA's SMAP Brightness Temperature Datasets
NASA Technical Reports Server (NTRS)
Al-Yaari, A.; Wigernon, J.-P.; Kerr, Y.; Rodriguez-Fernandez, N.; O'Neill, P. E.; Jackson, T. J.; De Lannoy, G. J. M.; Al Bitar, A.; Mialon, A.; Richaume, P.;
2017-01-01
Two satellites are currently monitoring surface soil moisture (SM) using L-band observations: SMOS (Soil Moisture and Ocean Salinity), a joint ESA (European Space Agency), CNES (Centre national d'études spatiales), and CDTI (the Spanish government agency with responsibility for space) satellite launched on November 2, 2009 and SMAP (Soil Moisture Active Passive), a National Aeronautics and Space Administration (NASA) satellite successfully launched in January 2015. In this study, we used a multilinear regression approach to retrieve SM from SMAP data to create a global dataset of SM, which is consistent with SM data retrieved from SMOS. This was achieved by calibrating coefficients of the regression model using the CATDS (Centre Aval de Traitement des Données) SMOS Level 3 SM and the horizontally and vertically polarized brightness temperatures (TB) at 40° incidence angle, over the 2013 - 2014 period. Next, this model was applied to SMAP L3 TB data from Apr 2015 to Jul 2016. The retrieved SM from SMAP (referred to here as SMAP_Reg) was compared to: (i) the operational SMAP L3 SM (SMAP_SCA), retrieved using the baseline Single Channel retrieval Algorithm (SCA); and (ii) the operational SMOSL3 SM, derived from the multiangular inversion of the L-MEB model (L-MEB algorithm) (SMOSL3). This inter-comparison was made against in situ soil moisture measurements from more than 400 sites spread over the globe, which are used here as a reference soil moisture dataset. The in situ observations were obtained from the International Soil Moisture Network (ISMN; https://ismn.geo.tuwien.ac.at/) in North America (PBO_H2O, SCAN, SNOTEL, iRON, and USCRN), in Australia (Oznet), Africa (DAHRA), and in Europe (REMEDHUS, SMOSMANIA, FMI, and RSMN). The agreement was analyzed in terms of four classical statistical criteria: Root Mean Squared Error (RMSE), Bias, Unbiased RMSE (UnbRMSE), and correlation coefficient (R).
Results of the comparison of these various products with in situ observations show that the performance of both SMAP products, i.e. SMAP_SCA and SMAP_Reg, is similar and marginally better than that of the SMOSL3 product, particularly over the PBO_H2O, SCAN, and USCRN sites. However, SMOSL3 SM was closer to the in situ observations over the DAHRA and Oznet sites. We found that the correlation between all three datasets and in situ measurements is best (R > 0.80) over the Oznet sites and worst (R = 0.58) over the SNOTEL sites for SMAP_SCA and over the DAHRA and SMOSMANIA sites (R = 0.51 and R = 0.45 for SMAP_Reg and SMOSL3, respectively). The Bias values showed that all products are generally dry, except over RSMN, DAHRA, and Oznet (and FMI for SMAP_SCA). Finally, our analysis provided interesting insights that can be useful to improve the consistency between SMAP and SMOS datasets.
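The regression retrieval described above amounts to calibrating a model of the form SM = a0 + a1·TBH + a2·TBV against a reference SM product, then applying the coefficients to new brightness temperatures. The sketch below uses entirely synthetic numbers (the coefficients and TB ranges are illustrative, not SMOS/SMAP values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration set: H- and V-pol brightness temperatures (K) at a
# fixed incidence angle, and a reference soil-moisture product (m3/m3).
n = 500
tb_h = rng.uniform(220.0, 280.0, n)
tb_v = rng.uniform(240.0, 290.0, n)
coef_true = np.array([4.0, -0.010, -0.004])   # [a0, a1, a2], assumed values
sm_ref = (coef_true[0] + coef_true[1] * tb_h + coef_true[2] * tb_v
          + rng.normal(0.0, 0.01, n))

# Calibrate the regression coefficients against the reference SM.
X = np.column_stack([np.ones(n), tb_h, tb_v])
coef, *_ = np.linalg.lstsq(X, sm_ref, rcond=None)

# Apply the calibrated model to a new TB observation.
sm_new = coef @ np.array([1.0, 250.0, 265.0])
print(np.round(coef, 4), round(float(sm_new), 3))
```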
NASA Astrophysics Data System (ADS)
Perez Altimar, Roderick
Brittleness is a key characteristic for effective reservoir stimulation and is mainly controlled by mineralogy in unconventional reservoirs. Unfortunately, there is no universally accepted means of predicting brittleness from measurements made in wells or from surface seismic data. Brittleness indices (BI) are based on mineralogy, while brittleness average estimations are based on Young's modulus and Poisson's ratio. I evaluate two of the more popular brittleness estimation techniques and apply them to a Barnett Shale seismic survey in order to estimate its geomechanical properties. Using specialized logging tools such as elemental capture, density, and P- and S-wave sonic logs, calibrated to previous core descriptions and laboratory measurements, I create a survey-specific BI template in Young's modulus versus Poisson's ratio space or, alternatively, lambda-rho versus mu-rho space. I use this template to predict BI from elastic parameters computed from surface seismic data, providing a continuous BI estimate across the Barnett Shale survey. Extracting lambda-rho and mu-rho values at microseismic event locations, I compute the brittleness index from the template and find that most microseismic events occur in the more brittle part of the reservoir. My template is validated through a suite of microseismic experiments that shows most events occurring in brittle zones, fewer events in the ductile shale, and fewer events still in the limestone fracture barriers. Estimated ultimate recovery (EUR) is an estimate of the expected total production of oil and/or gas over the economic life of a well and is widely used in the evaluation of resource play reserves. In the literature it is possible to find several approaches for forecasting purposes and economic analyses. However, the extension to newer infill wells is somewhat challenging because production forecasts in unconventional reservoirs are a function of both completion effectiveness and reservoir quality.
For shale gas reservoirs, completion effectiveness is a function not only of the length of the horizontal wells, but also of the number and size of the hydraulic fracture treatments in a multistage completion. These considerations also include the volume of proppant placed, proppant concentration, total perforation length, and number of clusters, while reservoir quality depends on properties such as the spatial variations in permeability, porosity, stress, and mechanical properties. I evaluate parametric methods such as multi-linear regression and compare them to the non-parametric alternating conditional expectations (ACE) algorithm to better correlate production with engineering attributes for two datasets in the Haynesville Shale play and the Barnett Shale. I find that the parametric methods are useful for an exploratory analysis of the relationship among several variables and help guide the selection of a more sophisticated parametric functional form when the underlying functional relationship is unknown. Non-parametric regression, on the other hand, is entirely data-driven and does not rely on pre-specified functional forms. The transformations generated by the ACE algorithm facilitate the identification of appropriate, and possibly meaningful, functional forms.
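One widely used brittleness-average formulation (a Rickman-style normalization, not necessarily the exact template of this study) rescales Young's modulus and Poisson's ratio to a common percentage scale and averages them; the normalisation bounds below are illustrative, not survey-specific:

```python
import numpy as np

def brittleness_average(E, nu, E_min=1.0, E_max=8.0, nu_min=0.15, nu_max=0.40):
    """Rickman-style brittleness average (percent).

    E  : Young's modulus (e.g. Mpsi); stiffer rock -> more brittle.
    nu : Poisson's ratio; lower ratio -> more brittle.
    The normalisation bounds are illustrative placeholders.
    """
    e_term = 100.0 * (E - E_min) / (E_max - E_min)
    # Note the reversed bounds: brittleness increases as nu decreases.
    nu_term = 100.0 * (nu - nu_max) / (nu_min - nu_max)
    return 0.5 * (e_term + nu_term)

# Brittle (high E, low nu) vs ductile (low E, high nu) end members.
print(brittleness_average(7.0, 0.18))   # high brittleness
print(brittleness_average(2.0, 0.35))   # low brittleness
```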
Regression Analysis by Example. 5th Edition
ERIC Educational Resources Information Center
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Multi-linear model set design based on the nonlinearity measure and H-gap metric.
Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali
2017-05-01
This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, the nonlinearity measure and the H-gap metric are used to provide an effective algorithm for designing a model bank for the system. The proposed model bank is then combined with model predictive controllers to design a high-performance advanced process controller. The advantage of this method is the reduction of excessive switching between models and a decrease in the computational complexity of the controller bank, which can improve the performance of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus, which confirm the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
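The H-gap metric is the paper's own construct; as a related, standard measure of distance between linear models, the ν-gap between two SISO plants can be evaluated as the supremum over frequency of a pointwise chordal distance (assuming the winding-number condition holds, which it does for the stable toy plants below). A numpy sketch with two illustrative first-order local models:

```python
import numpy as np

def chordal_distance(p1, p2):
    """Pointwise chordal distance between two SISO frequency responses."""
    return np.abs(p1 - p2) / (np.sqrt(1.0 + np.abs(p1) ** 2)
                              * np.sqrt(1.0 + np.abs(p2) ** 2))

# Two first-order plants, e.g. local models of a nonlinear process at two
# operating points (gains and poles are illustrative).
w = np.logspace(-3, 3, 2000)           # frequency grid (rad/s)
s = 1j * w
P1 = 1.0 / (s + 1.0)
P2 = 2.0 / (s + 1.0)

# nu-gap = sup over frequency of the chordal distance (winding-number
# condition holds for these stable plants).
nu_gap = chordal_distance(P1, P2).max()
print(round(float(nu_gap), 3))
```

A model-bank design would threshold such distances: operating points whose local models are closer than the threshold share one model in the bank, which is what reduces switching and controller-bank size.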
Radiometry simulation within the end-to-end simulation tool SENSOR
NASA Astrophysics Data System (ADS)
Wiest, Lorenz; Boerner, Anko
2001-02-01
An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
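Multilinear interpolation of a radiometric look-up table reduces, per query, to a weighted average over the 2^d surrounding grid nodes. A minimal 2-D (bilinear) sketch with an invented toy LUT in place of MODTRAN output; a real implementation would additionally cache recently used cells, as the abstract describes:

```python
import numpy as np

# Toy 2-D look-up table: at-sensor radiance as a function of (water vapour,
# surface reflectance). A real MODTRAN LUT has more axes; the multilinear
# scheme generalises axis by axis.
wv_axis = np.linspace(0.0, 4.0, 9)          # water vapour grid
rho_axis = np.linspace(0.0, 1.0, 11)        # reflectance grid
WV, RHO = np.meshgrid(wv_axis, rho_axis, indexing="ij")
lut = 10.0 + 3.0 * WV + 50.0 * RHO          # placeholder radiative transfer

def bilinear(x, y):
    """Multilinear (here: bilinear) interpolation into the LUT."""
    i = np.clip(np.searchsorted(wv_axis, x) - 1, 0, len(wv_axis) - 2)
    j = np.clip(np.searchsorted(rho_axis, y) - 1, 0, len(rho_axis) - 2)
    tx = (x - wv_axis[i]) / (wv_axis[i + 1] - wv_axis[i])
    ty = (y - rho_axis[j]) / (rho_axis[j + 1] - rho_axis[j])
    return ((1 - tx) * (1 - ty) * lut[i, j] + tx * (1 - ty) * lut[i + 1, j]
            + (1 - tx) * ty * lut[i, j + 1] + tx * ty * lut[i + 1, j + 1])

print(bilinear(1.3, 0.47))  # exact here because the toy LUT is itself linear
```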
Image Science and Analysis Group Spacecraft Damage Detection/Characterization
NASA Technical Reports Server (NTRS)
Wheaton, Ira M., Jr.
2010-01-01
This project consisted of several tasks that could be served by an intern to assist the ISAG in detecting damage to spacecraft during missions. First, this project focused on supporting the Micrometeoroid Orbital Debris (MMOD) damage detection and assessment for the Hubble Space Telescope (HST) using imagery from the last two HST Shuttle servicing missions. In this project, we used the coordinates of two windows on the Shuttle aft flight deck from which images were taken and the coordinates of three ID points in order to calculate the distance from each window to the three points. Then, using the specifications of the camera used, we calculated the image scale in pixels per inch for planes parallel to the image plane and for planes offset in the z-direction (shown in Table 1). This will help in the future for calculating measurements of objects in the images. Next, tabulation and statistical analysis were conducted for screening results (shown in Table 2) of imagery with Orion Thermal Protection System (TPS) damage. Using the Microsoft Excel CRITBINOM function and Goal Seek, the probabilities of detection of damage to different Shuttle tiles were calculated, as shown in Table 3. Using developed measuring tools, volume and area measurements will be created from 3D models of Orion TPS damage. Last, mathematical expertise was provided to the Photogrammetry Team. These mathematical tasks consisted of developing elegant image space error equations for observations along 3D lines, circles, planes, etc. and checking proofs for minimal sets of sufficient multi-linear constraints. Some of the processes and resulting equations are displayed in Figure 1.
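The Excel CRITBINOM/Goal Seek computation described above has a direct counterpart in code: invert the binomial CDF, and bisect on the per-tile detection probability needed to reach a target confidence. A stdlib-only sketch (tile counts and the confidence level are illustrative, not the project's values):

```python
from math import comb

def critbinom(n, p, alpha):
    """Smallest k with Binomial(n, p) CDF >= alpha -- the Excel CRITBINOM."""
    cdf = 0.0
    for k in range(n + 1):
        cdf += comb(n, k) * p ** k * (1 - p) ** (n - k)
        if cdf >= alpha:
            return k
    return n

def min_detection_prob(n, k_required, alpha=0.95, tol=1e-6):
    """Goal-seek analogue: smallest per-tile detection probability p such
    that at least k_required of n damaged tiles are detected with
    confidence alpha, found by bisection on the binomial tail."""
    def tail(p):  # P(X >= k_required)
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(k_required, n + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tail(mid) >= alpha:
            hi = mid
        else:
            lo = mid
    return hi

print(critbinom(10, 0.5, 0.95))        # -> 8
print(round(min_detection_prob(20, 18), 3))
```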
Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M
2014-06-19
An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
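Segmented regression as described reduces to an ordinary regression with four terms: an intercept, the pre-intervention trend, a post-intervention level-change indicator, and the time elapsed since the intervention. A sketch on synthetic data (all effect sizes are invented; a real ITS analysis would also address autocorrelation and seasonality):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monthly quality indicator: 24 points before and 24 after an intervention.
t = np.arange(48, dtype=float)
post = (t >= 24).astype(float)            # indicator: post-intervention
t_since = np.where(t >= 24, t - 24, 0.0)  # time elapsed since intervention

# Synthetic truth: baseline level 50, slope 0.2, level jump +5 and slope
# change +0.3 at the intervention (all values illustrative).
y = 50 + 0.2 * t + 5.0 * post + 0.3 * t_since + rng.normal(0, 0.5, 48)

# Segmented design: intercept, pre-trend, level change, trend change.
X = np.column_stack([np.ones_like(t), t, post, t_since])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(b, 2))  # [baseline, pre-slope, level change, slope change]
```

The estimated level change (b[2]) and slope change (b[3]) are exactly the quantities used to test causal hypotheses about the intervention.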
Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis
ERIC Educational Resources Information Center
Kim, Rae Seon
2011-01-01
When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…
NASA Astrophysics Data System (ADS)
Madurell, T.; Cartes, J. E.
2005-11-01
Daily food consumption of the eight dominant demersal fish species of the bathyal eastern Ionian Sea was determined from field data on four seasonal cruises (April 1999, July-August 1999, November 1999 and February 2000). Daily ration (DR) estimates ranged from 0.198 to 4.273% WW/WW. Overall, DR estimates were independent of the model used, and they were comparable to the daily consumption of other deep-sea fauna (e.g. fish and crustaceans). Both sharks studied (Galeus melastomus and Etmopterus spinax) exhibited the highest DRs, together with the macrourid Coelorhynchus coelorhynchus in August. Among osteichthyes, DR estimates were related (in a multi-linear regression model) to the nature of their diet (i.e. their trophic level deduced from δ15N isotopic composition, the mean number of prey and trophic diversity). Thus, species feeding at a lower trophic level, ingesting a large number of prey items and with a very diversified diet had higher DR than species from a higher trophic level feeding on fewer prey items. By season, the DRs of species feeding mainly on mesopelagic prey (Hoplostethus mediterraneus and Helicolenus dactylopterus) were higher in summer, while DRs for benthos/suprabenthos feeders (i.e. C. coelorhynchus and Nezumia sclerorhynchus) were higher in spring. Higher food consumption coincides with maximum food availability, both among mesopelagic feeders (higher availability of euphausiids, Pasiphaea sivado and Sergestes arcticus in summer) and among Macrouridae (higher suprabenthos densities in spring). In a tentative estimate, the energy intake deduced from diet (i.e. mean energy value of food ingested) was constant in all seasons for each species studied. Results also indicate a higher energy intake in the diet of mesopelagic feeders than in the diet of benthic feeders. Overall results are discussed in relation to deep-sea ecosystem structure and functioning.
Kritikos, Nikolaos; Tsantili-Kakoulidou, Anna; Loukas, Yannis L; Dotsikas, Yannis
2015-07-17
In the current study, quantitative structure-retention relationships (QSRR) were constructed based on data obtained by an LC-(ESI)-QTOF-MS/MS method for the determination of amino acid analogues, following their derivatization via chloroformate esters. Molecules were derivatized via an n-propyl chloroformate/n-propanol mediated reaction, and the derivatives were acquired through a liquid-liquid extraction procedure. Chromatographic separation was based on gradient elution using methanol/water mixtures from a 70/30% composition to a final 85/15% one, maintaining a constant rate of change. The group of examined molecules was diverse, including mainly α-amino acids, but also β- and γ-amino acids, γ-amino acid analogues, decarboxylated and phosphorylated analogues, and dipeptides. The projection to latent structures (PLS) method was selected for the formation of the QSRRs, resulting in a total of three PLS models with high cross-validated coefficients of determination Q²Y. To this end, molecular structures were first described through the use of molecular descriptors. Through stratified random sampling procedures, the 57 compounds were split into a training set and a test set. Model creation was based on multiple criteria, including principal component significance and eigenvalue, variable importance, form of residuals, etc. Validation was based on the statistical metrics R²pred, Q²ext(F2) and Q²ext(F3) for the test set and Roy's metrics r²m(av) and Δr²m, assessing both predictive stability and internal validity. Based on the aforementioned models, simplified equivalents were then created using a multi-linear regression (MLR) method. The MLR models were also validated with the same metrics. The suggested models are considered useful for the estimation of retention times of amino acid analogues in a range of applications. Copyright © 2015 Elsevier B.V. All rights reserved.
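The external-validation logic behind metrics such as R²pred can be sketched with a plain MLR surrogate: fit on a training split, then score predictions on held-out compounds against the training-set mean. The descriptors and retention times below are synthetic stand-ins, and plain random splitting replaces the paper's stratified sampling:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy descriptor matrix (57 compounds x 4 descriptors) and retention data.
n, m = 57, 4
X = rng.normal(size=(n, m))
coef_true = np.array([2.0, -1.5, 0.8, 0.3])
rt = 12.0 + X @ coef_true + rng.normal(0.0, 0.3, n)

# Random split into training (42) and test (15) compounds.
idx = rng.permutation(n)
train, test = idx[:42], idx[42:]

# Fit the MLR model on the training set only.
A = np.column_stack([np.ones(len(train)), X[train]])
b, *_ = np.linalg.lstsq(A, rt[train], rcond=None)

# External predictive R^2 (Q2_F1-style): 1 - PRESS / SS about the
# training-set mean, evaluated on the held-out compounds.
pred = np.column_stack([np.ones(len(test)), X[test]]) @ b
press = np.sum((rt[test] - pred) ** 2)
ss = np.sum((rt[test] - rt[train].mean()) ** 2)
q2 = 1.0 - press / ss
print(round(q2, 3))
```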
Biobrane versus topical agents in the treatment of adult scald burns.
Krezdorn, Nicco; Könneker, Sören; Paprottka, Felix Julian; Tapking, Christian; Mett, Tobias R; Brölsch, G Felix; Boyce, Maria; Ipaktchi, Ramin; Vogt, Peter M
2017-02-01
Limited data are available on the treatment of scald lesions in adults. The use of the biosynthetic matrix Biobrane® has been suggested as a treatment option with more benefits than topical dressings. Application of Biobrane® to scalds in our center led to a perceived increase in infection, secondary deepening, surgery and length of stay. We therefore assessed the effect of different treatment options for adult scalds in our center. We performed a retrospective cohort study of adult patients admitted with scalds to our center between 2011 and 2014. We assessed two groups: group 1 with Biobrane® as initial treatment and group 2 with topical treatment using polyhexanide hydrogel and fatty gauze. Primary outcome variables were the rate of secondary deepening, surgery, infection (defined as positive microbiological swabs and antibiotic treatment) and length of stay. Total body surface area (TBSA) as well as diabetes mellitus (DM), hypertension, smoking and alcohol consumption were included as potential confounders. A total of 52 patients were included in this study: 36 patients received treatment with Biobrane® and 16 with ointment and fatty gauze. No significant differences were found for age and TBSA, whereas the gender ratio differed (25/11 male/female in group 1 vs 4/12 in group 2, p=0.003). Rates of secondary deepening, surgery and infection, as well as days of hospital stay (DOHS), were comparable. Logistic and multilinear regression showed TBSA to be a predictive factor for infection (p=0.041), and TBSA and age for length of stay (age p=0.036; TBSA p=0.042) in group 1. The use of Biobrane® for adult scald lesions is safe and non-inferior to topical treatment options. In older patients and those with larger TBSA, Biobrane® may increase the risk of infection or a prolonged hospital stay. Level 3 - retrospective cohort study. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
Hashimoto, Ken; Zúniga, Concepción; Romero, Eduardo; Morales, Zoraida; Maguire, James H.
2015-01-01
Background Central American countries face a major challenge in the control of Triatoma dimidiata, a widespread vector of Chagas disease that cannot be eliminated. The key to maintaining the risk of transmission of Trypanosoma cruzi at lowest levels is to sustain surveillance throughout endemic areas. Guatemala, El Salvador, and Honduras integrated community-based vector surveillance into local health systems. Community participation was effective in detection of the vector, but some health services had difficulty sustaining their response to reports of vectors from the population. To date, no research has investigated how best to maintain and reinforce health service responsiveness, especially in resource-limited settings. Methodology/Principal Findings We reviewed surveillance and response records of 12 health centers in Guatemala, El Salvador, and Honduras from 2008 to 2012 and analyzed the data in relation to the volume of reports of vector infestation, local geography, demography, human resources, managerial approach, and results of interviews with health workers. Health service responsiveness was defined as the percentage of households that reported vector infestation for which the local health service provided indoor residual spraying of insecticide or educational advice. Eight potential determinants of responsiveness were evaluated by linear and mixed-effects multi-linear regression. Health service responsiveness (overall 77.4%) was significantly associated with quarterly monitoring by departmental health offices. Other potential determinants of responsiveness were not found to be significant, partly because of short- and long-term strategies, such as temporary adjustments in manpower and redistribution of tasks among local participants in the effort. Conclusions/Significance Consistent monitoring within the local health system contributes to sustainability of health service responsiveness in community-based vector surveillance of Chagas disease. 
Even with limited resources, countries can improve health service responsiveness with thoughtful strategies and management practices in the local health systems. PMID:26252767
Present and future responses of growing degree days for Crete Island in Greece
NASA Astrophysics Data System (ADS)
Paparrizos, Spyridon; Matzarakis, Andreas
2017-02-01
Climate affects practically all the physiological processes that determine plant life (IPCC, 2014). A major challenge and objective of agricultural science is to predict the occurrence of specific physical or biological events. For this reason, flower phenology has been widely used to study flowering in plant species of economic interest, and in this context, temperature and heat units have been widely accepted as the most important factors affecting the processes leading to flowering. The heat requirements of plants during their first developmental phases are expressed as Growing Degree Days (GDD). Determination of GDD is useful for achieving a better understanding of flowering season development in several plant species, and for forecasting when flowering will occur (Paparrizos and Matzarakis, 2017). Temperature and GDD represent two important spatially dynamic climatic variables, as both play vital roles in influencing forest development by directly affecting plant functions such as evapotranspiration, photosynthesis and plant transpiration. Understanding the spatial distribution of GDD is crucial to the practice of sustainable agricultural and forest management, as GDD relates to the integration of growth and provides precise point estimates (Hasan et al., 2007; Matzarakis et al., 2007). The aim of the current study was to estimate and map, through downscaling, spatial interpolation and multi-linear regression techniques, the future variation of GDD for the periods 2021-2050 and 2071-2100 under the A1B and B1 IPCC emission scenarios, relative to the reference periods, for Crete Island in Greece. Future temperature data were obtained, validated and analysed from the ENSEMBLES European project. A combination of dynamical and statistical approaches was used to downscale and perform the spatial interpolation of GDD through ArcGIS 10.2.1.
The results indicated that in the future, GDD will be increased and the existing cultivations can reach maturity sooner. Nevertheless, rough topography will act as an inhibitor towards the expansion of the existing cultivations in higher altitudes.
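The GDD computation itself is simple: accumulate the positive part of the daily mean temperature above a base temperature. A minimal sketch (the 10 °C base is a common generic choice, not the study's calibrated value, and the week of temperatures is invented):

```python
import numpy as np

def growing_degree_days(t_max, t_min, t_base=10.0):
    """Accumulated GDD from daily max/min temperatures (deg C).

    Daily GDD = max(0, (Tmax + Tmin)/2 - Tbase). Tbase = 10 deg C is a
    common generic choice for many crops, not a study-specific value.
    """
    daily_mean = 0.5 * (np.asarray(t_max) + np.asarray(t_min))
    return float(np.sum(np.maximum(0.0, daily_mean - t_base)))

# One illustrative week of daily temperatures.
t_max = [18.0, 22.0, 25.0, 14.0, 27.0, 30.0, 21.0]
t_min = [ 8.0, 12.0, 13.0,  6.0, 15.0, 18.0, 11.0]
print(growing_degree_days(t_max, t_min))  # -> 50.0
```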
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in the QSRR development in the gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in the GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study-I) and adamantane derivatives (case study-II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both the case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and an effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. 
Copyright © 2015 Elsevier B.V. All rights reserved.
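The contrast drawn above, searching the *form* of the model as well as its parameters, can be illustrated with a deliberately tiny surrogate: an exhaustive search over combinations of candidate basis functions, each fitted by least squares. Real GP instead evolves expression trees by crossover and mutation; the data and basis set here are invented:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Toy data with a nonlinear structure-retention relationship.
x = rng.uniform(0.5, 3.0, 80)
y = 4.0 + 2.0 * x ** 2 + rng.normal(0.0, 0.1, 80)

# Candidate basis functions ("forms") the search may combine.
basis = {
    "x": lambda v: v,
    "x^2": lambda v: v ** 2,
    "log x": lambda v: np.log(v),
    "1/x": lambda v: 1.0 / v,
}

best = None
for r in (1, 2):
    for names in combinations(basis, r):
        # Fit the parameters of this candidate form by least squares.
        A = np.column_stack([np.ones(len(x))] + [basis[n](x) for n in names])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, names, coef)

print(best[1], np.round(best[2], 2))
```

The search correctly selects a form containing the quadratic term, mirroring (on a toy scale) how GP discovers both structure and coefficients rather than fitting a pre-specified equation.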
NASA Astrophysics Data System (ADS)
Ramos, Alexandre M.; Cordeiro Pires, Ana; Sousa, Pedro M.; Trigo, Ricardo M.
2013-04-01
Coastal upwelling is a phenomenon that occurs along most western oceanic coasts due to the presence of mid-latitude high-pressure systems that generate equatorward winds along the coast and a consequent offshore displacement of surface waters, which in turn causes deeper, colder, nutrient-rich waters to rise. In the western Iberian Peninsula (IP), the high-pressure system associated with northerly winds occurs mainly during spring and summer. Upwelling systems are economically relevant, being the most productive regions of the world ocean and crucial for fisheries. In this work, we evaluate the intra- and inter-annual variability of the Upwelling Index (UI) off the western coast of the IP considering four locations at various latitudes: Rias Baixas, Aveiro, Figueira da Foz and Cabo da Roca. In addition, the relationship between the variability in the occurrence of several circulation weather types (Ramos et al., 2011) and the UI variability along this coast was assessed in detail, allowing us to discriminate which types are frequently associated with strong and weak upwelling activity. It is shown that upwelling activity is mostly driven by wind flow from the northern quadrant, for which the obtained correlation coefficients (for the N and NE types) are higher than 0.5 for the four considered test locations. Taking into account these significant relationships, we then developed statistical multi-linear regression models to hindcast upwelling series (April to September) at the four referred locations, using monthly frequencies of circulation weather types as predictors. Modelled monthly series reproduce the observational data quite accurately, with correlation coefficients above 0.7 for all locations and relatively small absolute errors. Ramos AM, Ramos R, Sousa P, Trigo RM, Janeira M, Prior V (2011) Cloud to ground lightning activity over Portugal and its association with Circulation Weather Types. Atmospheric Research 101:84-101. doi: 10.1016/j.atmosres.2011.01
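The hindcast model described above is a plain multilinear regression of the monthly upwelling index on weather-type frequencies. A synthetic sketch (three invented weather types and effect sizes, fitted and then scored by correlation, the same criterion the study reports):

```python
import numpy as np

rng = np.random.default_rng(5)

# Monthly occurrence counts (days) of three weather types over 60 months;
# the type labels and rates are illustrative, not the study's catalogue.
months = 60
freq = rng.poisson(lam=(12.0, 10.0, 8.0), size=(months, 3)).astype(float)

# Synthetic "observed" upwelling index: northerly-flow types push it up,
# the third (southerly) type pulls it down.
w_true = np.array([3.0, 2.0, -1.5])
ui = 10.0 + freq @ w_true + rng.normal(0.0, 2.0, months)

# Multilinear hindcast model: UI ~ intercept + weather-type frequencies.
X = np.column_stack([np.ones(months), freq])
b, *_ = np.linalg.lstsq(X, ui, rcond=None)
hindcast = X @ b

r = np.corrcoef(ui, hindcast)[0, 1]
print(round(float(r), 3))
```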
Using Dominance Analysis to Determine Predictor Importance in Logistic Regression
ERIC Educational Resources Information Center
Azen, Razia; Traxel, Nicole
2009-01-01
This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R² analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…
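The core mechanic of dominance analysis can be illustrated in a few lines: fit the logistic model over every subset of predictors, score each fit with a pseudo-R² analogue (McFadden's is used here, though the article evaluates several), and average each predictor's incremental contribution across subsets. Data and predictor effects below are synthetic.

```python
# Hedged sketch of dominance analysis for logistic regression with a
# McFadden pseudo-R^2 analogue; not the article's exact procedure.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))                       # predictors x1, x2, x3
logit = 1.2 * X[:, 0] + 0.6 * X[:, 1] + 0.0 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

def mcfadden_r2(cols):
    """McFadden pseudo-R^2 for a logistic model on the given columns."""
    if not cols:
        p = np.full(n, y.mean())                  # null model
    else:
        m = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, cols], y)
        p = m.predict_proba(X[:, cols])[:, 1]
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    p0 = y.mean()
    ll0 = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    return 1 - ll / ll0

# Average incremental R^2 of each predictor over all subsets of the others
importance = {}
for j in range(3):
    others = [k for k in range(3) if k != j]
    subsets = [list(s) for size in range(len(others) + 1)
               for s in combinations(others, size)]
    gains = [mcfadden_r2(s + [j]) - mcfadden_r2(s) for s in subsets]
    importance[j] = float(np.mean(gains))
print(importance)  # x1 should dominate x2, which dominates x3
```
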
NASA Astrophysics Data System (ADS)
Caseiro, Alexandre; Oliveira, César; Pio, Casimiro; Nunes, Teresa; Santos, Patrícia; Mao, Hongjun; Sokhi, Ranjeet; Luhanna, Lakhu
2010-05-01
Particulate matter, whether with aerodynamic diameter below 10 μm (PM10) or restricted to the fine (aerodynamic diameter below 2.5 μm, PM2.5) or coarse (aerodynamic diameter between 2.5 and 10 μm, PM2.5-10) mode, is presently regarded as one of the main threats to public health posed by air pollution. The levels of ambient air particulates are regulated, but the limits are frequently surpassed. It is therefore necessary to identify and quantify PM sources and their variability, as well as the biogenic processes that to some extent control their ambient load, in order to effectively regulate the anthropogenic activities that generate PM. PM2.5-10 and PM2.5 were monitored in Oporto, NW Portugal, at two contrasting sites (one directly impacted by traffic, roadside, and one at the urban background) during two one-month campaigns (winter and summer). Sampling was conducted independently during daytime and night-time. Out of the 207 sampling periods analysed, 38 (18%) were above the European legal PM10 limit of 50 μg m-3. PM2.5 concentrations above the limit of 25 μg m-3 proposed by the EC occurred in 70 out of 202 sampling periods (35%). More exceedances occurred in winter than in summer and at the roadside than at the urban background. Within the scope of this work, the relationship between PM concentrations, namely the occurrence of exceedances of PM limit values, and meteorological variables or the sampling period (day/night, workday/weekend) will be presented. Besides PM mass, the soluble ionic composition (Cl-, SO42-, NO3-, Na+, NH4+, K+, Ca2+ and Mg2+) as well as the elemental composition (Al, Si, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Ga, As, Se, Br, Rb, Sr, Zr, Sn, Ba and Pb) were determined. This allowed the application of multivariate analysis (principal component analysis with multi-linear regression analysis, PCA-MLRA, and positive matrix factorisation, PMF). 
Five main sources were identified in the fine and coarse modes (direct road traffic emissions, industrial activities related with refuse incineration or metallurgy, soil dust emissions, sea salt and fuel oil combustion coupled to secondary formation). The contribution of the various sources or source types to the PM load was calculated. A comparison between the relative contribution of the various sources or source types during exceeding and non-exceeding periods is conducted in order to assess if the exceeding periods may be attributed to a particular origin. Also, the concentration and relative contribution to total PM mass of the various PM constituents measured during exceedance and non-exceedance episodes is compared in order to assess their variability between the two types of events.
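The PCA-MLRA approach used in this record has two steps that a small synthetic example can make concrete: PCA extracts factors from the species matrix, then total PM mass is regressed on the factor scores so each factor is scaled into a mass contribution. The two source profiles below are invented, not the campaign's.

```python
# Hedged sketch of the PCA-MLRA idea with synthetic data (not the Oporto
# campaign data): PCA on species concentrations, then multilinear regression
# of PM mass on the factor scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_samples = 200
# Two hypothetical sources with fixed species profiles over 4 species
profiles = np.array([[0.5, 0.3, 0.1, 0.1],     # e.g. a "traffic-like" profile
                     [0.1, 0.1, 0.4, 0.4]])    # e.g. a "sea-salt-like" profile
strengths = rng.gamma(2.0, 5.0, size=(n_samples, 2))   # daily source strengths
species = strengths @ profiles + rng.normal(0, 0.2, (n_samples, 4))
pm_mass = strengths.sum(axis=1) + rng.normal(0, 0.5, n_samples)

# Step 1: PCA on standardized species concentrations
Z = (species - species.mean(0)) / species.std(0)
scores = PCA(n_components=2).fit_transform(Z)

# Step 2: multilinear regression of PM mass on factor scores
X = np.column_stack([np.ones(n_samples), scores])
coef, *_ = np.linalg.lstsq(X, pm_mass, rcond=None)
pm_hat = X @ coef
r = np.corrcoef(pm_mass, pm_hat)[0, 1]
print("mass explained, r =", round(r, 3))
```
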
Van Laere, Koen; Ahmad, Rawaha U; Hudyana, Hendra; Dubois, Kristof; Schmidt, Mark E; Celen, Sofie; Bormans, Guy; Koole, Michel
2013-08-01
Phosphodiesterase 10A (PDE10A) plays a central role in striatal signaling and is implicated in several neuropsychiatric disorders, such as movement disorders and schizophrenia. We performed initial brain kinetic modeling of the novel PDE10A tracer (18)F-JNJ-42259152 (2-[[4-[1-(2-(18)F-fluoroethyl)-4-(4-pyridinyl)-1H-pyrazol-3-yl]phenoxy]methyl]-3,5-dimethyl-pyridine) and studied test-retest reproducibility in healthy volunteers. Twelve healthy volunteers (5 men, 7 women; age range, 42-77 y) were scanned dynamically up to 135 min after bolus injection of 172.5 ± 10.3 MBq of (18)F-JNJ42259152. Four volunteers (2 men, 2 women) underwent retest scanning, with a mean interscan interval of 37 d. Input functions and tracer parent fractions were determined using arterial sampling and high-performance liquid chromatography analysis. Volumes of interest for the putamen, caudate nucleus, ventral striatum, substantia nigra, thalamus, frontal cortex, and cerebellum were delineated using individual volumetric T1 MR imaging scans. One-tissue (1T) and 2-tissue (2T) models were evaluated to calculate total distribution volume (VT). Simplified models were also tested to calculate binding potential (BPND), including the simplified reference tissue model (SRTM) and multilinear reference tissue model, using the frontal cortex as the optimal reference tissue. The stability of VT and BPND was assessed down to a 60-min scan time. The average intact tracer half-life in blood was 90 min. The 2T model VT values for the putamen, caudate nucleus, ventral striatum, substantia nigra, thalamus, frontal cortex, and cerebellum were 1.54 ± 0.37, 0.90 ± 0.24, 0.64 ± 0.18, 0.42 ± 0.09, 0.35 ± 0.09, 0.30 ± 0.07, and 0.36 ± 0.12, respectively. The 1T model provided significantly lower VT values, which were well correlated to the 2T VT. 
SRTM BPND values referenced to the frontal cortex were 3.45 ± 0.43, 1.78 ± 0.35, 1.10 ± 0.31, and 0.44 ± 0.09 for the respective target regions putamen, caudate nucleus, ventral striatum, and substantia nigra, with similar values for the multilinear reference tissue model. Good correlations were found for the target regions putamen, caudate nucleus, ventral striatum, and substantia nigra between the 2T-compartment model BPND and the SRTM BPND (r = 0.57, 0.82, 0.70, and 0.64, respectively). SRTM BPND using a 90- and 60-min acquisition interval showed low bias. Test-retest variability was 5%-19% for 2T VT and 5%-12% for BPND SRTM. Kinetic modeling of (18)F-JNJ-42259152 shows that PDE10A activity can be reliably quantified and simplified using a reference tissue model with the frontal cortex as reference and a 60-min acquisition period.
An industrial perspective of the LANDSAT opportunity
NASA Technical Reports Server (NTRS)
Williams, B. F.
1981-01-01
The feasibility of enhancing LANDSAT products to provide the most usable, low-cost data possible can be determined through government sponsorship and financing of one or more task forces composed of a critical number of experts in multiple disciplines from many industries and academia. The synergism of multiple minds addressing singular problems without the creation of permanent or perpetual structures must yield output in the form of implementable specifications, even if presented as alternatives. Changes are needed within the spacecraft in order to account for Sun angle changes. The use of pointing accuracy to make geometric corrections (and possibly radiometric corrections) is needed more than onboard data reduction and information extraction, which assume a proper knowledge of application and reduce potential utilization. Multilinear arrays need to be investigated, and methods for sensor calibration, for determining the effects of atmospheric inversion, and for backing out the modulation transfer function must be determined.
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
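The "successive iterations" idea in this abstract can be illustrated with a minimal binary example: estimate the conditional probability of the next symbol given the last k symbols from empirical counts, then generate an artificial sequence from those estimates so that it reproduces the source's correlations. This is my own simplified sketch, not the paper's memory-function decomposition.

```python
# Minimal illustration (synthetic data): order-k Markov estimation of
# P(next symbol | last k symbols) and generation of an artificial sequence.
import numpy as np

rng = np.random.default_rng(3)
k = 2                                    # chain order
n = 20000
# Source sequence with genuine order-2 structure: the next bit repeats the
# bit two steps back with probability 0.8
src = [0, 1]
for _ in range(n - 2):
    src.append(src[-2] if rng.uniform() < 0.8 else 1 - src[-2])
src = np.array(src)

# Estimate conditional probabilities P(1 | word of length k) by counting
counts = np.zeros((2,) * k + (2,))
for i in range(n - k):
    counts[tuple(src[i:i + k]) + (src[i + k],)] += 1
cond_p1 = counts[..., 1] / counts.sum(axis=-1)

# Successive iteration: generate an artificial sequence from the estimates
art = [int(b) for b in src[:k]]
for _ in range(n - k):
    p1 = cond_p1[tuple(art[-k:])]
    art.append(int(rng.uniform() < p1))
art = np.array(art)

def corr(x, lag):
    """Autocorrelation of a binary sequence at the given lag."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# The artificial sequence reproduces the two-step autocorrelation of the source
print(round(corr(src, 2), 2), round(corr(art, 2), 2))
```
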
Decomposition of conditional probability for high-order symbolic Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
Acculturation and reacculturation influence: multilayer contexts in therapy.
Abu Baker, K
1999-12-01
Clients who live within a minority culture while being influenced by a dominant culture usually bring to therapy the impact of their multilayered cultural experience. The migration literature points to separation and marginalization during the acculturation process as the main causes of relocators' psychosocial problems. In contrast to other studies that view assimilation and integration within the dominant culture favorably, this study shows that these processes often lead to disharmony and disintegration within the home culture, especially among those who remigrate or those who continue to live simultaneously within the sending culture and the receiving culture. Additionally, this study emphasizes that acculturation often happens as a multilinear and multidimensional process within both the host culture and the sending culture. Therapists may help clients when they become aware of the complexity of the multidirectional process of acculturation and its various levels, such as the interfamilial, the intrafamilial, and the social. Three case studies illustrate the theoretical framework.
NASA Astrophysics Data System (ADS)
Višňák, Jakub; Steudtner, Robin; Kassahun, Andrea; Hoth, Nils
2017-09-01
Monitoring of uranium levels in natural waters is of great importance for health and environmental protection. One possible detection method is Time-Resolved Laser-Induced Fluorescence Spectroscopy (TRLFS), which offers the possibility to distinguish different uranium species. The analytical identification of aqueous uranium species in natural water samples is of distinct importance since individual species differ significantly in sorption properties and mobility in the environment. Samples originate from former uranium mine sites and have been provided by Wismut GmbH, Germany. They have been characterized by total elemental concentrations and TRLFS spectra. Uranium in the samples is supposed to be in the form of uranyl(VI) complexes, mostly with carbonate (CO32-) and bicarbonate (HCO3-) and to a lesser extent with sulphate (SO42-), arsenate (AsO43-), hydroxo (OH-), nitrate (NO3-) and other ligands. The presence of alkaline earth metal dications (M = Ca2+, Mg2+, Sr2+) causes most of the uranyl to prefer ternary complex species, e.g. Mn(UO2)(CO3)3^(2n-4) (n ∈ {1, 2}). Among species quenching the luminescence, Cl- and Fe2+ should be mentioned. Measurements were performed under cryogenic conditions to increase the luminescence signal. Data analysis has been based on Singular Value Decomposition and monoexponential fitting of the corresponding loadings (for separate TRLFS spectra, the "Factor analysis of Time Series" (FATS) method) and Parallel Factor Analysis (PARAFAC, all data analysed simultaneously). From individual component spectra, excitation energies T00, uranyl symmetric-mode vibrational frequencies ωgs and excitation-driven U-Oyl bond elongations ΔR have been determined and compared with quasirelativistic (TD)DFT/B3LYP theoretical predictions to cross-check the experimental data interpretation. Note to the reader: Several errors have been produced in the initial version of this article. This new version published on 23 October 2017 contains all the corrections.
Regression: The Apple Does Not Fall Far From the Tree.
Vetter, Thomas R; Schober, Patrick
2018-05-15
Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.
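Of the phenomena this tutorial covers, regression toward the mean is the one most easily shown numerically: subjects selected for extreme values on a first noisy measurement score closer to the average on a second measurement, with no intervention at all. The numbers below are synthetic.

```python
# Quick numerical illustration (synthetic data) of regression toward the mean.
import numpy as np

rng = np.random.default_rng(4)
true_value = rng.normal(100, 10, size=5000)      # stable underlying trait
test1 = true_value + rng.normal(0, 10, 5000)     # two noisy measurements
test2 = true_value + rng.normal(0, 10, 5000)

extreme = test1 > 120                            # select high scorers on test 1
print(f"test 1 mean of extreme group: {test1[extreme].mean():.1f}")
print(f"test 2 mean of extreme group: {test2[extreme].mean():.1f}")  # nearer 100
```

The second mean shrinks toward 100 purely because measurement noise contributed to the selection, which is why uncontrolled before/after comparisons of extreme groups mislead.
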
Applied Multiple Linear Regression: A General Research Strategy
ERIC Educational Resources Information Center
Smith, Brandon B.
1969-01-01
Illustrates some of the basic concepts and procedures for using regression analysis in experimental design, analysis of variance, analysis of covariance, and curvilinear regression. Applications to evaluation of instruction and vocational education programs are illustrated. (GR)
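The unifying point of this record, that analysis of variance is a special case of multiple linear regression, can be demonstrated with dummy-coded group membership. The three-group data below are synthetic.

```python
# Sketch: one-way ANOVA expressed as multiple linear regression with dummy
# coding (synthetic three-group data).
import numpy as np

rng = np.random.default_rng(5)
groups = np.repeat([0, 1, 2], 30)
y = np.array([5.0, 6.0, 8.0])[groups] + rng.normal(0, 1, 90)

# Dummy coding: group 0 is the reference level
X = np.column_stack([np.ones(90), groups == 1, groups == 2]).astype(float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The intercept equals group 0's mean; the slopes are mean differences
print(coef.round(2))
print(y[groups == 0].mean().round(2))   # matches coef[0]
```
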
NASA Technical Reports Server (NTRS)
Parsons, Vickie s.
2009-01-01
The request to conduct an independent review of regression models, developed for determining the expected Launch Commit Criteria (LCC) External Tank (ET)-04 cycle count for the Space Shuttle ET tanking process, was submitted to the NASA Engineering and Safety Center (NESC) on September 20, 2005. The NESC team performed an independent review of the regression models documented in Prepress Regression Analysis, Tom Clark and Angela Krenn, 10/27/05. This consultation consisted of a peer review by statistical experts of the proposed regression models provided in the Prepress Regression Analysis. This document is the consultation's final report.
Shi, Guoliang; Chen, Gang; Liu, Guirong; Wang, Haiting; Tian, Yingze; Feng, Yinchang
2016-10-01
Modeled results are very important for environmental management; an unreasonable modeled result can lead to a wrong strategy for air pollution management. In this work, an improved physically constrained source apportionment (PCSA) technology known as Multilinear Engine 2-species ratios (ME2-SR) was developed and applied to 11-h daytime and nighttime fine ambient particulate matter in an urban area. Firstly, synthetic studies were carried out to explore the effectiveness of ME2-SR. The estimated source contributions were compared with the true values. The results suggest that, compared with the positive matrix factorization (PMF) model, the ME2-SR method could obtain more physically reliable outcomes, indicating that ME2-SR was effective, especially when apportioning datasets with no unknown source. Additionally, 11-h daytime and nighttime PM2.5 samples were collected from Tianjin in China. The sources of the 11-h daytime and nighttime fine ambient particulate matter were identified using the new method and the PMF model. The calculated source contributions for ME2-SR for daytime PM2.5 samples are resuspended dust (38.91 μg m(-3), 26.60%), sulfate and nitrate (38.60 μg m(-3), 26.39%), vehicle exhaust and road dust (38.26 μg m(-3), 26.16%) and coal combustion (20.14 μg m(-3), 13.77%), and those for nighttime PM2.5 samples are resuspended dust (18.78 μg m(-3), 12.91%), sulfate and nitrate (41.57 μg m(-3), 28.58%), vehicle exhaust and road dust (38.39 μg m(-3), 26.39%), and coal combustion (36.76 μg m(-3), 25.27%). The comparisons of the constrained versus unconstrained outcomes clearly suggest that the physical meaning of the ME2-SR results is interpretable and reliable, not only for the specified species values but also for the source contributions. The findings indicate that the ME2-SR method can be a useful tool in source apportionment studies for air pollution management. Copyright © 2016 Elsevier Ltd. All rights reserved.
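ME2-SR itself runs in the Multilinear Engine; as a loose stand-in for the factorization step, the sketch below applies non-negative matrix factorization (the same non-negativity constraint PMF imposes) to a synthetic species-by-sample matrix and recovers source contributions. The profiles and source names are invented, and no ratio constraints are implemented here.

```python
# Hedged sketch of constrained-factorization source apportionment using
# sklearn's NMF on synthetic data (not ME2-SR or the Tianjin data).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
profiles = np.array([[0.6, 0.3, 0.1],      # hypothetical "dust" profile
                     [0.1, 0.2, 0.7]])     # hypothetical "coal combustion"
contrib = rng.gamma(2.0, 10.0, size=(150, 2))              # sample contributions
data = (contrib @ profiles + rng.normal(0, 0.1, (150, 3))).clip(min=0)

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(data)              # estimated source contributions
H = model.components_                      # estimated source profiles
rel_err = model.reconstruction_err_ / np.linalg.norm(data)
print("relative reconstruction error:", round(rel_err, 4))
```
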
Palm, Brett B.; de Sá, Suzane S.; Day, Douglas A.; ...
2018-01-17
Secondary organic aerosol (SOA) formation from ambient air was studied using an oxidation flow reactor (OFR) coupled to an aerosol mass spectrometer (AMS) during both the wet and dry seasons at the Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) field campaign. Measurements were made at two sites downwind of the city of Manaus, Brazil. Ambient air was oxidized in the OFR using variable concentrations of either OH or O3, over ranges from hours to days (O3) or weeks (OH) of equivalent atmospheric aging. The amount of SOA formed in the OFR ranged from 0 to as much as 10 μg m-3, depending on the amount of SOA precursor gases in ambient air. Typically, more SOA was formed during nighttime than daytime, and more from OH than from O3 oxidation. SOA yields of individual organic precursors under OFR conditions were measured by standard addition into ambient air, and confirmed to be consistent with published environmental chamber-derived SOA yields. Positive matrix factorization of organic aerosol (OA) after OH oxidation showed formation of typical oxidized OA factors and a loss of primary OA factors as OH aging increased. After OH oxidation in the OFR, the hygroscopicity of the OA increased with increasing elemental O : C up to O : C ~ 1.0, and then decreased as O : C increased further. Some possible reasons for this decrease are discussed. The measured SOA formation was compared to the amount predicted from the concentrations of measured ambient SOA precursors and their SOA yields. While measured ambient precursors were sufficient to explain the amount of SOA formed from O3, they could only explain 10-50 % of the SOA formed from OH. This is consistent with previous OFR studies which showed that typically unmeasured semivolatile and intermediate volatility gases (that tend to lack C = C bonds) are present in ambient air and can explain such additional SOA formation. 
To investigate the sources of the unmeasured SOA-forming gases during this campaign, multilinear regression analysis was performed between measured SOA formation and the concentration of gas-phase tracers representing different precursor sources. The majority of SOA-forming gases present during both seasons were of biogenic origin. Urban sources also contributed substantially in both seasons, while biomass burning sources were more important during the dry season. Our study enables a better understanding of SOA formation in environments with diverse emission sources.
NASA Astrophysics Data System (ADS)
Palm, Brett B.; de Sá, Suzane S.; Day, Douglas A.; Campuzano-Jost, Pedro; Hu, Weiwei; Seco, Roger; Sjostedt, Steven J.; Park, Jeong-Hoo; Guenther, Alex B.; Kim, Saewung; Brito, Joel; Wurm, Florian; Artaxo, Paulo; Thalman, Ryan; Wang, Jian; Yee, Lindsay D.; Wernis, Rebecca; Isaacman-VanWertz, Gabriel; Goldstein, Allen H.; Liu, Yingjun; Springston, Stephen R.; Souza, Rodrigo; Newburn, Matt K.; Lizabeth Alexander, M.; Martin, Scot T.; Jimenez, Jose L.
2018-01-01
Secondary organic aerosol (SOA) formation from ambient air was studied using an oxidation flow reactor (OFR) coupled to an aerosol mass spectrometer (AMS) during both the wet and dry seasons at the Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) field campaign. Measurements were made at two sites downwind of the city of Manaus, Brazil. Ambient air was oxidized in the OFR using variable concentrations of either OH or O3, over ranges from hours to days (O3) or weeks (OH) of equivalent atmospheric aging. The amount of SOA formed in the OFR ranged from 0 to as much as 10 µg m-3, depending on the amount of SOA precursor gases in ambient air. Typically, more SOA was formed during nighttime than daytime, and more from OH than from O3 oxidation. SOA yields of individual organic precursors under OFR conditions were measured by standard addition into ambient air and were confirmed to be consistent with published environmental chamber-derived SOA yields. Positive matrix factorization of organic aerosol (OA) after OH oxidation showed formation of typical oxidized OA factors and a loss of primary OA factors as OH aging increased. After OH oxidation in the OFR, the hygroscopicity of the OA increased with increasing elemental O : C up to O : C ˜ 1.0, and then decreased as O : C increased further. Possible reasons for this decrease are discussed. The measured SOA formation was compared to the amount predicted from the concentrations of measured ambient SOA precursors and their SOA yields. While measured ambient precursors were sufficient to explain the amount of SOA formed from O3, they could only explain 10-50 % of the SOA formed from OH. This is consistent with previous OFR studies, which showed that typically unmeasured semivolatile and intermediate volatility gases (that tend to lack C = C bonds) are present in ambient air and can explain such additional SOA formation. 
To investigate the sources of the unmeasured SOA-forming gases during this campaign, multilinear regression analysis was performed between measured SOA formation and the concentration of gas-phase tracers representing different precursor sources. The majority of SOA-forming gases present during both seasons were of biogenic origin. Urban sources also contributed substantially in both seasons, while biomass burning sources were more important during the dry season. This study enables a better understanding of SOA formation in environments with diverse emission sources.
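The multilinear-regression attribution step described in these records can be sketched with synthetic numbers: measured SOA formation is regressed on gas-phase tracers of the three source classes, with non-negative least squares keeping each source's coefficient physically meaningful. The tracer names and coefficients below are invented, not the campaign's values.

```python
# Illustrative sketch (synthetic data, not GoAmazon2014/5 measurements):
# non-negative multilinear regression of SOA formation on source tracers.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n = 100
tracers = rng.uniform(0.1, 2.0, size=(n, 3))   # columns: biogenic, urban, burning
true_coef = np.array([3.0, 1.0, 0.5])          # hypothetical SOA per unit tracer
soa = tracers @ true_coef + rng.normal(0, 0.3, n)

coef, _ = nnls(tracers, soa)                   # non-negative least squares
share = coef * tracers.mean(axis=0)
share /= share.sum()
for name, s in zip(["biogenic", "urban", "biomass burning"], share):
    print(f"{name}: {100 * s:.0f}% of modelled SOA")
```
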
Maintenance Operations in Mission Oriented Protective Posture Level IV (MOPPIV)
1987-10-01
[Table-of-contents excerpt] Repair FADAC Printed Circuit Board; Data Analysis Techniques: Multiple Linear Regression; Analysis/Discussion: Example of Regression Analysis, Regression Results for All Tasks; Task Grouping for Analysis; Remove/Replace H60A3 Power Pack.
NASA Technical Reports Server (NTRS)
Rummler, D. R.
1976-01-01
Results are presented from investigations applying regression techniques to the development of methodology for creep-rupture data analysis. Regression analysis techniques are applied to the explicit description of the creep behavior of materials for Space Shuttle thermal protection systems. A regression analysis technique is compared with five parametric methods for analyzing three simulated and twenty real data sets, and a computer program for the evaluation of creep-rupture data is presented.
Resting-state functional magnetic resonance imaging: the impact of regression analysis.
Yeh, Chia-Jung; Tseng, Yu-Sheng; Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi
2015-01-01
To investigate the impact of regression methods on resting-state functional magnetic resonance imaging (rsfMRI). During rsfMRI preprocessing, regression analysis is considered effective for reducing the interference of physiological noise on the signal time course. However, it is unclear whether the regression method benefits rsfMRI analysis. Twenty volunteers (10 men and 10 women; aged 23.4 ± 1.5 years) participated in the experiments. We used node analysis and functional connectivity mapping to assess the brain default mode network by using five combinations of regression methods. The results show that regressing the global mean plays a major role in the preprocessing steps. When a global regression method is applied, the values of functional connectivity are significantly lower (P ≤ .01) than those calculated without a global regression. This step increases inter-subject variation and produces anticorrelated brain areas. rsfMRI data processed using regression should be interpreted carefully. The significance of the anticorrelated brain areas produced by global signal removal is unclear. Copyright © 2014 by the American Society of Neuroimaging.
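The global-regression step this study examines is simple to demonstrate on toy time courses: regress the global mean signal out of every voxel, then recompute functional connectivity on the residuals, which lowers the correlation values as the abstract reports. The data below are synthetic, not fMRI.

```python
# Toy illustration (synthetic time courses) of global signal regression and
# its effect on functional-connectivity estimates.
import numpy as np

rng = np.random.default_rng(8)
t = 200
global_sig = rng.normal(size=t)                  # shared physiological signal
voxels = 0.8 * global_sig[:, None] + rng.normal(size=(t, 50))

# Regress the global mean out of every voxel time course, keep residuals
g = voxels.mean(axis=1)
G = np.column_stack([np.ones(t), g])
beta, *_ = np.linalg.lstsq(G, voxels, rcond=None)
residuals = voxels - G @ beta

before = np.corrcoef(voxels[:, 0], voxels[:, 1])[0, 1]
after = np.corrcoef(residuals[:, 0], residuals[:, 1])[0, 1]
print(f"connectivity before: {before:.2f}, after global regression: {after:.2f}")
```

The drop (and the tendency of global regression to push some correlations negative) is why the authors caution that anticorrelated areas produced this way are hard to interpret.
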
NASA Technical Reports Server (NTRS)
Deng, Xiaomin; Newman, James C., Jr.
1997-01-01
ZIP2DL is a two-dimensional, elastic-plastic finite element program for stress analysis and crack growth simulations, developed for the NASA Langley Research Center. It has many of the salient features of the ZIP2D program. For example, ZIP2DL contains five material models (linearly elastic, elastic-perfectly plastic, power-law hardening, linear hardening, and multi-linear hardening models), and it can simulate mixed-mode crack growth for prescribed crack growth paths under plane stress, plane strain, and mixed state-of-stress conditions. Further, as an extension of ZIP2D, it also includes a number of new capabilities. The large-deformation kinematics in ZIP2DL will allow it to handle elastic problems with large strains and large rotations, and elastic-plastic problems with small strains and large rotations. Loading conditions in terms of surface traction, concentrated load, and nodal displacement can be applied with a default linear time dependence, or they can be programmed according to a user-defined time dependence through a user subroutine. The restart capability of ZIP2DL makes it possible to stop the execution of the program at any time, analyze the results and/or modify execution options, and then resume execution of the program. This report includes three sections: a theoretical manual section, a user manual section, and an example manual section. In the theoretical section, the mathematics behind the various aspects of the program are concisely outlined. In the user manual section, a line-by-line explanation of the input data is given. In the example manual section, three types of examples are presented to demonstrate the accuracy and illustrate the use of this program.
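A multi-linear hardening model of the kind listed among ZIP2DL's material models represents the stress-strain curve as straight segments between user-supplied points. A minimal sketch using piecewise-linear interpolation, with an invented curve (not from the report):

```python
# Minimal sketch of a multi-linear hardening curve: piecewise-linear
# interpolation between hypothetical stress-strain points.
import numpy as np

# Hypothetical points: elastic up to yield, then two hardening slopes
strain = np.array([0.0, 0.002, 0.010, 0.050])      # total strain
stress = np.array([0.0, 400.0, 450.0, 480.0])      # MPa

def multilinear_stress(eps):
    """Stress from the multilinear curve (held flat past the last point)."""
    return np.interp(eps, strain, stress)

print(multilinear_stress(0.001))   # elastic branch: 200.0 MPa
print(multilinear_stress(0.006))   # first hardening segment: 425.0 MPa
```
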
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
Linear regression analysis: part 14 of a series on evaluation of scientific publications.
Schneider, Astrid; Hommel, Gerhard; Blettner, Maria
2010-11-01
Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
[A SAS macro program for batch processing of univariate Cox regression analysis for large databases].
Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin
2015-02-01
To realize batch processing of univariate Cox regression analysis for large databases with a SAS macro program. We wrote a SAS macro program in SAS 9.2 that can filter, integrate, and export P values to Excel. The program was used for screening survival-correlated RNA molecules of ovarian cancer. The SAS macro program could finish the batch processing of univariate Cox regression analysis and the selection and export of the results. The SAS macro program has potential applications in reducing the workload of statistical analysis and providing a basis for batch processing of univariate Cox regression analysis.
Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2009-01-01
In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…
USDA-ARS?s Scientific Manuscript database
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
Development of a User Interface for a Regression Analysis Software Tool
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen, which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and graphical user interface are discussed in order to illustrate the user interface's overall design approach.
Regression Analysis and the Sociological Imagination
ERIC Educational Resources Information Center
De Maio, Fernando
2014-01-01
Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.
Multivariate Regression Analysis and Slaughter Livestock,
(*AGRICULTURE, *ECONOMICS), (*MEAT, PRODUCTION), MULTIVARIATE ANALYSIS, REGRESSION ANALYSIS, ANIMALS, WEIGHT, COSTS, PREDICTIONS, STABILITY, MATHEMATICAL MODELS, STORAGE, BEEF, PORK, FOOD, STATISTICAL DATA, ACCURACY
Simultaneous tensor decomposition and completion using factor priors.
Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark
2014-03-01
The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
Regression Analysis: Legal Applications in Institutional Research
ERIC Educational Resources Information Center
Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.
2008-01-01
This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…
RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,
This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)
Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H
2017-05-10
We described the time trend of the acute myocardial infarction (AMI) incidence rate in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and the results were compared. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
Water quality parameter measurement using spectral signatures
NASA Technical Reports Server (NTRS)
White, P. E.
1973-01-01
Regression analysis is applied to the problem of measuring water quality parameters from remote sensing spectral signature data. The equations necessary to perform regression analysis are presented and methods of testing the strength and reliability of a regression are described. An efficient algorithm for selecting an optimal subset of the independent variables available for a regression is also presented.
Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha
2012-05-01
Estimation of stature is an important parameter in identification of human remains in forensic examinations. The present study is aimed to compare the reliability and accuracy of stature estimation and to demonstrate the variability in estimated stature and actual stature using multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements; hand length, hand breadth, foot length and foot breadth taken on the left side in each subject were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. Derived multiplication factors and regression formula were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from regression analysis method is less than that of multiplication factor method thus, confirming that the regression analysis method is better than multiplication factor analysis in stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions-variance homogeneity and normality-that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
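The Box-Cox transformation mentioned above has the standard form y^(λ) = (y^λ − 1)/λ for λ ≠ 0 and ln y for λ = 0. A minimal sketch, in which the data and the choice of λ are assumptions:

```python
# Box-Cox transformation of a positive response variable.
import math

def box_cox(y, lam):
    """Transform a positive value y; lam = 0 reduces to the log transform."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

data = [0.5, 1.0, 2.0, 4.0, 8.0]
transformed = [box_cox(v, 0.0) for v in data]   # lambda = 0 -> log transform
```

In practice λ is chosen by maximum likelihood rather than fixed in advance; the point here is only the form of the transform applied before refitting the nonlinear regression.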
Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan T.
2012-01-01
Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…
A Quality Assessment Tool for Non-Specialist Users of Regression Analysis
ERIC Educational Resources Information Center
Argyrous, George
2015-01-01
This paper illustrates the use of a quality assessment tool for regression analysis. It is designed for non-specialist "consumers" of evidence, such as policy makers. The tool provides a series of questions such consumers of evidence can ask to interrogate regression analysis, and is illustrated with reference to a recent study published…
Park, Ji Hyun; Kim, Hyeon-Young; Lee, Hanna; Yun, Eun Kyoung
2015-12-01
This study compares the performance of the logistic regression and decision tree analysis methods for assessing the risk factors for infection in cancer patients undergoing chemotherapy. The subjects were 732 cancer patients who were receiving chemotherapy at K university hospital in Seoul, Korea. The data were collected between March 2011 and February 2013 and were processed for descriptive analysis, logistic regression and decision tree analysis using the IBM SPSS Statistics 19 and Modeler 15.1 programs. The most common risk factors for infection in cancer patients receiving chemotherapy were identified as alkylating agents, vinca alkaloid and underlying diabetes mellitus. The logistic regression explained 66.7% of the variation in the data in terms of sensitivity and 88.9% in terms of specificity. The decision tree analysis accounted for 55.0% of the variation in the data in terms of sensitivity and 89.0% in terms of specificity. As for the overall classification accuracy, the logistic regression explained 88.0% and the decision tree analysis explained 87.2%. The logistic regression analysis showed a higher degree of sensitivity and classification accuracy. Therefore, logistic regression analysis is concluded to be the more effective and useful method for establishing an infection prediction model for patients undergoing chemotherapy. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality as current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
REGRESSION ANALYSIS OF SEA-SURFACE-TEMPERATURE PATTERNS FOR THE NORTH PACIFIC OCEAN.
SEA WATER, *SURFACE TEMPERATURE, *OCEANOGRAPHIC DATA, PACIFIC OCEAN, REGRESSION ANALYSIS, STATISTICAL ANALYSIS, UNDERWATER EQUIPMENT, DETECTION, UNDERWATER COMMUNICATIONS, DISTRIBUTION, THERMAL PROPERTIES, COMPUTERS.
The process and utility of classification and regression tree methodology in nursing research
Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda
2014-01-01
Aim: This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Background: Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Design: Discussion paper. Data sources: English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984–2013. Discussion: Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Implications for nursing research: Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Conclusion: Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. PMID:24237048
The process and utility of classification and regression tree methodology in nursing research.
Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda
2014-06-01
This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. © 2013 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.
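One step of the recursive partitioning behind classification and regression trees can be sketched as follows: scan candidate cut points on a covariate and keep the one that minimizes the summed squared error of the two child groups. This is an illustrative fragment with hypothetical data, not a full CART implementation (no recursion, no pruning).

```python
# Single best binary split for a regression tree (one partitioning step).
from statistics import mean

def best_split(x, y):
    """Return (threshold, sse) for the best split of y on covariate x."""
    best = (None, float("inf"))
    for threshold in sorted(set(x))[1:]:            # candidate cut points
        left = [yi for xi, yi in zip(x, y) if xi < threshold]
        right = [yi for xi, yi in zip(x, y) if xi >= threshold]
        m_left, m_right = mean(left), mean(right)
        sse = sum((v - m_left) ** 2 for v in left) + \
              sum((v - m_right) ** 2 for v in right)
        if sse < best[1]:
            best = (threshold, sse)
    return best

x = [1, 2, 3, 10, 11, 12]
y = [1.0, 1.2, 0.9, 5.0, 5.2, 4.9]
threshold, sse = best_split(x, y)
print(threshold)  # → 10, separating the two clusters
```

A full tree applies this step recursively to each child node and then prunes, which is where the data-quality and validity caveats discussed above come in.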
Hoch, Jeffrey S; Dewa, Carolyn S
2014-04-01
Economic evaluations commonly accompany trials of new treatments or interventions; however, regression methods and their corresponding advantages for the analysis of cost-effectiveness data are not well known. To illustrate regression-based economic evaluation, we present a case study investigating the cost-effectiveness of a collaborative mental health care program for people receiving short-term disability benefits for psychiatric disorders. We implement net benefit regression to illustrate its strengths and limitations. Net benefit regression offers a simple option for cost-effectiveness analyses of person-level data. By placing economic evaluation in a regression framework, regression-based techniques can facilitate the analysis and provide simple solutions to commonly encountered challenges. Economic evaluations of person-level data (eg, from a clinical trial) should use net benefit regression to facilitate analysis and enhance results.
CADDIS Volume 4. Data Analysis: Basic Analyses
Use of statistical tests to determine if an observation is outside the normal range of expected values. Details of CART, regression analysis, use of quantile regression analysis, CART in causal analysis, simplifying or pruning resulting trees.
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
Characterizing land processes in the biosphere
NASA Technical Reports Server (NTRS)
Erickson, J. D.; Tuyahov, A. J.
1984-01-01
NASA long-term planning for the satellite remote sensing of land areas is discussed from the perspective of a holistic interdisciplinary approach to the study of the biosphere. The earth is characterized as a biogeochemical system; the impact of human activity on this system is considered; and the primary scientific goals for their study are defined. Remote-sensing programs are seen as essential in gaining an improved understanding of energy budgets, the hydrological cycle, other biogeological cycles, and the coupling between these cycles, with the construction of a global data base and eventually the development of predictive simulation models which can be used to assess the impact of planned human activities. Current sensor development at NASA includes a multilinear array for the visible and IR and the L-band Shuttle Imaging Radar B, both to be flown on Shuttle missions in the near future; for the 1990s, a large essentially permanent man-tended interdisciplinary multisensor platform connected to an advanced data network is being planned.
Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.
Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C
2014-03-01
To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Dolan, Conor V.; Wicherts, Jelte M.; Molenaar, Peter C. M.
2004-01-01
We consider the question of how variation in the number and reliability of indicators affects the power to reject the hypothesis that the regression coefficients are zero in latent linear regression analysis. We show that power remains constant as long as the coefficient of determination remains unchanged. Any increase in the number of indicators…
Moderation analysis using a two-level regression model.
Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott
2014-10-01
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
Multiple Correlation versus Multiple Regression.
ERIC Educational Resources Information Center
Huberty, Carl J.
2003-01-01
Describes differences between multiple correlation analysis (MCA) and multiple regression analysis (MRA), showing how these approaches involve different research questions and study designs, different inferential approaches, different analysis strategies, and different reported information. (SLD)
Functional Relationships and Regression Analysis.
ERIC Educational Resources Information Center
Preece, Peter F. W.
1978-01-01
Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…
Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression
ERIC Educational Resources Information Center
Beckstead, Jason W.
2012-01-01
The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic…
General Nature of Multicollinearity in Multiple Regression Analysis.
ERIC Educational Resources Information Center
Liu, Richard
1981-01-01
Discusses multiple regression, a very popular statistical technique in the field of education. One of the basic assumptions in regression analysis requires that independent variables in the equation should not be highly correlated. The problem of multicollinearity and some of the solutions to it are discussed. (Author)
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
Carbon Dioxide Evasion from Boreal Lakes: Drivers, Variability and Revised Global Estimate
NASA Astrophysics Data System (ADS)
Hastie, A. T.; Lauerwald, R.; Weyhenmeyer, G. A.; Sobek, S.; Verpoorter, C.; Regnier, P. A. G.
2016-12-01
Carbon dioxide evasion (FCO2) from lakes and reservoirs is established as an important component of the global carbon (C) cycle, a fact reflected by the inclusion of these waterbodies in the most recent IPCC assessment report. In this study we developed a statistical model driven by environmental geodata to predict CO2 partial pressure (pCO2) in boreal lakes, and to create the first high resolution map (0.5°) of boreal (50°-70°) lake pCO2. The resulting map of pCO2 was combined with lake area (lakes >0.01km2) from the recently developed GLOWABO database (Verpoorter et al., 2014) and estimates of gas transfer velocity k, to produce the first high resolution map of boreal lake FCO2. Before training our model, the geodata as well as approximately 27,000 samples of "open water" (excluding periods of ice cover) pCO2 from the boreal region were gridded at 0.5° resolution and log transformed where necessary. A multilinear regression was used to derive a prediction equation for log10 pCO2 as a function of log10 lake area, net primary productivity (NPP), precipitation, wind speed and soil pH (r2 = 0.66), and then applied in ArcGIS to build the map of pCO2. After validation, the map of boreal lake pCO2 was used to derive a map of boreal lake FCO2. For the boreal region we estimate an average, lake area weighted, pCO2 of 930 μatm and FCO2 of 170 (121-243) Tg C yr-1. Our estimate of FCO2 will soon be updated with the incorporation of the smallest lakes (<0.01km2). Despite the current exclusion of the smallest lakes, our estimate is higher than the highest previous estimate of approximately 110 Tg C yr-1 (Aufdenkampe et al., 2011). Moreover, our empirical approach driven by environmental geodata can be used as the basis for estimating future FCO2 from boreal lakes, and their sensitivity to climate change.
NASA Astrophysics Data System (ADS)
Dodds, S. F.; Mock, C. J.
2009-12-01
All available instrumental winter precipitation data for the Central Valley of California back to 1850 were digitized and analyzed to construct continuous time series. Many of these data, in paper or microfilm format, predate the modern National Weather Service Cooperative Data Program and Historical Climate Network data, and were recorded by volunteer observers from networks such as the US Army Surgeon General, Smithsonian Institution, and US Army Signal Service. Because individual records are temporally incomplete, detailed documentary data from newspapers, personal diaries and journals, ship logbooks, and weather enthusiasts' instrumental records were used in conjunction with instrumental data to reconstruct precipitation frequency per month and season and continuous days of precipitation, and to identify anomalous precipitation events. Multilinear regression techniques, using surrounding stations and the relationships between modern and historical records, were applied to bridge timeframes lacking data and to ensure the homogeneity of the time series. The metadata for each station were carefully screened, and notes were made about any changes to the instrumentation, instrument location, or observer training, to verify that anomalous events were not recorded incorrectly. Precipitation in the Central Valley varies throughout the region, but waterways link the differing elevations and latitudes. This study integrates the individual station data with additional accounts of flood descriptions from unique newspaper and journal sources. River heights and flood extents inundating cities, agricultural lands, and individual homes are often recorded in these documentary sources, which add to the understanding of flood occurrence in this area. Comparisons were also made between dam and levee construction through time and how waters are diverted through cities in natural and anthropogenically altered environments.
Some precipitation events that led to flooding in the Central Valley in the mid-19th century through the early 20th century are more pronounced at particular stations than anything in the modern record. Flood years included in the study are 1850, 1862, 1868, 1878, 1881, 1890, and 1907. These flood years were compared to the modern record and reconstructed through time series and maps. Incorporating the extent and effects of these anomalous events into future climate studies could improve models and preparedness for future floods.
Dynamics of the Seychelles-Chagos Thermocline Ridge
NASA Astrophysics Data System (ADS)
Bulusu, S.
2016-02-01
The southwest tropical Indian Ocean (SWTIO) features a unique, seasonal upwelling of the thermocline also known as the Seychelles-Chagos Thermocline Ridge (SCTR). More recently, this ridge or "dome"-like feature in the thermocline depth at (55°E-65°E, 5°S-12°S) in the SWTIO has been linked to interannual variability in the semi-annual Indian Ocean monsoon seasons as well as the Madden-Julian Oscillation (MJO) and El Niño Southern Oscillation (ENSO). The SCTR is a region where the MJO is associated with strong SST variability. More cyclones are typically generated in the SCTR region when the thermocline is deeper, which is positively related to the arrival of a downwelling Rossby wave from the southeast tropical Indian Ocean. Previous studies have focused their efforts solely on sea surface temperature (SST) because they determined salinity variability to be low, but with the Soil Moisture and Ocean Salinity (SMOS) and Aquarius salinity missions, new insight can be shed on the effects that the seasonal upwelling of the thermocline has on Sea Surface Salinity (SSS). Seasonal SSS anomalies from these missions will reveal the magnitude of seasonal SSS variability, while Argo depth profiles will show the link between changes in subsurface salinity and temperature structure. A seasonal increase in SST and a decrease in SSS associated with the downwelling of the thermocline have also been shown to occasionally generate MJO events, an extremely important part of climate variability in the Indian Ocean. Satellite-derived salinity and Argo data can help link changes in surface and subsurface salinity structure to the generation of these important MJO events. This study uses satellite-derived salinity from SMOS and Aquarius to see if these satellites can yield new information on seasonal and interannual surface variability.
In this study barrier layer thickness (BLT) estimates will be derived from satellite measurements using a multilinear regression model (MRM). This study will help to improve monsoon modeling and forecasting, two areas that remain highly inaccurate after decades of research work.
AgIIS, Agricultural Irrigation Imaging System, design and application
NASA Astrophysics Data System (ADS)
Haberland, Julio Andres
Remote sensing is a tool that is increasingly used in agriculture for crop management purposes. A ground-based remote sensing data acquisition system was designed, constructed, and implemented to collect high spatial and temporal resolution data in irrigated agriculture. The system was composed of a rail that mounts on a linear move irrigation machine, and a small cart that runs back and forth on the rail. The cart was equipped with a sensor package that measured reflectance in four discrete wavelengths (550 nm, 660 nm, 720 nm, and 810 nm, all 10 nm bandwidth) and an infrared thermometer. A global positioning system and triggers on the rail indicated cart position. The data were postprocessed to generate vegetation maps, N and water status maps, and other indices relevant for site-specific crop management. A geographic information system (GIS) was used to generate images of the field on any desired day. The system was named AgIIS (Agricultural Irrigation Imaging System). This ground-based remote sensing acquisition system was developed at the Agricultural and Biosystems Engineering Department at the University of Arizona in conjunction with the U.S. Water Conservation Laboratory in Phoenix, as part of a cooperative study primarily funded by the Idaho National Environmental and Engineering Laboratory. A second phase of the study utilized data acquired with AgIIS during the 1999 cotton growing season to model petiole nitrate (PNO3-) and total leaf N. A Latin square experimental design with optimal and low water and optimal and low N was used to evaluate N status under water and no water stress conditions. Multivariable models were generated with neural networks (NN) and multilinear regression (MLR). Single variable models were generated from chlorophyll meter readings (SPAD) and from the Canopy Chlorophyll Content Index (CCCI). All models were evaluated against observed PNO3- and total leaf N levels.
The NN models showed the highest correlation with PNO3- and total leaf N. AgIIS was a reliable and efficient data acquisition system for research and also showed potential for use in commercial farming systems.
Applying Regression Analysis to Problems in Institutional Research.
ERIC Educational Resources Information Center
Bohannon, Tom R.
1988-01-01
Regression analysis is one of the most frequently used statistical techniques in institutional research. Principles of least squares, model building, residual analysis, influence statistics, and multi-collinearity are described and illustrated. (Author/MSE)
Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies
Vatcheva, Kristina P.; Lee, MinJae; McCormick, Joseph B.; Rahbar, Mohammad H.
2016-01-01
The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013, illustrated the need for a greater attention to identifying and minimizing the effect of multicollinearity in analysis of data from epidemiologic studies. We used simulated datasets and real life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in the regression analysis and encourage researchers to consider the diagnostic for multicollinearity as one of the steps in regression analysis. PMID:27274911
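One standard diagnostic for the multicollinearity discussed above is the variance inflation factor. As a minimal pure-Python sketch (illustrative data, not the Cameron County analysis): with only two predictors, the VIF reduces to 1/(1 - r^2), where r is their correlation.

```python
# Minimal multicollinearity diagnostic: with two predictors the variance
# inflation factor reduces to VIF = 1 / (1 - r^2), where r is the correlation
# between the predictors. Data below are made up for illustration.

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(x1, x2):
    r = correlation(x1, x2)
    return 1.0 / (1.0 - r * r)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 1.9, 3.2, 3.8, 5.0]   # nearly collinear with x1
v = vif_two_predictors(x1, x2)
print(v)  # a large VIF (conventionally > 10) signals multicollinearity
```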
NASA Astrophysics Data System (ADS)
Amato, F.; Pandolfi, M.; Escrig, A.; Querol, X.; Alastuey, A.; Pey, J.; Perez, N.; Hopke, P. K.
Atmospheric PM pollution from traffic comprises not only direct exhaust emissions but also non-exhaust emissions from resuspension of road dust, which can produce high human exposure to heavy metals, metalloids, and mineral matter. A key task for establishing mitigation or preventive measures is estimating the contribution of road dust resuspension to the atmospheric PM mixture. Several source apportionment studies, applying receptor modeling at urban background sites, have shown the difficulty of identifying a road dust source separately from other mineral sources or vehicular exhausts. The Multilinear Engine (ME-2) is a computer program that can solve the Positive Matrix Factorization (PMF) problem. ME-2 uses a programming language permitting the solution to be guided toward some possible targets that can be derived from a priori knowledge of sources (chemical profile, ratios, etc.). This feature makes it especially suitable for source apportionment studies where partial knowledge of the sources is available. In the present study ME-2 was applied to data from an urban background site of Barcelona (Spain) to quantify the contribution of road dust resuspension to PM 10 and PM 2.5 concentrations. Given that the emission profile of local resuspended road dust was recently obtained (Amato, F., Pandolfi, M., Viana, M., Querol, X., Alastuey, A., Moreno, T., 2009. Spatial and chemical patterns of PM 10 in road dust deposited in urban environment. Atmospheric Environment 43 (9), 1650-1659), this a priori information was introduced in the model as auxiliary terms of the objective function to be minimized, through the implementation of the so-called "pulling equations". ME-2 made it possible to enhance the basic PMF solution (obtained by PMF2), identifying, besides the seven sources of PMF2, a road dust source which accounted for 6.9 μg m -3 (17%) of PM 10, 2.2 μg m -3 (8%) of PM 2.5 and 0.3 μg m -3 (2%) of PM 1.
This reveals that resuspension was responsible for 37%, 15% and 3% of total traffic emissions in PM 10, PM 2.5 and PM 1, respectively. The overall traffic contribution was therefore 18 μg m -3 (46%) in PM 10, 14 μg m -3 (51%) in PM 2.5 and 8 μg m -3 (48%) in PM 1. In PMF2 the mass explained by road dust resuspension was redistributed among the remaining sources, mostly increasing the mineral, secondary nitrate and aged sea salt contributions.
Stepwise versus Hierarchical Regression: Pros and Cons
ERIC Educational Resources Information Center
Lewis, Mitzi
2007-01-01
Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
Regression Commonality Analysis: A Technique for Quantitative Theory Building
ERIC Educational Resources Information Center
Nimon, Kim; Reio, Thomas G., Jr.
2011-01-01
When it comes to multiple linear regression analysis (MLR), it is common for social and behavioral science researchers to rely predominately on beta weights when evaluating how predictors contribute to a regression model. Presenting an underutilized statistical technique, this article describes how organizational researchers can use commonality…
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…
Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo
2017-11-01
[Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
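The single-regression formula reported in this abstract can be applied directly. A minimal sketch (the function name is ours; the coefficients are the abstract's):

```python
# The abstract's single-regression formula, transcribed directly:
# 1RM (kg) = 0.714 + 0.783 * maximal isometric muscle strength (kgf).

def estimate_1rm(isometric_strength_kgf):
    """Estimated knee-extensor 1RM (kg) from hand-held-dynamometer strength."""
    return 0.714 + 0.783 * isometric_strength_kgf

print(estimate_1rm(30.0))  # e.g., 30 kgf isometric strength -> about 24.2 kg 1RM
```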
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
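The PRESS-based search metric can be illustrated on the simplest possible model. The sketch below is not the Ames algorithm (which handles general multivariate models and uses singular value decomposition); it only computes the PRESS statistic for a straight-line fit via explicit leave-one-out refits.

```python
# Sketch of the PRESS (prediction sum of squares) metric that the search
# minimizes, illustrated with leave-one-out refits of a straight-line model.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

def press(xs, ys):
    """Sum of squared leave-one-out prediction errors."""
    total = 0.0
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        a, b = fit_line(xs_i, ys_i)
        total += (ys[i] - (a + b * xs[i])) ** 2
    return total

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
print(press(xs, ys))  # ~0: a perfectly linear model predicts every held-out point
```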
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to increased injury frequency. 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The data on count event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors increasing the frequency of unintentional injury among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion (P < 0.0001) based on the Lagrange multiplier test. The over-dispersed data were therefore better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, a father working outside the hometown, a guardian educated above junior high school level, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and could give a more accurate interpretation of the relevant factors affecting injury frequency.
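The over-dispersion that motivated the negative binomial model can be checked informally with a variance-to-mean ratio, which equals 1 under a pure Poisson model. A minimal sketch with made-up count data (not the Hefei survey, and not the Lagrange multiplier test itself):

```python
# Informal over-dispersion check: under a Poisson model the variance equals the
# mean, so a variance/mean ratio well above 1 suggests over-dispersion and
# motivates a negative binomial (or modified Poisson) model instead.

def dispersion_ratio(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

injury_counts = [0, 0, 0, 1, 0, 2, 0, 0, 7, 0, 1, 9]  # clustered, made-up counts
ratio = dispersion_ratio(injury_counts)
print(ratio)  # well above 1 -> over-dispersed relative to Poisson
```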
Deng, Yingyuan; Wang, Tianfu; Chen, Siping; Liu, Weixiang
2017-01-01
The aim of the study is to screen the significant sonographic features by logistic regression analysis and fit a model to diagnose thyroid nodules. A total of 525 pathological thyroid nodules were retrospectively analyzed. All the nodules underwent conventional ultrasonography (US), strain elastosonography (SE), and contrast-enhanced ultrasound (CEUS). Twelve suspicious sonographic features of these nodules were used to assess thyroid nodules. The significant features for diagnosing thyroid nodules were picked out by logistic regression analysis. All variables that were statistically related to the diagnosis of thyroid nodules, at a level of p < 0.05, were included in a logistic regression analysis model. The significant features in the logistic regression model for diagnosing thyroid nodules were calcification, suspected cervical lymph node metastasis, hypoenhancement pattern, margin, shape, vascularity, posterior acoustic features, echogenicity, and elastography score. According to the results of the logistic regression analysis, a formula that can predict whether thyroid nodules are malignant was established. The area under the receiver operating characteristic (ROC) curve was 0.930, and the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were 83.77%, 89.56%, 87.05%, 86.04%, and 87.79%, respectively. PMID:29228030
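The kind of diagnostic formula these studies fit can be sketched generically: a logistic model maps a weighted sum of feature scores to a malignancy probability. The coefficients below are invented for illustration; the abstract does not reproduce the fitted values.

```python
import math

# Hypothetical sketch of how a fitted logistic model turns sonographic feature
# scores into a malignancy probability. Coefficients are made up for illustration.

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def malignancy_probability(features, coefs, intercept):
    """Linear predictor -> probability via the logistic link."""
    z = intercept + sum(c * f for c, f in zip(coefs, features))
    return logistic(z)

# Two hypothetical binary features (e.g., calcification present, irregular margin).
p = malignancy_probability([1, 0], coefs=[1.2, 0.8], intercept=-1.2)
print(p)  # linear predictor is 0 here, so the probability is 0.5
```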
Bennett, Bradley C; Husby, Chad E
2008-03-28
Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
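The per-family binomial analysis described above can be sketched with an exact upper-tail computation: given the proportion of the flora a family holds, how surprising is its count of medicinal species? Numbers below are illustrative, not the Shuar data.

```python
from math import comb

# Sketch of the binomial test: if a family holds proportion p of the flora and
# n medicinal plants are drawn, the chance of seeing k or more in that family
# under random selection is an upper binomial tail.

def binomial_upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# e.g., a family holding 5% of the flora contributing 8 of 50 medicinal plants:
print(binomial_upper_tail(8, 50, 0.05))  # a small tail probability -> over-represented
```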
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
Effect of Contact Damage on the Strength of Ceramic Materials.
1982-10-01
variables that are important to erosion, and a multivariate, linear regression analysis is used to fit the data to the dimensional analysis. The...of Equations 7 and 8 by a multivariable regression analysis (room temperature data) Exponent Regression Standard error Computed coefficient of...(1980) 593. WEAVER, Proc. Brit. Ceram. Soc. 22 (1973) 125. 39. P. W. BRIDGMAN, "Dimensional Analysis", (Yale 18. R. W. RICE, S. W. FREIMAN and P. F
Common pitfalls in statistical analysis: Linear regression analysis
Aggarwal, Rakesh; Ranganathan, Priya
2017-01-01
In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022
Groundwater salinity in a floodplain forest impacted by saltwater intrusion
NASA Astrophysics Data System (ADS)
Kaplan, David A.; Muñoz-Carpena, Rafael
2014-11-01
Coastal wetlands occupy a delicate position at the intersection of fresh and saline waters. Changing climate and watershed hydrology can lead to saltwater intrusion into historically freshwater systems, causing plant mortality and loss of freshwater habitat. Understanding the hydrological functioning of tidally influenced floodplain forests is essential for advancing ecosystem protection and restoration goals; however, finding direct relationships between hydrological inputs and floodplain hydrology is complicated by interactions between surface water, groundwater, and atmospheric fluxes in variably saturated soils with heterogeneous vegetation and topography. Thus, an alternative method for identifying common trends and causal factors is required. Dynamic factor analysis (DFA), a time series dimension reduction technique, models temporal variation in observed data as linear combinations of common trends, which represent unexplained common variability, and explanatory variables. DFA was applied to model shallow groundwater salinity in the forested floodplain wetlands of the Loxahatchee River (Florida, USA), where altered watershed hydrology has led to changing hydroperiod and salinity regimes and undesired vegetative changes. Long-term, high-resolution groundwater salinity datasets revealed dynamics over seasonal and yearly time periods as well as over tidal cycles and storm events. DFA identified shared trends among salinity time series and a full dynamic factor model simulated observed series well (overall coefficient of efficiency, Ceff = 0.85; 0.52 ≤ Ceff ≤ 0.99). A reduced multilinear model based solely on explanatory variables identified in the DFA had fair to good results (Ceff = 0.58; 0.38 ≤ Ceff ≤ 0.75) and may be used to assess the effects of restoration and management scenarios on shallow groundwater salinity in the Loxahatchee River floodplain.
Quality of life in breast cancer patients--a quantile regression analysis.
Pourhoseingholi, Mohamad Amin; Safaee, Azadeh; Moghimi-Dehkordi, Bijan; Zeighami, Bahram; Faghihzadeh, Soghrat; Tabatabaee, Hamid Reza; Pourhoseingholi, Asma
2008-01-01
Quality of life studies have an important role in health care, especially in chronic diseases, in clinical judgment, and in the allocation of medical resources. Statistical tools like linear regression are widely used to assess the predictors of quality of life, but when the response is not normally distributed the results are misleading. The aim of this study is to determine the predictors of quality of life in breast cancer patients using a quantile regression model and to compare the results to linear regression. A cross-sectional study was conducted on 119 breast cancer patients admitted and treated in the chemotherapy ward of Namazi hospital in Shiraz. We used the QLQ-C30 questionnaire to assess quality of life in these patients. Quantile regression was employed to assess the associated factors, and the results were compared to linear regression. All analyses were carried out using SAS. The mean score for global health status for breast cancer patients was 64.92+/-11.42. Linear regression showed that only grade of tumor, occupational status, menopausal status, financial difficulties and dyspnea were statistically significant. In contrast to linear regression, financial difficulties were not significant in the quantile regression analysis, and dyspnea was significant only for the first quartile. Emotional functioning and duration of disease also statistically predicted the QOL score in the third quartile. The results demonstrate that using quantile regression leads to better interpretation and richer inference about predictors of breast cancer patient quality of life.
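The contrast between linear and quantile regression rests on the loss function: the mean minimizes squared error, while the q-th quantile minimizes the pinball (check) loss. A minimal sketch with made-up skewed scores (not the QLQ-C30 data):

```python
# The q-th quantile minimizes the pinball (check) loss, just as the mean
# minimizes squared error. Scanning candidate constants over skewed data
# recovers the median at q = 0.5, which the mean does not.

def pinball_loss(q, c, ys):
    total = 0.0
    for y in ys:
        err = y - c
        total += q * err if err >= 0 else (q - 1) * err
    return total

ys = [55, 60, 62, 64, 65, 66, 70, 90, 95]  # skewed, made-up QOL-like scores
best = min(range(50, 100), key=lambda c: pinball_loss(0.5, c, ys))
print(best)  # 65, the sample median; the mean (~69.7) is pulled up by the tail
```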
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
USAF (United States Air Force) Stability and Control DATCOM (Data Compendium)
1978-04-01
regression analysis involves the study of a group of variables to determine their effect on a given parameter. Because of the empirical nature of this...regression analysis of mathematical statistics. In general, a regression analysis involves the study of a group of variables to determine their effect on a...Experiment, OSR TN 58-114, MIT Fluid Dynamics Research Group Rept. 57-5, 1957. (U) 90. Kennet, H., and Ashley, H.: Review of Unsteady Aerodynamic Studies in
Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru
2017-09-01
Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from the mFIM effectiveness predicted by multiple regression analysis achieved higher predictive accuracy than predicting mFIM at discharge directly. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
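The two-step prediction (A) follows directly from the formula quoted above; a minimal sketch with hypothetical patient values (91 points is the maximum motor FIM score):

```python
def mfim_discharge_from_effectiveness(mfim_admission, predicted_effectiveness):
    """mFIM at discharge = effectiveness * (91 - mFIM at admission) + mFIM at admission.

    Effectiveness is the fraction of the possible improvement (up to the
    91-point motor FIM maximum) that the regression predicts will be realized."""
    return predicted_effectiveness * (91 - mfim_admission) + mfim_admission

# Hypothetical patient: admission mFIM of 40 and a regression-predicted
# effectiveness of 0.6 give a predicted discharge mFIM of 70.6.
pred = mfim_discharge_from_effectiveness(40, 0.6)
```

A patient already at the 91-point ceiling is predicted to stay there, regardless of the effectiveness value.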
NASA Astrophysics Data System (ADS)
Bougiatioti, Aikaterini; Bezantakos, Spiros; Stavroulas, Iasonas; Kalivitis, Nikos; Kokkalis, Panagiotis; Biskos, George; Mihalopoulos, Nikolaos; Papayannis, Alexandros; Nenes, Athanasios
2016-06-01
This study investigates the concentration, cloud condensation nuclei (CCN) activity and hygroscopic properties of particles influenced by biomass burning in the eastern Mediterranean and their impacts on cloud droplet formation. Air masses sampled were subject to a range of atmospheric processing (several hours up to 3 days). Values of the hygroscopicity parameter, κ, were derived from CCN measurements and a Hygroscopic Tandem Differential Mobility Analyzer (HTDMA). An Aerosol Chemical Speciation Monitor (ACSM) was also used to determine the chemical composition and mass concentration of non-refractory components of the submicron aerosol fraction. During fire events, the increased organic content (and lower inorganic fraction) of the aerosol decreases the values of κ, for all particle sizes. Particle sizes smaller than 80 nm exhibited considerable chemical dispersion (where hygroscopicity varied up to 100 % for particles of the same size); larger particles, however, exhibited considerably less dispersion owing to the effects of condensational growth and cloud processing. ACSM measurements indicate that the bulk composition reflects the hygroscopicity and chemical nature of the largest particles (having a diameter of ~100 nm at dry conditions) sampled. Based on positive matrix factorization (PMF) analysis of the organic ACSM spectra, CCN concentrations follow a similar trend as the biomass-burning organic aerosol (BBOA) component, with the former being enhanced between 65 and 150 % (for supersaturations ranging between 0.2 and 0.7 %) with the arrival of the smoke plumes. Using multilinear regression of the PMF factors (BBOA, OOA-BB and OOA) and the observed hygroscopicity parameter, the inferred hygroscopicity of the oxygenated organic aerosol components is determined.
We find that the transformation of freshly emitted biomass burning (BBOA) to more oxidized organic aerosol (OOA-BB) can result in a 2-fold increase of the inferred organic hygroscopicity; about 10 % of the total aerosol hygroscopicity is related to the two biomass-burning components (BBOA and OOA-BB), which in turn contribute almost 35 % to the fine-particle organic water of the aerosol. Observation-derived calculations of the cloud droplet concentrations that develop for typical boundary layer cloud conditions suggest that biomass burning increases droplet number, on average by 8.5 %. The strongly sublinear response of clouds to biomass-burning (BB) influences is a result of strong competition of CCN for water vapor, which results in very low maximum supersaturation (0.08 % on average). Attributing droplet number variations to the total aerosol number and the chemical composition variations shows that the importance of chemical composition increases with distance, contributing up to 25 % of the total droplet variability. Therefore, although BB may strongly elevate CCN numbers, the impact on droplet number is limited by water vapor availability and depends on the aerosol particle concentration levels associated with the background.
Xiayun, Zuo; Chaohua, Lou; Ersheng, Gao; Yan, Cheng; Hongfeng, Niu; Zabin, Laurie S.
2014-01-01
Purpose Gender is an important factor in understanding premarital sexual attitudes and behaviors. Many studies indicate that males are more likely to initiate sexual intercourse and have more permissive perceptions about sex than females. Yet few studies have explored possible reasons for these gender differences. With samples of unmarried adolescents in three Asian cities influenced by Confucian cultures, this paper investigates the relationship between underlying gender norms and these differences in adolescents’ premarital sexual permissiveness. Methods 16,554 unmarried participants aged 15–24 were recruited in the Three-City Asian Study of Adolescents and Youth, a collaborative survey conducted in 2006–2007 in urban and rural areas of Hanoi, Shanghai and Taipei, with 6204, 6023 and 4327 from each city respectively. All of the adolescents were administered face-to-face interviews, coupled with Computer Assisted Self Interview (CASI) for sensitive questions. Scales on gender-role attitudes and on premarital sexual permissiveness for both male and female respondents were developed and applied to our analysis of the data. Multi-linear regression was used to analyze the relationship between gender-role attitudes and sexual permissiveness. Results Male respondents in each city held more permissive attitudes towards premarital sex than did females with both boys and girls expressing greater permissiveness to male premarital sexual behaviors. Boys also expressed more traditional attitudes to gender roles (condoning greater inequality) than did girls in each city. Adolescents’ gender-role attitudes and permissiveness to premarital sex varied considerably across the three cities, with the Vietnamese the most traditional, the Taiwanese the least traditional, and the adolescents in Shanghai in the middle. A negative association between traditional gender roles and premarital sexual permissiveness was only found among girls in Shanghai and Taipei. 
In Shanghai, female respondents who held more traditional gender role attitudes were more likely to exercise a double standard with respect to male as opposed to female premarital sex (OR=1.18). This relationship also applied to attitudes of both girls and boys in Taipei (OR=1.20 and OR=1.22, respectively). Conclusions Although with variation across sites, gender differences in premarital sexual permissiveness and attitudes to gender roles among adolescents were very significant in each of the three Asian cities influenced by Confucian-based values. Traditional gender norms may still be deeply rooted in the three cities, especially among females, while it is important to advocate gender equity in adolescent reproductive health programs, the pathway of traditional gender norms in influencing adolescent reproductive health outcomes must be understood, as must differences and similarities across regions. PMID:22340852
Zuo, Xiayun; Lou, Chaohua; Gao, Ersheng; Cheng, Yan; Niu, Hongfeng; Zabin, Laurie S
2012-03-01
Gender is an important factor in understanding premarital sexual attitudes and behaviors. Many studies indicate that males are more likely to initiate sexual intercourse and have more permissive perceptions about sex than females. Yet few studies have explored possible reasons for these gender differences. With samples of unmarried adolescents in three Asian cities influenced by Confucian cultures, this article investigates the relationship between underlying gender norms and these differences in adolescents' premarital sexual permissiveness (PSP). In a collaborative survey conducted in 2006-2007 in urban and rural areas of Hanoi, Shanghai, and Taipei, 16,554 unmarried participants aged 15-24 years were recruited in the Three-City Asian Study of Adolescents and Youth, with 6,204, 6,023, and 4,327 respondents from each city, respectively. All the adolescents were administered face-to-face interviews, coupled with computer-assisted self-interview for sensitive questions. Scales on gender-role attitudes and on PSP for both male and female respondents were developed and applied to our analysis of the data. Multilinear regression was used to analyze the relationship between gender-role attitudes and sexual permissiveness. Male respondents in each city held more permissive attitudes toward premarital sex than did females, with both boys and girls expressing greater permissiveness to male premarital sexual behaviors. Boys also expressed more traditional attitudes to gender roles (condoning greater inequality) than did girls in each city. Adolescents' gender-role attitudes and permissiveness to premarital sex varied considerably across the three cities, with the Vietnamese the most traditional, the Taiwanese the least traditional, and the adolescents in Shanghai in the middle. A negative association between traditional gender roles and PSP was only found among girls in Shanghai and Taipei.
In Shanghai, female respondents who held more traditional gender-role attitudes were more likely to exercise a double standard with respect to male as opposed to female premarital sex (odds ratio [OR] = 1.18). This relationship also applied to attitudes of both girls and boys in Taipei (OR = 1.20 and OR = 1.22, respectively). Although with variation across sites, gender differences in PSP and attitudes to gender roles among adolescents were very significant in each of the three Asian cities influenced by Confucian-based values. Traditional gender norms may still be deeply rooted in the three cities, especially among females; while it is important to advocate gender equity in adolescent reproductive health programs, the pathway of traditional gender norms in influencing adolescent reproductive health outcomes must be understood, as must differences and similarities across regions. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
John W. Edwards; Susan C. Loeb; David C. Guynn
1994-01-01
Multiple regression and use-availability analyses are two methods for examining habitat selection. Use-availability analysis is commonly used to evaluate macrohabitat selection whereas multiple regression analysis can be used to determine microhabitat selection. We compared these techniques using behavioral observations (n = 5534) and telemetry locations (n = 2089) of...
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Nie, Z Q; Ou, Y Q; Zhuang, J; Qu, Y J; Mai, J Z; Chen, J M; Liu, X Q
2016-05-01
Conditional logistic regression analysis and unconditional logistic regression analysis are commonly used in case-control studies, whereas the Cox proportional hazards model is often used in survival data analysis. Most of the literature refers only to main-effect models; however, generalized linear models differ from general linear models, and interaction is composed of multiplicative interaction and additive interaction. The former is only statistically significant, but the latter has biological significance. In this paper, macros were written using SAS 9.4, and the contrast ratio, attributable proportion due to interaction and synergy index were calculated along with the terms of the logistic and Cox regression interactions; the Wald, delta and profile likelihood confidence intervals were used to evaluate additive interaction, for reference in big data analysis in clinical epidemiology and in the analysis of genetic multiplicative and additive interactions.
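The additive-interaction indices named above have standard closed forms in terms of the relative risks (or, approximately, odds ratios) for each exposure combination. A Python sketch assuming the usual definitions (relative excess risk due to interaction, attributable proportion, synergy index) and purely illustrative risk values:

```python
def additive_interaction(rr10, rr01, rr11):
    """Measures of additive interaction from relative risks.

    rr10, rr01: RR for each exposure alone; rr11: RR for joint exposure
    (doubly unexposed reference group has RR = 1)."""
    reri = rr11 - rr10 - rr01 + 1               # relative excess risk due to interaction
    ap = reri / rr11                            # attributable proportion due to interaction
    s = (rr11 - 1) / ((rr10 - 1) + (rr01 - 1))  # synergy index
    return reri, ap, s

# Illustrative relative risks (hypothetical): RERI > 0 and S > 1 indicate
# positive interaction on the additive scale.
reri, ap, s = additive_interaction(2.0, 3.0, 6.0)
```

In practice the RRs come from the fitted logistic or Cox model coefficients, and the Wald, delta, or profile likelihood intervals mentioned above quantify the uncertainty of these indices.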
Prediction by regression and intrarange data scatter in surface-process studies
Toy, T.J.; Osterkamp, W.R.; Renard, K.G.
1993-01-01
Modeling is a major component of contemporary earth science, and regression analysis occupies a central position in the parameterization, calibration, and validation of geomorphic and hydrologic models. Although this methodology can be used in many ways, we are primarily concerned with the prediction of values for one variable from another variable. Examination of the literature reveals considerable inconsistency in the presentation of the results of regression analysis and the occurrence of patterns in the scatter of data points about the regression line. Both circumstances confound utilization and evaluation of the models. Statisticians are well aware of various problems associated with the use of regression analysis and offer improved practices; often, however, their guidelines are not followed. After a review of the aforementioned circumstances and until standard criteria for model evaluation become established, we recommend, as a minimum, inclusion of scatter diagrams, the standard error of the estimate, and sample size in reporting the results of regression analyses for most surface-process studies. © 1993 Springer-Verlag.
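Of the reporting minima recommended above, the standard error of the estimate for a two-parameter linear fit is the square root of the residual sum of squares divided by n - 2. A sketch with made-up data:

```python
import math

def standard_error_of_estimate(x, y):
    """SEE for simple linear regression y = a + b*x (n - 2 degrees of freedom)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return math.sqrt(sse / (n - 2))

# Made-up surface-process-style data: reporting s_e alongside the scatter
# diagram and the sample size lets readers judge predictive precision.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
s_e = standard_error_of_estimate(x, y)
```

Unlike R² alone, s_e is in the units of the predicted variable, so it tells a reader directly how far predictions typically miss.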
Quantile regression for the statistical analysis of immunological data with many non-detects.
Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth
2012-07-07
Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an implementation to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
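The tolerance for non-detects can be demonstrated directly: as long as the censored fraction stays below the quantile being modeled, the quantile estimate does not depend on what values the non-detects actually take. A stdlib-Python illustration with a hypothetical detection limit and assay values:

```python
import statistics

LOD = 5.0  # assumed detection limit (arbitrary units)
detected = [8.0, 12.0, 15.0, 20.0, 30.0]
n_non_detects = 4  # observations known only to lie below the LOD

# Substitute any placeholder below the LOD for the non-detects: because fewer
# than half of the 9 observations are censored, the sample median is the same
# regardless of which placeholder is chosen.
medians = []
for placeholder in (0.0, LOD / 2, LOD - 1e-6):
    sample = [placeholder] * n_non_detects + detected
    medians.append(statistics.median(sample))
# medians == [8.0, 8.0, 8.0]
```

Quantile regression extends this insensitivity from a single sample to models with covariates, which is why it can handle datasets where more than half the values are non-detects by modeling a sufficiently high percentile.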
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa
2011-08-01
In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with human skin of the human hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions
Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.
Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C
2011-04-01
The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change related to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included into this study. They were independently classified by two experienced investigators. The results of such a classification served as a reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters and (d) point-wise linear regression analysis. 32.5 % of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7 % and 92.0 % for r-test, and 73.1 % and 93.8 % for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5 % versus 92 %), but the sensitivity was clearly lower than in the r-test (22.4 % versus 89.7 %) at a significance level of p = 0.01. A regression analysis for the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7 %) than the t-test (14.1 %) at a similar specificity (88.3 % versus 93.8 %) for a significant trend (p = 0.005). In regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to the trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of the event analysis. 
Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to VF clusters. © Georg Thieme Verlag KG Stuttgart · New York.
ERIC Educational Resources Information Center
Berenson, Mark L.
2013-01-01
There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…
L.R. Grosenbaugh
1967-01-01
Describes an expansible computerized system that provides data needed in regression or covariance analysis of as many as 50 variables, 8 of which may be dependent. Alternatively, it can screen variously generated combinations of independent variables to find the regression with the smallest mean-squared-residual, which will be fitted if desired. The user can easily...
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Applications of statistics to medical science, III. Correlation and regression.
Watanabe, Hiroshi
2012-01-01
In this third part of a series surveying medical statistics, the concepts of correlation and regression are reviewed. In particular, methods of linear regression and logistic regression are discussed. Arguments related to survival analysis will be made in a subsequent paper.
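The tie between the two concepts reviewed here is that the least-squares slope is the correlation coefficient rescaled by the ratio of standard deviations, b = r * (s_y / s_x). A small illustration with made-up data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sample_sd(v):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(v) / len(v)
    return math.sqrt(sum((a - m) ** 2 for a in v) / (len(v) - 1))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 5.0, 9.0]
r = pearson_r(x, y)
slope = r * sample_sd(y) / sample_sd(x)  # equals the least-squares slope, 2.2
```

Correlation thus measures the strength of the linear association, while regression additionally gives it a scale in the units of the response.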
Optimizing methods for linking cinematic features to fMRI data.
Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia
2015-04-15
One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than story-driven films, new methods need to be developed for analysis of less story-driven contents. To optimize the linkage between our fMRI data collected during viewing of a deliberately non-narrative silent film 'At Land' by Maya Deren (1944) and its annotated content, we combined the method of elastic-net regularization with the model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time-series were fitted with time-series of a total of 36 binary-valued and one real-valued tactile annotation of film features. The elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both the partial least-squares (PLS) regression and the un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 ICs out of 40 ICs. Regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs.
Along with the ISC ranking methods, our regression analysis proved a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. In comparison to hypothesis-driven manual pre-selection and observation of individual regressors biased by choice, the novelty of our method lies in applying a data-driven approach to all content features simultaneously. We found especially the combination of regularized regression and ICA useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
Gao, Jian; Peng, Xing; Chen, Gang; Xu, Jiao; Shi, Guo-Liang; Zhang, Yue-Chong; Feng, Yin-Chang
2016-01-15
As the widespread application of online instruments penetrates the environmental fields, it is interesting to investigate the sources of fine particulate matter (PM2.5) based on the data monitored by online instruments. In this study, online analyzers with 1-h time resolution were employed to observe PM2.5 composition data, including carbon components, inorganic ions, heavy metals and gas pollutants, during a summer in Beijing. Chemical characteristics, temporal patterns and sources of PM2.5 are discussed. On the basis of hourly data, the mean concentration value of PM2.5 was 62.16±39.37 μg m(-3) (ranging from 6.69 to 183.67 μg m(-3)). The average concentrations of NO3(-), SO4(2-), NH4(+), OC and EC, the major chemical species, were 15.18±13.12, 14.80±14.53, 8.90±9.51, 9.32±4.16 and 3.08±1.43 μg m(-3), respectively. The concentration of PM2.5 varied during the online-sampling period, initially increasing and then subsequently decreasing. Three factor analysis models, including principal component analysis (PCA), positive matrix factorization (PMF) and Multilinear Engine 2 (ME2), were applied to apportion the PM2.5 sources. Source apportionment results obtained by the three different models were in agreement. Four sources were identified in Beijing during the sampling campaign, including secondary sources (38-39%), crustal dust (17-22%), vehicle exhaust (25-28%) and coal combustion (15-16%). Similar source profiles and contributions of PM2.5 were derived from ME2 and PMF, indicating the results of the two models are reasonable. The finding provides information that could be exploited for regular air control strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
During this reporting period, the literature survey, including the gasifier temperature measurement literature, the ultrasonic application and its background study in cleaning applications, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. The Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error). The regression analysis for the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than that of the linear regression. The nonlinear regression model's accuracy is 88.7% (11.3% error) for the normalized room temperature case, which is better than that of the linear regression analysis. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. The system tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
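The accuracy percentages quoted above are consistent with a goodness-of-fit measure such as the coefficient of determination; assuming that interpretation, R² can be sketched as follows (hypothetical normalized temperature readings, not the project's data):

```python
def r_squared(y, y_pred):
    """Coefficient of determination: fraction of variance explained by the fit."""
    my = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical normalized temperature measurements vs. a linear model's output
y      = [1.0, 1.5, 2.1, 2.9, 3.4]
y_pred = [1.1, 1.6, 2.0, 2.7, 3.5]
r2 = r_squared(y, y_pred)  # ~0.98, i.e. "98% accuracy" in the report's terms
```

Comparing R² from the linear and nonlinear fits on the same data is one way to justify the report's conclusion that the nonlinear model fits better.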
The Economic Value of Mangroves: A Meta-Analysis
Marwa Salem; D. Evan Mercer
2012-01-01
This paper presents a synthesis of the mangrove ecosystem valuation literature through a meta-regression analysis. The main contribution of this study is that it is the first meta-analysis focusing solely on mangrove forests, whereas previous studies have included different types of wetlands. The number of studies included in the regression analysis is 44 for a total...
2014-01-01
Background Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy as compared to the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions Estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463
Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang
2018-05-01
Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA, summarizing the commonly used regression models and pooling methods, and use an example to illustrate how to carry out a DRMA with these methods. Five regression models are illustrated for fitting the dose-response relationship: linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression. Two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used for investigating the relationship between exposure level and the risk of an outcome; however, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
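Two of the dose-response shapes named above, a straight line and a low-order natural polynomial, can be fitted and compared on synthetic data; the dose and log relative risk values are invented for illustration.

```python
# Sketch: fit linear vs quadratic (natural polynomial) dose-response curves
# to synthetic log relative risks and compare residual sums of squares.
import numpy as np

rng = np.random.default_rng(1)
dose = np.linspace(0, 10, 40)
# Synthetic log relative risk with curvature plus noise
log_rr = 0.05 * dose + 0.02 * dose**2 + rng.normal(0, 0.05, dose.size)

lin = np.polynomial.polynomial.polyfit(dose, log_rr, deg=1)
quad = np.polynomial.polynomial.polyfit(dose, log_rr, deg=2)

rss_lin = np.sum((log_rr - np.polynomial.polynomial.polyval(dose, lin))**2)
rss_quad = np.sum((log_rr - np.polynomial.polynomial.polyval(dose, quad))**2)
print(rss_lin, rss_quad)   # the quadratic should capture the curvature better
```

Spline variants (piecewise, restricted cubic) follow the same fit-and-compare pattern with a different basis.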
NASA Astrophysics Data System (ADS)
Amil, Norhaniza; Talib Latif, Mohd; Firoz Khan, Md; Mohamad, Maznorizan
2016-04-01
This study investigates the fine particulate matter (PM2.5) variability in the Klang Valley urban-industrial environment. In total, 94 daily PM2.5 samples were collected during a 1-year campaign from August 2011 to July 2012. This is the first paper on PM2.5 mass, chemical composition and sources in the tropical environment of Southeast Asia, covering all four seasons (distinguished by the wind flow patterns) including haze events. The samples were analysed for various inorganic components and black carbon (BC). The chemical compositions were statistically analysed and the temporal aerosol pattern (seasonal) was characterised using descriptive analysis, correlation matrices, enrichment factor (EF), stoichiometric analysis and chemical mass closure (CMC). For source apportionment purposes, a combination of positive matrix factorisation (PMF) and multi-linear regression (MLR) was employed. Further, meteorological-gaseous parameters were incorporated into each analysis for improved assessment. In addition, secondary data of total suspended particulate (TSP) and coarse particulate matter (PM10) sampled at the same location and time as this study (collected by the Malaysian Meteorological Department) were used for PM ratio assessment. The results showed that PM2.5 mass averaged 28 ± 18 µg m-3, 2.8-fold higher than the World Health Organisation (WHO) annual guideline. On a daily basis, the PM2.5 mass ranged between 6 and 118 µg m-3 with the daily WHO guideline exceeded 43 % of the time. The north-east (NE) monsoon was the only season with less than 50 % sample exceedance of the daily WHO guideline. On an annual scale, PM2.5 mass correlated positively with temperature (T) and wind speed (WS) but negatively with relative humidity (RH). With the exception of NOx, the gases analysed (CO, NO2, NO and SO2) were found to significantly influence the PM2.5 mass.
Seasonal variability unexpectedly showed that rainfall, WS and wind direction (WD) did not significantly correlate with PM2.5 mass. Further analysis on the PM2.5 / PM10, PM2.5 / TSP and PM10 / TSP ratios reveal that meteorological parameters only greatly influenced the coarse particles (particles with an aerodynamic diameter of greater than 2.5 µm) and less so the fine particles at the site. Chemical composition showed that both primary and secondary pollutants of PM2.5 are equally important, albeit with seasonal variability. The CMC components identified were in the decreasing order of (mass contribution) BC > secondary inorganic aerosols (SIA) > dust > trace elements > sea salt > K+. The EF analysis distinguished two groups of trace elements: those with anthropogenic sources (Pb, Se, Zn, Cd, As, Bi, Ba, Cu, Rb, V and Ni) and those with a crustal source (Sr, Mn, Co and Li). The five identified factors resulting from PMF 5.0 were (1) combustion of engine oil, (2) mineral dust, (3) mixed SIA and biomass burning, (4) mixed traffic and industrial and (5) sea salt. Each of these sources had an annual mean contribution of 17, 14, 42, 10 and 17 % respectively. The dominance of each identified source largely varied with changing season and a few factors were in agreement with the CMC, EF and stoichiometric analysis, accordingly. In relation to meteorological-gaseous parameters, PM2.5 sources were influenced by different parameters during different seasons. In addition, two air pollution episodes (HAZE) revealed the influence of local and/or regional sources. Overall, our study clearly suggests that the chemical constituents and sources of PM2.5 were greatly influenced and characterised by meteorological and gaseous parameters which vary greatly with season.
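The PMF-plus-MLR apportionment step above can be sketched generically: regress measured PM2.5 mass on the factor contribution time series, with nonnegativity keeping the attributed mass physical. The factor scores, effect sizes, and the use of scipy's NNLS here are all illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch of the MLR step after PMF: apportion measured PM2.5 mass
# among factor contribution series via nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
days, n_factors = 94, 5                     # mirrors the 94-sample campaign
G = rng.gamma(2.0, 1.0, (days, n_factors))  # stand-in PMF factor scores
true_share = np.array([5.0, 4.0, 12.0, 3.0, 5.0])   # invented µg/m3 per unit score
pm25 = G @ true_share + rng.normal(0, 1.0, days)    # synthetic measured mass

coef, resid = nnls(G, pm25)                 # nonnegative regression coefficients
contrib = coef * G.mean(axis=0)             # mean mass attributed per source
print(contrib / contrib.sum())              # fractional apportionment
```

The dominant third factor here plays the role of the mixed SIA/biomass-burning factor that carried the largest share (42 %) in the study.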
Suzuki, Taku; Iwamoto, Takuji; Shizu, Kanae; Suzuki, Katsuji; Yamada, Harumoto; Sato, Kazuki
2017-05-01
This retrospective study was designed to investigate prognostic factors for postoperative outcomes of cubital tunnel syndrome (CubTS) using multiple logistic regression analysis with a large number of patients. Eighty-three patients with CubTS who underwent surgery were enrolled. The following potential prognostic factors for disease severity were selected according to previous reports: sex, age, type of surgery, disease duration, body mass index, cervical lesion, presence of diabetes mellitus, Workers' Compensation status, preoperative severity, and preoperative electrodiagnostic testing. Postoperative severity of disease was assessed 2 years after surgery by Messina's criteria, an outcome measure specific to CubTS. Bivariate analysis was performed to select candidate prognostic factors for multiple linear regression analyses, and multiple logistic regression analysis was conducted to identify the association between postoperative severity and the selected prognostic factors. Both bivariate and multiple linear regression analysis revealed only preoperative severity as an independent risk factor for poor prognosis, while the other factors did not show any significant association. Although conflicting results exist regarding the prognosis of CubTS, this study supports evidence from previous studies and concludes that early surgical intervention portends the most favorable prognosis. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
Brunetti, Natale Daniele; Santoro, Francesco; De Gennaro, Luisa; Correale, Michele; Gaglione, Antonio; Di Biase, Matteo
2016-07-01
In a recent paper, Singh et al. analyzed the effect of drug treatment on recurrence of takotsubo cardiomyopathy (TTC) in a comprehensive meta-analysis. The study found that recurrence rates were independent of rates of beta-blocker (BB) prescription but inversely correlated with ACEi/ARB prescription; the authors therefore concluded that ACEi/ARB, rather than BB, may reduce the risk of recurrence. We aimed to re-analyze the data reported in the study, now weighted for population size, in a meta-regression analysis. After multiple meta-regression analysis, we found a significant regression between rates of ACEi prescription and rates of TTC recurrence; the regression was not statistically significant for BBs. On the basis of our re-analysis, we confirm that rates of recurrence of TTC are lower in populations of patients with higher rates of treatment with ACEi/ARB. That does not necessarily imply that ACEi prevents recurrence of TTC, but merely that, for example, recurrence rates are lower in cohorts more compliant with therapy, or more often prescribed ACEi because they are more carefully followed. Randomized prospective studies are surely warranted. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A general framework for the use of logistic regression models in meta-analysis.
Simmonds, Mark C; Higgins, Julian Pt
2016-12-01
Where individual participant data are available for every randomised trial in a meta-analysis of dichotomous event outcomes, "one-stage" random-effects logistic regression models have been proposed as a way to analyse these data. Such models can also be used even when individual participant data are not available and we have only summary contingency table data. One benefit of this one-stage regression model over conventional meta-analysis methods is that it maximises the correct binomial likelihood for the data and so does not require the common assumption that effect estimates are normally distributed. A second benefit of using this model is that it may be applied, with only minor modification, in a range of meta-analytic scenarios, including meta-regression, network meta-analyses and meta-analyses of diagnostic test accuracy. This single model can potentially replace the variety of often complex methods used in these areas. This paper considers, with a range of meta-analysis examples, how random-effects logistic regression models may be used in a number of different types of meta-analyses. This one-stage approach is compared with widely used meta-analysis methods including Bayesian network meta-analysis and the bivariate and hierarchical summary receiver operating characteristic (ROC) models for meta-analyses of diagnostic test accuracy. © The Author(s) 2014.
Examination of influential observations in penalized spline regression
NASA Astrophysics Data System (ADS)
Türkan, Semra
2013-10-01
In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set, so detection of these observations is one of the major steps in regression analysis. Such observations can be detected by well-known influence measures, one of which is Peña's statistic. In this study, Peña's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Peña's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance in detecting these observations in large data sets.
Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, G., E-mail: geert.verdoolaege@ugent.be; Laboratory for Plasma Physics, Royal Military Academy, B-1000 Brussels; Shabbir, A.
Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.
Multilinear stress-strain and failure calibrations for Ti-6Al-4V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo
This memo concerns calibration of an elastic-plastic J2 material model for Ti-6Al-4V (grade 5) alloy based on tensile uniaxial stress-strain data obtained in the laboratory. In addition, tension tests on notched specimens provided data to calibrate two ductile failure models: Johnson-Cook and Wellman's tearing parameter. The tests were conducted by Kim Haulenbeek and Dave Johnson (1528) in the Structural Mechanics Laboratory (SML) during late March and early April, 2017. The SML EWP number was 4162. The stock material was a TIMETAL® 6-4 Titanium billet with a 9 in. by 9 in. square section and a length of 137 in. The product description indicates that it was a forging delivered in annealed condition (2 hours @ 1300°F, AC at the mill). The tensile mechanical properties reported in the material certification are given in Table 1, where σ_o represents the 0.2% strain offset yield stress, σ_u the ultimate stress, ε_f the elongation at failure and R.A. the reduction in area.
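In practice a "multilinear" hardening curve is entered into an FE code as a table of stress-strain pairs with linear interpolation between them; a minimal sketch of how such a table behaves (the values below are invented, not the memo's Ti-6Al-4V calibration):

```python
# Generic multilinear (piecewise-linear) stress-strain lookup.
import numpy as np

# (plastic strain, true stress in MPa) calibration points -- hypothetical
strain = np.array([0.000, 0.002, 0.02, 0.05, 0.10])
stress = np.array([0.0, 880.0, 950.0, 1000.0, 1040.0])

def multilinear_stress(eps):
    """Piecewise-linear stress; np.interp clamps beyond the table ends."""
    return np.interp(eps, strain, stress)

print(multilinear_stress(0.035))   # halfway between the 0.02 and 0.05 points
```

The calibration task described in the memo amounts to choosing these breakpoints so the model reproduces the measured tensile response.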
Evolution of genetic architecture under directional selection.
Hansen, Thomas F; Alvarez-Castro, José M; Carter, Ashley J R; Hermisson, Joachim; Wagner, Günter P
2006-08-01
We investigate the multilinear epistatic model under mutation-limited directional selection. We confirm previous results that only directional epistasis, in which genes on average reinforce or diminish each other's effects, contribute to the initial evolution of mutational effects. Thus, either canalization or decanalization can occur under directional selection, depending on whether positive or negative epistasis is prevalent. We then focus on the evolution of the epistatic coefficients themselves. In the absence of higher-order epistasis, positive pairwise epistasis will tend to weaken relative to additive effects, while negative pairwise epistasis will tend to become strengthened. Positive third-order epistasis will counteract these effects, while negative third-order epistasis will reinforce them. More generally, gene interactions of all orders have an inherent tendency for negative changes under directional selection, which can only be modified by higher-order directional epistasis. We identify three types of nonadditive quasi-equilibrium architectures that, although not strictly stable, can be maintained for an extended time: (1) nondirectional epistatic architectures; (2) canalized architectures with strong epistasis; and (3) near-additive architectures in which additive effects keep increasing relative to epistasis.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures.
Benson, Austin R; Gleich, David F; Leskovec, Jure
2015-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA as a multilinear subspace learning method is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for the image with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
Retro-regression--another important multivariate regression improvement.
Randić, M
2001-01-01
We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when, in a stepwise regression, a descriptor is included in or excluded from a regression; the consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes, at different steps of the stepwise regression, a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes, which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of a greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined, showing how it resolves the ambiguities associated with both the first and the second kind of MRA "nightmare".
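The coefficient instability behind the "nightmare of the first kind" is easy to reproduce: with two nearly collinear descriptors, the coefficient of the first changes sharply when the second enters the regression, even though their combined effect stays stable (toy data, not the nonane descriptors):

```python
# Toy demonstration of coefficient instability under near-collinearity.
import numpy as np

rng = np.random.default_rng(3)
n = 100
d1 = rng.normal(0, 1, n)
d2 = d1 + rng.normal(0, 0.05, n)      # descriptor nearly collinear with d1
y = d1 + d2 + rng.normal(0, 0.2, n)   # true model uses both

# Fit with d1 alone, then with both descriptors.
b_single, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), d1]), y, rcond=None)
b_both, *_   = np.linalg.lstsq(np.column_stack([np.ones(n), d1, d2]), y, rcond=None)
print(b_single[1], b_both[1])   # ~2 alone; unstable split once d2 enters
```

The sum of the two joint coefficients remains close to the single-descriptor slope, which is exactly why stepwise inclusion/exclusion scrambles individual coefficients while leaving fit quality nearly unchanged.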
Regression analysis for solving diagnosis problem of children's health
NASA Astrophysics Data System (ADS)
Cherkashina, Yu A.; Gerget, O. M.
2016-04-01
The paper presents results of research devoted to the application of statistical techniques, namely regression analysis, to assess the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given, and a binary logistic regression procedure is discussed. Basic results of the research are presented: a classification table of predicted versus observed values is shown and the overall percentage of correct recognition is determined; regression equation coefficients are calculated and the general regression equation is written from them. Based on the results of the logistic regression, ROC analysis was performed: sensitivity and specificity of the model were calculated and ROC curves were constructed. These mathematical techniques allow diagnosis of children's health with a high quality of recognition. The results make a significant contribution to the development of evidence-based medicine and are of high practical importance in the professional activity of the author.
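The workflow described, binary logistic regression followed by ROC analysis, can be sketched generically on synthetic data; the three "lab parameter" features are invented, not the study's variables.

```python
# Generic sketch: logistic regression + ROC AUC on synthetic binary outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 300
X = rng.normal(0, 1, (n, 3))                    # e.g. three lab parameters
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]           # only two actually matter
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```

Sensitivity and specificity at a chosen probability cutoff come from thresholding `predict_proba`, which is what the paper's classification table summarizes.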
Regression analysis using dependent Polya trees.
Schörgendorfer, Angela; Branscum, Adam J
2013-11-30
Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and generalized linear mixed models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers, and we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
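The beta-binomial idea at the center of the comparison can be sketched without covariates: per-subject success counts that are overdispersed relative to a plain binomial, fit by maximum likelihood with scipy (synthetic data; a full beta-binomial regression adds a linear predictor on the mean).

```python
# Sketch: maximum-likelihood fit of a beta-binomial to clustered counts.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
n_trials, n_subj = 20, 80
k = stats.betabinom.rvs(n_trials, a=2.0, b=4.0, size=n_subj, random_state=rng)

def negloglik(params):
    a, b = np.exp(params)                      # log-parameterize to keep a, b > 0
    return -stats.betabinom.logpmf(k, n_trials, a, b).sum()

res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(a_hat, b_hat)            # roughly recovers the generating (2, 4)
```

The estimated mean proportion a/(a+b) should land near the generating value of 1/3, while the two shape parameters jointly capture the between-subject overdispersion a plain binomial would miss.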
Regression analysis of informative current status data with the additive hazards model.
Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo
2015-04-01
This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorgenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.
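For reference, the additive hazards model discussed above takes covariates to shift the hazard additively rather than multiplicatively; a standard statement (notation assumed here, not copied from the paper) is:

```latex
% Additive hazards model (Lin-Ying form) for covariate vector Z:
\lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z
% contrast with the proportional hazards model:
\lambda(t \mid Z) = \lambda_0(t)\,\exp\left(\beta^{\top} Z\right)
```

Under current status data each subject contributes only the indicator of whether failure has occurred by the single observation time, which is why dependence between the failure and censoring times must be modeled explicitly, here via a copula.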
Comparison of cranial sex determination by discriminant analysis and logistic regression.
Amores-Ampuero, Anabel; Alemán, Inmaculada
2016-04-05
Various methods have been proposed for estimating dimorphism. The objective of this study was to compare sex determination results from cranial measurements using discriminant analysis or logistic regression. The study sample comprised 130 individuals (70 males) of known sex, age, and cause of death from San José cemetery in Granada (Spain). Measurements of 19 neurocranial dimensions and 11 splanchnocranial dimensions were subjected to discriminant analysis and logistic regression, and the percentages of correct classification were compared between the sex functions obtained with each method. The discriminant capacity of the selected variables was evaluated with a cross-validation procedure. The percentage accuracy with discriminant analysis was 78.2% for the neurocranium (82.4% in females and 74.6% in males) and 73.7% for the splanchnocranium (79.6% in females and 68.8% in males). These percentages were higher with logistic regression analysis: 85.7% for the neurocranium (in both sexes) and 94.1% for the splanchnocranium (100% in females and 91.7% in males).
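A generic version of the discriminant-analysis-versus-logistic-regression comparison, using cross-validated accuracy on synthetic two-class data; the craniometric variables themselves are not reproduced here, only the sample size is borrowed.

```python
# Sketch: compare LDA and logistic regression by cross-validated accuracy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 130                        # matches the study's sample size; data invented
y = np.repeat([0, 1], [60, 70])
X = rng.normal(0, 1, (n, 5)) + y[:, None] * 0.8   # shifted class means

acc_lda = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
acc_log = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(acc_lda, acc_log)        # both should beat chance clearly
```

Cross-validation plays the role of the study's cross-validation procedure: it guards against the optimistic bias of scoring a classifier on the data it was fitted to.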
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Testing Different Model Building Procedures Using Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…
NASA Astrophysics Data System (ADS)
Yitayew, M.; Didan, K.; Barreto-munoz, A.
2013-12-01
The Nile Basin is one of the world's water-resources hotspots, home to over 437 million people in ten riparian countries, 54 % of whom (238 million) live directly within the basin. The basin, like all other basins of the world, is facing water resources challenges exacerbated by climate change and increased demand. Nowadays, any water resource management action in the basin has to assess the impacts of climate change in order to predict future water supply and to help in the negotiation process. Presently, there is a lack of basin-wide weather networks to understand the sensitivity of the vegetation cover to the impacts of climate change. Vegetation plays major economic and ecological functions in the basin and provides key services ranging from pastoralism, agricultural production, firewood, and habitat and food sources for the rich wildlife, as well as a major role in the carbon cycle and climate regulation of the region. Under the threat of climate change and incessant anthropogenic pressure, the distribution and services of the region's ecosystems are projected to change. The goal of this work is to assess and characterize how the basin's vegetation productivity, distribution, and phenology have changed over the last 30+ years and what the key climatic drivers of this change are. This work makes use of a newly generated multi-sensor long-term land surface data set about vegetation and phenology. Vegetation indices derived from remotely sensed surface reflectance data are commonly used to characterize phenology or vegetation dynamics accurately and with enough spatial and temporal resolution to support change detection. We used more than 30 years of vegetation index and growing season data from AVHRR and MODIS sensors compiled by the Vegetation Index and Phenology laboratory (VIP LAB) at the University of Arizona. Available climate data on precipitation and temperature for the corresponding 30-year period are also used for this analysis.
We looked at the changes in the vegetation index signal and to a lesser degree the change in land cover and land use over the last 30 years. Using the climate data record we looked at the drivers of this change. The sensitivity of the basin to climate change was assessed using the multi-linear regression analysis on the covariance of the change in key phenology parameters and the two climate drivers considered here. The overall response was very complex owing to the complicated climate regime and topography of the region. Vegetation response was mostly stable in high lands with a slightly decreasing trend over low and mid-elevations. Over the same period we also observed an intensification of agriculture production corresponding to an increase in percent cover and productivity. We also observed a decrease in forest cover associated with land use conversion. These changes were mostly driven by the precipitation regimes with little impact of the temperature. Climate models project an eventual decrease in precipitation and increase in temperature over the basin. Coupled with these results and observations these projected changes point to major challenges to the vegetation cover, productivity, and associated ecosystem services of the Nile basin.
Yu, Xiaojin; Liu, Pei; Min, Jie; Chen, Qiguang
2009-01-01
To explore the application of regression on order statistics (ROS) in estimating nondetects for food exposure assessment, regression on order statistics was adopted in the analysis of a cadmium residue data set from global food contaminant monitoring; the mean residue was estimated using SAS programming and compared with the results from substitution methods. The results show that the ROS method clearly performs better than substitution methods, being robust and convenient for posterior analysis. Regression on order statistics is worth adopting, but more effort should be made on the details of applying this method.
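A rough sketch of the ROS idea for a left-censored sample with a single detection limit: regress the logs of the detected values on normal quantiles at their plotting positions, then impute the nondetects from the fitted line. The data are simulated, and Helsel-style ROS generalizes this to multiple detection limits.

```python
# Simplified regression-on-order-statistics (ROS) for left-censored data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true = rng.lognormal(mean=-1.0, sigma=0.8, size=60)   # e.g. residue, mg/kg
dl = 0.15                                             # single detection limit
detected = np.sort(true[true >= dl])
n_nd = int((true < dl).sum())                         # number of nondetects
n = true.size

# Plotting positions for the full ordered sample; nondetects occupy the
# lowest ranks, detected values the rest.
pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)       # Blom positions
q = stats.norm.ppf(pp)

slope, intercept, *_ = stats.linregress(q[n_nd:], np.log(detected))
imputed = np.exp(intercept + slope * q[:n_nd])        # fill in the nondetects
mean_ros = np.concatenate([imputed, detected]).mean()
print(mean_ros)
```

Because the imputed values come from the fitted distribution rather than a fixed fraction of the detection limit, the resulting mean avoids the bias of DL/2-style substitution.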
Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon
2015-01-01
Background: Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. Methods: In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. Results: The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Conclusion: Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended. PMID:26793655
Regression Analysis of Physician Distribution to Identify Areas of Need: Some Preliminary Findings.
ERIC Educational Resources Information Center
Morgan, Bruce B.; And Others
A regression analysis was conducted of factors that help to explain the variance in physician distribution and which identify those factors that influence the maldistribution of physicians. Models were developed for different geographic areas to determine the most appropriate unit of analysis for the Western Missouri Area Health Education Center…
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R. (Principal Investigator)
1982-01-01
Data analysis procedures for quantification of water quality parameters that are already identified and are known to exist within the water body are considered. The linear multiple-regression technique was examined as a procedure for defining and calibrating data analysis algorithms for such instruments as spectrometers and multispectral scanners.
The Analysis of the Regression-Discontinuity Design in R
ERIC Educational Resources Information Center
Thoemmes, Felix; Liao, Wang; Jin, Ze
2017-01-01
This article describes the analysis of regression-discontinuity designs (RDDs) using the R packages rdd, rdrobust, and rddtools. We discuss similarities and differences between these packages and provide directions on how to use them effectively. We use real data from the Carolina Abecedarian Project to show how an analysis of an RDD can be…
Tu, Yu-Kang; Krämer, Nicole; Lee, Wen-Chung
2012-07-01
In the analysis of trends in health outcomes, an ongoing issue is how to separate and estimate the effects of age, period, and cohort. As these 3 variables are perfectly collinear by definition, regression coefficients in a general linear model are not unique. In this tutorial, we review why identification is a problem, and how this problem may be tackled using partial least squares and principal components regression analyses. Both methods produce regression coefficients that fulfill the same collinearity constraint as the variables age, period, and cohort. We show that, because the constraint imposed by partial least squares and principal components regression is inherent in the mathematical relation among the 3 variables, this leads to more interpretable results. We use one dataset from a Taiwanese health-screening program to illustrate how to use partial least squares regression to analyze the trends in body heights with 3 continuous variables for age, period, and cohort. We then use another dataset of hepatocellular carcinoma mortality rates for Taiwanese men to illustrate how to use partial least squares regression to analyze tables with aggregated data. We use the second dataset to show the relation between the intrinsic estimator, a recently proposed method for the age-period-cohort analysis, and partial least squares regression. We also show that the inclusion of all indicator variables provides a more consistent approach. R code for our analyses is provided in the eAppendix.
MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION
Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR), which does not require ...
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
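The abstract's point that clustered sampling affects the precision of regression estimates is commonly summarized by the textbook design effect, deff = 1 + (m - 1)ρ, where m is the cluster size and ρ the intracluster correlation; the sketch below uses that standard formula, not the paper's specific variance expressions for regression coefficients.

```python
# Variance inflation of an estimate under a clustered sample design,
# relative to simple random sampling of the same size.
def design_effect(cluster_size: int, icc: float) -> float:
    """Textbook design effect: deff = 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

srs_se = 0.05    # standard error under simple random sampling (illustrative)
deff = design_effect(cluster_size=20, icc=0.05)
clustered_se = srs_se * deff ** 0.5    # SEs scale with sqrt(deff)
print(deff, round(clustered_se, 4))
```

Even a modest intracluster correlation of 0.05 nearly doubles the variance at a cluster size of 20, which is why the optimum allocation across sampling stages matters.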
NASA Astrophysics Data System (ADS)
Bae, Gihyun; Huh, Hoon; Park, Sungho
This paper deals with a regression model for light-weight and crashworthiness enhancement design of automotive parts in a frontal car crash. The ULSAB-AVC model is employed for the crash analysis, and effective parts are selected based on the amount of energy absorbed during the crash behavior. Finite element analyses are carried out for the designated design cases in order to investigate the crashworthiness and weight according to the material and thickness of the main energy absorption parts. Based on the simulation results, a regression analysis is performed to construct a regression model for light-weight and crashworthiness enhancement design of automotive parts. An example of weight reduction of the main energy absorption parts demonstrates the validity of the constructed regression model.
Ai, Zi-Sheng; Gao, You-Shui; Sun, Yuan; Liu, Yue; Zhang, Chang-Qing; Jiang, Cheng-Hua
2013-03-01
Risk factors for femoral neck fracture-induced avascular necrosis of the femoral head have not been clearly elucidated in middle-aged and elderly patients. Moreover, the high incidence of screw removal in China and its effect on the fate of the involved femoral head require statistical methods to reflect their intrinsic relationship. Ninety-nine patients older than 45 years with femoral neck fracture were treated by internal fixation between May 1999 and April 2004. Descriptive analysis, interaction analysis between associated factors, single-factor logistic regression, multivariate logistic regression, and detailed interaction analysis were employed to explore potential relationships among associated factors. Avascular necrosis of the femoral head was found in 15 cases (15.2 %). Age × the status of implants (removal vs. maintenance) and gender × the timing of reduction were found to interact in the two-factor interaction analysis. Age, the displacement of fractures, the quality of reduction, and the status of implants were found to be significant factors in single-factor logistic regression analysis. Age, age × the status of implants, and the quality of reduction were found to be significant factors in multivariate logistic regression analysis. In the detailed interaction analysis following multivariate logistic regression, implant removal was the most important risk factor for avascular necrosis in 56-to-85-year-old patients, with a risk ratio of 26.00 (95 % CI = 3.076-219.747). Middle-aged and elderly patients have a lower incidence of avascular necrosis of the femoral head following femoral neck fractures treated by cannulated screws. The removal of cannulated screws can induce a significantly higher incidence of avascular necrosis of the femoral head in elderly patients, while a high-quality reduction helps to reduce avascular necrosis.
Two Paradoxes in Linear Regression Analysis.
Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong
2016-12-25
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
NASA Astrophysics Data System (ADS)
Denli, H. H.; Koc, Z.
2015-12-01
Standards-based valuation of real properties is difficult to apply consistently across time and location. Regression analysis constructs mathematical models that describe or explain relationships that may exist between variables. The problem of identifying price differences of properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. Applied to real estate valuation, where properties are presented in the marketplace with their current characteristics and quantifiers, the method helps to find the factors or variables that are effective in the formation of value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to find a price index, based on information obtained from a real estate web page. The variables used for the analysis are age, size in m2, number of floors of the building, floor number of the unit, and number of rooms. The price of the estate is the dependent variable, whereas the rest are independent variables. Prices of 60 real estates were used for the analysis. Locations with equal prices were identified and plotted on the map, and equivalence curves were drawn to delineate the equal-valued zones as lines.
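A minimal sketch of such a hedonic price regression, using the five variables named in the abstract; the sample size matches the study's 60 listings, but the prices and coefficients are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 60                                       # as in the study
age = rng.integers(0, 40, n)
size_m2 = rng.uniform(50, 200, n)
n_floors = rng.integers(1, 15, n)
floor_no = np.minimum(rng.integers(0, 15, n), n_floors)
rooms = rng.integers(1, 6, n)
X = np.column_stack([age, size_m2, n_floors, floor_no, rooms])

# Hypothetical price surface (in thousands of currency units) plus noise.
price = 100 + 3.0 * size_m2 - 1.5 * age + 5.0 * rooms + rng.normal(0, 20, n)

model = LinearRegression().fit(X, price)
print(dict(zip(["age", "size_m2", "n_floors", "floor_no", "rooms"],
               model.coef_.round(2))))
```

The fitted surface can then be evaluated on a grid of locations to draw the equal-value contour lines the study describes.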
Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E
2013-06-01
Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. 
© 2013, The International Biometric Society.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and the nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and unknown samples; then the Euclidean distance between the net analyte signal of a sample and the net analyte signals of the calibration samples was calculated and used as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.
Hu, Yi-Chung
2014-01-01
On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.
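The paper trains a multilayer perceptron with a genetic algorithm to bound the data interval; as a lightweight stand-in, the sketch below bounds a contaminated dataset with two quantile regressors, which likewise let a few outliers exert only a slight effect on the interval. This is an illustration of outlier-resistant interval bounds, not the paper's GA-MLP method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200)).reshape(-1, 1)
y = np.sin(x.ravel()) + rng.normal(0, 0.2, 200)
y[::40] += 3.0                         # inject a few outliers

# Lower and upper interval bounds via 5th / 95th percentile regression.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(x, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(x, y)

coverage = np.mean((lo.predict(x) <= y) & (y <= hi.predict(x)))
print(round(coverage, 2))
```

Most non-outlying points fall inside the band, while the injected outliers sit above it instead of dragging the upper bound upward.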
The use of cognitive ability measures as explanatory variables in regression analysis.
Junker, Brian; Schofield, Lynne Steuerle; Taylor, Lowell J
2012-12-01
Cognitive ability measures are often taken as explanatory variables in regression analysis, e.g., as a factor affecting a market outcome such as an individual's wage, or a decision such as an individual's education acquisition. Cognitive ability is a latent construct; its true value is unobserved. Nonetheless, researchers often assume that a test score , constructed via standard psychometric practice from individuals' responses to test items, can be safely used in regression analysis. We examine problems that can arise, and suggest that an alternative approach, a "mixed effects structural equations" (MESE) model, may be more appropriate in many circumstances.
Factor analysis and multiple regression between topography and precipitation on Jeju Island, Korea
NASA Astrophysics Data System (ADS)
Um, Myoung-Jin; Yun, Hyeseon; Jeong, Chang-Sam; Heo, Jun-Haeng
2011-11-01
In this study, new factors that influence precipitation were extracted from geographic variables using factor analysis, allowing for an accurate estimation of orographic precipitation. Correlation analysis was also used to examine the relationship between nine topographic variables from digital elevation models (DEMs) and the precipitation on Jeju Island. In addition, a spatial analysis was performed in order to verify the validity of the regression model. From the results of the correlation analysis, it was found that all of the topographic variables had a positive correlation with the precipitation. The relations between the variables also changed in accordance with a change in the precipitation duration. However, upon examining the correlation matrix, no significant relationship between the latitude and the aspect was found. According to the factor analysis, eight topographic variables (latitude being the exception) were found to have a direct influence on the precipitation. Three factors were then extracted from the eight topographic variables. By directly comparing the multiple regression model with the factors (model 1) to the multiple regression model with the topographic variables (model 3), it was found that model 1 did not violate the limits of statistical significance and multicollinearity. As such, model 1 was considered to be appropriate for estimating the precipitation when taking into account the topography. Overall, the multiple regression model using factor analysis was found to be the best method for estimating the orographic precipitation on Jeju Island.
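The two-step scheme (extract factors from correlated topographic variables, then regress precipitation on the factor scores) can be sketched as follows on synthetic data; the real study used eight DEM-derived variables and three extracted factors, mirrored here with invented loadings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 300
latent = rng.normal(0, 1, (n, 3))                  # three underlying factors
loadings = rng.normal(0, 1, (3, 8))
# Eight observed "topographic" variables driven by the latent factors.
topo = latent @ loadings + rng.normal(0, 0.3, (n, 8))
# Hypothetical precipitation depending on the latent factors.
precip = latent @ np.array([4.0, 2.0, 1.0]) + rng.normal(0, 1, n)

# Step 1: factor extraction; step 2: regression on the factor scores.
fa = FactorAnalysis(n_components=3).fit(topo)
scores = fa.transform(topo)
model = LinearRegression().fit(scores, precip)
print(round(model.score(scores, precip), 3))
```

Regressing on three orthogonal-by-construction scores instead of eight correlated variables is what sidesteps the multicollinearity the study reports for model 3.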
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
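A sketch of the Q-profile idea in the plain meta-analysis case (the paper extends it to meta-regression and implements the profiling with a Newton-Raphson procedure; simple root bracketing is used here): collect the values of τ² at which the generalized Q statistic equals the relevant chi-square quantiles. The effect sizes and standard errors are illustrative.

```python
import numpy as np
from scipy import stats, optimize

y = np.array([0.30, 0.10, 0.45, 0.25, 0.60, 0.05])   # study effect sizes
se = np.array([0.12, 0.15, 0.10, 0.20, 0.14, 0.18])  # their standard errors
k = len(y)

def q_stat(tau2):
    """Generalized Q statistic at a trial between-study variance."""
    w = 1.0 / (se ** 2 + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

# Q(tau^2) decreases in tau^2; the CI is the set of tau^2 values whose Q
# lies between the chi-square quantiles with k - 1 degrees of freedom.
def solve(target, upper=10.0):
    if q_stat(0.0) <= target:          # bound truncated at zero
        return 0.0
    return optimize.brentq(lambda t: q_stat(t) - target, 0.0, upper)

ci = (solve(stats.chi2.ppf(0.975, k - 1)),   # lower bound of the 95% CI
      solve(stats.chi2.ppf(0.025, k - 1)))   # upper bound of the 95% CI
print(tuple(round(v, 4) for v in ci))
```

As the abstract notes for its examples, the resulting interval for the between-study variance is typically wide with only a handful of studies.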
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
Lewis, Jason M.
2010-01-01
Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
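Such regional regression equations take a log-linear (power-law) form in the basin characteristics. The sketch below shows the functional form only, with invented coefficients that are NOT the report's published values.

```python
# Hypothetical peak-streamflow regression of the report's general form:
#   Q = a * A^b1 * P^b2 * S^b3
# with drainage area A (mi^2), mean-annual precipitation P (in), and
# main-channel slope S (ft/mi). Coefficients are illustrative only.
def peak_flow_cfs(drainage_area_mi2, precip_in, slope_ft_per_mi,
                  a=90.0, b1=0.60, b2=0.80, b3=0.25):
    """Estimated peak flow in cubic feet per second (hypothetical model)."""
    return a * drainage_area_mi2 ** b1 * precip_in ** b2 * slope_ft_per_mi ** b3

q = peak_flow_cfs(drainage_area_mi2=150.0, precip_in=38.0, slope_ft_per_mi=12.0)
print(round(q, 1))
```

In practice the coefficients come from a generalized least squares fit to the log-transformed gaging-station data, and the equation is only valid within the stated drainage-area limits.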
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how, and explain why, an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
First-in-Human Assessment of the Novel PDE2A PET Radiotracer 18F-PF-05270430
Waterhouse, Rikki N.; Nabulsi, Nabeel; Lin, Shu-Fei; Labaree, David; Ropchan, Jim; Tarabar, Sanela; DeMartinis, Nicholas; Ogden, Adam; Banerjee, Anindita; Huang, Yiyun; Carson, Richard E.
2016-01-01
This was a first-in-human study of the novel phosphodiesterase-2A (PDE2A) PET ligand 18F-PF-05270430. The primary goals were to determine the appropriate tracer kinetic model to quantify brain uptake and to examine the within-subject test–retest variability. Methods: In advance of human studies, radiation dosimetry was determined in nonhuman primates. Six healthy male subjects participated in a test–retest protocol with dynamic scans and metabolite-corrected input functions. Nine brain regions of interest were studied, including the striatum, white matter, neocortical regions, and cerebellum. Multiple modeling methods were applied to calculate volume of distribution (VT) and binding potentials relative to the nondisplaceable tracer in tissue (BPND), concentration of tracer in plasma (BPP), and free tracer in tissue (BPF). The cerebellum was selected as a reference region to calculate binding potentials. Results: The dosimetry study provided an effective dose of less than 0.30 mSv/MBq, with the gallbladder as the critical organ; the human target dose was 185 MBq. There were no adverse events or clinically detectable pharmacologic effects reported. Tracer uptake was highest in the striatum, followed by neocortical regions and white matter, and lowest in the cerebellum. Regional time–activity curves were well fit by multilinear analysis-1, and a 70-min scan duration was sufficient to quantify VT and the binding potentials. BPND, with mean values ranging from 0.3 to 0.8, showed the best intrasubject and intersubject variability and reliability. Test–retest variability in the whole brain (excluding the cerebellum) of VT, BPND, and BPP were 8%, 16%, and 17%, respectively. Conclusion: 18F-PF-05270430 shows promise as a PDE2A PET ligand, albeit with low binding potential values. PMID:27103022
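With the cerebellum as reference region, the binding potential relative to nondisplaceable uptake follows from regional volumes of distribution as BP_ND = V_T / V_ND - 1. The V_T numbers below are invented, though chosen so the resulting BP_ND values fall in the 0.3-0.8 range the study reports.

```python
# Binding potential relative to the nondisplaceable compartment,
# using the cerebellum V_T as the reference (illustrative values).
def bp_nd(vt_target: float, vt_reference: float) -> float:
    """BP_ND = V_T(target) / V_T(reference) - 1."""
    return vt_target / vt_reference - 1.0

vt_cerebellum = 2.0                       # hypothetical reference V_T
regional_vt = {"striatum": 3.6, "neocortex": 3.0, "white matter": 2.6}
for region, vt in regional_vt.items():
    print(region, round(bp_nd(vt, vt_cerebellum), 2))
```

The ordering (striatum highest, cerebellum lowest) matches the regional uptake pattern described in the abstract.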
Effects of plaque lengths on stent surface roughness.
Syaifudin, Achmad; Takeda, Ryo; Sasaki, Katsuhiko
2015-01-01
The physical properties of the stent surface influence the effectiveness of vascular disease treatment after stent deployment. During the expanding process, the stent acquires high-level deformation that could alter either its microstructure or the magnitude of surface roughness. This paper constructed a finite element simulation to observe the changes in surface roughness during the stenting process. Structural transient dynamic analysis was performed using ANSYS to identify the deformation after the stent is placed in a blood vessel. Two types of bare metal stents are studied: a Palmaz type and a Sinusoidal type. The relationship between plaque length and the changes in surface roughness was investigated by utilizing three different plaque lengths: longer than the stent, shorter than the stent, and the same length as the stent. In order to reduce computational time, 3D cyclical and translational symmetry was implemented into the FE model. The material models used were defined as multilinear isotropic for the stent and hyperelastic for the balloon, plaque, and vessel wall. The correlation between the plastic deformation and the changes in surface roughness was obtained by intermittent pure tensile tests using a specimen whose chemical composition was similar to that of the actual stent material. Once the plastic strain is obtained from the FE simulation, the surface roughness can be assessed thoroughly. The study found that the plaque size relative to stent length significantly influenced the critical changes in surface roughness. A stent length equal to the plaque length was found to be preferable, as it generated only a moderate change in surface roughness. This effect was less influential for the Sinusoidal stent.
NASA Astrophysics Data System (ADS)
Ge, Xinlei; Setyan, Ari; Sun, Yele; Zhang, Qi
2012-10-01
Organic aerosols (OA) were studied in Fresno, California, in winter 2010 with an Aerodyne High Resolution Time-of-Flight Aerosol Mass Spectrometer (HR-ToF-AMS). OA dominated the submicron aerosol mass (average = 67%) with an average concentration of 7.9 μg m-3 and a nominal formula of C1H1.59N0.014O0.27S0.00008, which corresponds to an average organic mass-to-carbon ratio of 1.50. Three primary OA (POA) factors and one oxygenated OA factor (OOA) representative of secondary OA (SOA) were identified via Positive Matrix Factorization of the high-resolution mass spectra. The three POA factors, which include a traffic-related hydrocarbon-like OA (HOA), a cooking OA (COA), and a biomass burning OA (BBOA) released from residential heating, accounted for an average 57% of the OA mass and up to 80% between 6 and 9 P.M., during which enhanced emissions from evening rush hour traffic, dinner cooking, and residential wood burning were exacerbated by a low mixed layer height. The mass-based size distributions of the OA factors were estimated based on multilinear analysis of the size-resolved mass spectra of organics. Both HOA and BBOA peaked at ~140 nm in vacuum aerodynamic diameter (Dva) while OOA peaked at an accumulation mode of ~460 nm. COA exhibited a unique size distribution with two size modes centered at ~200 nm and 450 nm, respectively. This study highlights the leading roles played by anthropogenic POA emissions, primarily from traffic, cooking and residential heating, in aerosol pollution in Fresno in wintertime.
Groundwater salinity in a floodplain forest impacted by saltwater intrusion.
Kaplan, David A; Muñoz-Carpena, Rafael
2014-11-15
Coastal wetlands occupy a delicate position at the intersection of fresh and saline waters. Changing climate and watershed hydrology can lead to saltwater intrusion into historically freshwater systems, causing plant mortality and loss of freshwater habitat. Understanding the hydrological functioning of tidally influenced floodplain forests is essential for advancing ecosystem protection and restoration goals; however, finding direct relationships between hydrological inputs and floodplain hydrology is complicated by interactions between surface water, groundwater, and atmospheric fluxes in variably saturated soils with heterogeneous vegetation and topography. Thus, an alternative method for identifying common trends and causal factors is required. Dynamic factor analysis (DFA), a time series dimension reduction technique, models temporal variation in observed data as linear combinations of common trends, which represent unexplained common variability, and explanatory variables. DFA was applied to model shallow groundwater salinity in the forested floodplain wetlands of the Loxahatchee River (Florida, USA), where altered watershed hydrology has led to changing hydroperiod and salinity regimes and undesired vegetative changes. Long-term, high-resolution groundwater salinity datasets revealed dynamics over seasonal and yearly time periods as well as over tidal cycles and storm events. DFA identified shared trends among salinity time series, and a full dynamic factor model simulated observed series well (overall coefficient of efficiency, Ceff=0.85; 0.52≤Ceff≤0.99). A reduced multilinear model based solely on explanatory variables identified in the DFA had fair to good results (Ceff=0.58; 0.38≤Ceff≤0.75) and may be used to assess the effects of restoration and management scenarios on shallow groundwater salinity in the Loxahatchee River floodplain. Copyright © 2014 Elsevier B.V. All rights reserved.
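The coefficient of efficiency (Ceff) used above to score both the full dynamic factor model and the reduced multilinear model can be computed directly. A minimal sketch, assuming the standard Nash-Sutcliffe form (the data below are made up for illustration):

```python
def coefficient_of_efficiency(observed, modeled):
    """Nash-Sutcliffe coefficient of efficiency: 1 - SSE / SST.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [2.0, 4.0, 6.0, 8.0]
print(coefficient_of_efficiency(obs, obs))                    # -> 1.0 (perfect fit)
print(coefficient_of_efficiency(obs, [3.0, 4.0, 5.0, 8.0]))   # -> 0.9
```

Values below zero indicate a model worse than simply predicting the observed mean.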
[How to fit and interpret multilevel models using SPSS].
Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael
2007-05-01
Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (version 11 or later) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.
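Model (1), one-way analysis of variance with random effects, can also be fitted outside SPSS. A rough sketch using classical method-of-moments (ANOVA) estimators on simulated balanced data; the group structure and parameter values are invented for illustration:

```python
import random

def anova_random_effects(groups):
    """One-way random-effects ANOVA via method-of-moments: estimates the
    within-group variance, the between-group variance, and the intraclass
    correlation (ICC). Assumes a balanced design (equal group sizes)."""
    k = len(groups)                       # number of groups
    n = len(groups[0])                    # cases per group
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    ss_between = n * sum((m - grand) ** 2 for m in means)
    var_within = ss_within / (k * (n - 1))          # MS within
    ms_between = ss_between / (k - 1)
    var_between = max(0.0, (ms_between - var_within) / n)
    icc = var_between / (var_between + var_within)
    return var_within, var_between, icc

random.seed(1)
# 30 groups of 20 cases; true group-effect sd = 2, residual sd = 1,
# so the true ICC is 4 / (4 + 1) = 0.8.
groups = []
for _ in range(30):
    u = random.gauss(0, 2)
    groups.append([10 + u + random.gauss(0, 1) for _ in range(20)])
vw, vb, icc = anova_random_effects(groups)
print(vw, vb, icc)
```

The ICC quantifies how much of the total variance lies between groups, which is the usual first diagnostic before fitting richer multilevel models.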
Correlative and multivariate analysis of increased radon concentration in underground laboratory.
Maletić, Dimitrije M; Udovičić, Vladimir I; Banjanac, Radomir M; Joković, Dejan R; Dragić, Aleksandar L; Veselinović, Nikola B; Filipović, Jelena
2014-11-01
The results of an analysis, using correlative and multivariate methods as developed for data analysis in high-energy physics and implemented in the Toolkit for Multivariate Analysis software package, of the relation between variations in increased radon concentration and climate variables in a shallow underground laboratory are presented. Multivariate regression analysis identified a number of multivariate methods which can give a good evaluation of increased radon concentrations based on climate variables. The use of the multivariate regression methods will enable the investigation of the relation of specific climate variables to increased radon concentrations by analysis of regression methods, resulting in a 'mapped' underlying functional behaviour of radon concentrations depending on a wide spectrum of climate variables. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Brenn, T; Arnesen, E
1985-01-01
For comparative evaluation, discriminant analysis, logistic regression and Cox's model were used to select risk factors for total and coronary deaths among 6595 men aged 20-49 followed for 9 years. Groups with mortality between 5 and 93 per 1000 were considered. Discriminant analysis selected variable sets only marginally different from the logistic and Cox methods which always selected the same sets. A time-saving option, offered for both the logistic and Cox selection, showed no advantage compared with discriminant analysis. Analysing more than 3800 subjects, the logistic and Cox methods consumed, respectively, 80 and 10 times more computer time than discriminant analysis. When including the same set of variables in non-stepwise analyses, all methods estimated coefficients that in most cases were almost identical. In conclusion, discriminant analysis is advocated for preliminary or stepwise analysis, otherwise Cox's method should be used.
Bias due to two-stage residual-outcome regression analysis in genetic association studies.
Demissie, Serkalem; Cupples, L Adrienne
2011-11-01
Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation (r²) between the SNP and the covariate. For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
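The attenuation by a factor of 1 - r² described above is easy to reproduce by simulation. A sketch with invented effect sizes: both the SNP and the covariate have unit effects, their correlation is 0.5 (so r² = 0.25), and the two-stage slope should shrink to about 75% of the truth while the MLR slope stays near 1:

```python
import math
import random

def slope(x, y):
    """Simple-regression slope of y on x: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residuals(y, x):
    """Residuals of a simple regression of y on x (with intercept)."""
    b = slope(x, y)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

random.seed(7)
n = 20000
rho = 0.5  # corr(SNP, covariate), so r^2 = 0.25
snp = [random.gauss(0, 1) for _ in range(n)]
cov = [rho * s + math.sqrt(1 - rho ** 2) * random.gauss(0, 1) for s in snp]
y = [s + c + random.gauss(0, 0.5) for s, c in zip(snp, cov)]  # true SNP effect = 1

# Two-stage: adjust the outcome for the covariate, then regress on the SNP.
two_stage = slope(snp, residuals(y, cov))
# MLR slope for the SNP, via the Frisch-Waugh partialling-out identity.
mlr = slope(residuals(snp, cov), y)
print(two_stage, mlr)  # ~0.75 (attenuated by 1 - r^2) vs ~1.0
```

The two-stage residual has already had the SNP's correlated component removed along with the covariate, which is exactly where the toward-the-null bias comes from.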
Variable Selection in Logistic Regression.
1987-06-01
Bai, Z. D.; Krishnaiah, P. R.; Zhao, L. C. — Center for Multivariate Analysis, University of Pittsburgh (contract F49620-85-C-0008)
Two Paradoxes in Linear Regression Analysis
FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong
2016-01-01
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214
Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen
2014-01-01
It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916
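The quantile regression models used above (such as Q0.25) generalize least squares by minimizing the asymmetric check ("pinball") loss instead of squared error. A minimal illustration on made-up data, using an intercept-only model, whose minimizer is simply the empirical quantile:

```python
def pinball_loss(q, data, theta):
    """Check (pinball) loss at quantile level q for a constant prediction theta:
    under-predictions are weighted by q, over-predictions by (1 - q)."""
    return sum(q * (x - theta) if x >= theta else (1 - q) * (theta - x)
               for x in data)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
# Minimizing the q = 0.25 pinball loss over candidate constants
# recovers the empirical lower quartile of the sample.
best = min(data, key=lambda t: pinball_loss(0.25, data, t))
print(best)  # -> 3.0 (the empirical 25% quantile)
```

A full quantile regression replaces the constant theta with a linear predictor and minimizes the same loss over its coefficients, typically via linear programming.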
Composite marginal quantile regression analysis for longitudinal adolescent body mass index data.
Yang, Chi-Chuan; Chen, Yi-Hau; Chang, Hsing-Yi
2017-09-20
Childhood and adolescent overweight or obesity, which may be quantified through the body mass index (BMI), is strongly associated with adult obesity and other health problems. Motivated by the child and adolescent behaviors in long-term evolution (CABLE) study, we are interested in individual, family, and school factors associated with marginal quantiles of longitudinal adolescent BMI values. We propose a new method for composite marginal quantile regression analysis for longitudinal outcome data, which performs marginal quantile regressions at multiple quantile levels simultaneously. The proposed method extends the quantile regression coefficient modeling method introduced by Frumento and Bottai (Biometrics 2016; 72:74-84) to longitudinal data, accounting suitably for the correlation structure in longitudinal observations. A goodness-of-fit test for the proposed modeling is also developed. Simulation results show that the proposed method can be much more efficient than the analysis without taking correlation into account and the analysis performing separate quantile regressions at different quantile levels. The application to the longitudinal adolescent BMI data from the CABLE study demonstrates the practical utility of our proposal. Copyright © 2017 John Wiley & Sons, Ltd.
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.
Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data
Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.
2014-01-01
In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified. Unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438
Linear regression analysis of survival data with missing censoring indicators.
Wang, Qihua; Dinse, Gregg E
2011-04-01
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Lauer, Axel; von Schneidemesser, Erika; Butler, Tim
2017-04-01
Many European cities continue to struggle with meeting the European air quality limits for NO2. In Berlin, Germany, most of the exceedances in NO2 recorded at monitoring sites near busy roads can be largely attributed to emissions from traffic. In order to assess the impact of changes in traffic emissions on air quality at policy-relevant scales, we combine the regional atmosphere-chemistry transport model WRF-Chem at a resolution of 1 km × 1 km with a statistical downscaling approach. Here, we build on the recently published study evaluating the performance of a WRF-Chem setup in representing observed urban background NO2 concentrations from Kuik et al. (2016) and extend this setup by developing and testing an approach to statistically downscale simulated urban background NO2 concentrations to street level. The approach uses a multilinear regression model to relate roadside NO2 concentrations observed with the municipal monitoring network to observed NO2 concentrations at urban background sites and observed traffic counts. For this, the urban background NO2 concentrations are decomposed into a long-term, a synoptic, and a diurnal component using the Kolmogorov-Zurbenko filtering method. We estimate the coefficients of the regression model for five different roadside stations in Berlin representing different street types. In a second step we combine the coefficients with simulated urban background concentrations and observed traffic counts, in order to estimate roadside NO2 concentrations based on the results obtained with WRF-Chem at the five selected stations. In a third step, we extrapolate the NO2 concentrations to all major roads in Berlin. The latter is based on available data for Berlin of daily mean traffic counts, diurnal and weekly cycles of traffic, as well as simulated urban background NO2 concentrations.
We evaluate the NO2 concentrations estimated with this method at street level for Berlin with additional observational data from stationary measurements and mobile measurements conducted during a campaign in summer 2014. The results show that this approach allows us to estimate NO2 concentrations at roadside reasonably well. The approach can be applied when observations show a strong correlation between roadside NO2 concentrations and traffic emissions from a single type of road. The method, however, shows weaknesses for intersections where observed NO2 concentrations are influenced by traffic on several different roads. We then apply this downscaling approach to estimate the impact of different traffic emission scenarios both on urban background and street level NO2 concentrations. References Kuik, F., Lauer, A., Churkina, G., Denier van der Gon, H. A. C., Fenner, D., Mar, K. A., and Butler, T. M.: Air quality modelling in the Berlin-Brandenburg region using WRF-Chem v3.7.1: sensitivity to resolution of model grid and input data, Geosci. Model Dev., 9, 4339-4363, doi:10.5194/gmd-9-4339-2016, 2016.
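The Kolmogorov-Zurbenko filtering step mentioned above is, at its core, an iterated moving average: applying an m-point centered mean k times separates the long-term component from synoptic and diurnal variation. A simplified sketch; the edge handling (shrinking windows) and the toy series are our own choices, not the paper's:

```python
def moving_average(x, m):
    """Centered m-point moving average (m odd); the window shrinks at the edges."""
    h = m // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - h):i + h + 1]
        out.append(sum(window) / len(window))
    return out

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter: k iterations of an m-point moving average."""
    for _ in range(k):
        x = moving_average(x, m)
    return x

series = [0, 0, 10, 0, 0, 0, 10, 0, 0]   # spiky toy 'roadside' signal
smooth = kz_filter(series, m=3, k=2)
print(smooth)
```

In practice the window length and iteration count are chosen so the filter's cutoff matches the timescale (long-term, synoptic, or diurnal) being extracted; the components are then obtained by differencing successive filter outputs.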
Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D
2017-11-01
Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to category-type dependent variables, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
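For intuition, the K-means half of the comparison can be written in a few lines in one dimension (Lloyd's algorithm; the data and k below are made up). The EM counterpart would additionally carry cluster weights and variances and use soft, probabilistic assignments instead of hard ones:

```python
import random

def kmeans_1d(data, k, iters=20, seed=0):
    """Lloyd's algorithm in one dimension: alternate between assigning each
    point to its nearest centroid and moving each centroid to its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[j].append(x)
        # An empty cluster keeps its old centroid.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.2, 8.8, 9.1]
print(kmeans_1d(data, k=2))   # centroids near 1 and 9
```

The hard assignment step is what distinguishes K-means from EM on a Gaussian mixture, where each point contributes fractionally to every cluster's mean.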
Farno, E; Coventry, K; Slatter, P; Eshtiaghi, N
2018-06-15
Sludge pumps in wastewater treatment plants are often oversized due to uncertainty in the calculation of pressure drop. This issue costs industry millions of dollars to purchase and operate the oversized pumps. Besides cost, the higher electricity consumption is associated with extra CO2 emissions, which have a large environmental impact. Calculation of pressure drop via current pipe flow theory requires model estimation of flow curve data, which depends on regression analysis and also varies with natural variation of rheological data. This study investigates the impact of variation in rheological data and regression analysis on the variation of pressure drop calculated via current pipe flow theories. Results compare the variation of calculated pressure drop between different models and regression methods and suggest the suitability of each method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Henrard, S; Speybroeck, N; Hermans, C
2015-11-01
Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method, developed by Breiman in 1984, is non-parametric and non-linear, based on the repeated partitioning of a sample into subgroups according to a certain criterion. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have to date been published. Two examples using CART analysis and previously published in this field are didactically explained in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
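The core of a regression tree (RT) is the partitioning step: try every threshold of every predictor and keep the split that most reduces within-node squared error. A minimal single-predictor, single-split sketch on made-up data; a real CART implementation adds recursion over the resulting subgroups, stopping rules, and pruning:

```python
def sse(values):
    """Sum of squared errors around the mean of a node."""
    if not values:
        return 0.0
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_split(x, y):
    """One CART partitioning step for a regression tree: scan candidate
    thresholds on x and keep the one minimizing total within-node SSE."""
    best = (None, float("inf"))
    for t in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = sse(left) + sse(right)
        if score < best[1]:
            best = (t, score)
    return best

age = [5, 10, 15, 20, 25, 30]
outcome = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9]
threshold, score = best_split(age, outcome)
print(threshold)   # -> 15 (separates the low-outcome from the high-outcome cases)
```

A classification tree (CT) follows the same scheme but scores candidate splits with an impurity measure such as the Gini index instead of SSE.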
Using Refined Regression Analysis To Assess The Ecological Services Of Restored Wetlands
A hierarchical approach to regression analysis of wetland water treatment was conducted to determine which factors are the most appropriate for characterizing wetlands of differing structure and function. We used this approach in an effort to identify the types and characteristi...
Regression Analysis: Instructional Resource for Cost/Managerial Accounting
ERIC Educational Resources Information Center
Stout, David E.
2015-01-01
This paper describes a classroom-tested instructional resource, grounded in principles of active learning and constructivism, that embraces two primary objectives: "demystify" for accounting students technical material from statistics regarding ordinary least-squares (OLS) regression analysis--material that students may find obscure or…
Ultrasound-enhanced bioscouring of greige cotton: regression analysis of process factors
USDA-ARS?s Scientific Manuscript database
Process factors of enzyme concentration, time, power and frequency were investigated for ultrasound-enhanced bioscouring of greige cotton. A fractional factorial experimental design and subsequent regression analysis of the process factors were employed to determine the significance of each factor a...
London Measure of Unplanned Pregnancy: guidance for its use as an outcome measure
Hall, Jennifer A; Barrett, Geraldine; Copas, Andrew; Stephenson, Judith
2017-01-01
Background The London Measure of Unplanned Pregnancy (LMUP) is a psychometrically validated measure of the degree of intention of a current or recent pregnancy. The LMUP is increasingly being used worldwide, and can be used to evaluate family planning or preconception care programs. However, beyond recommending the use of the full LMUP scale, there is no published guidance on how to use the LMUP as an outcome measure. Ordinal logistic regression has been recommended informally, but studies published to date have all used binary logistic regression and dichotomized the scale at different cut points. There is thus a need for evidence-based guidance to provide a standardized methodology for multivariate analysis and to enable comparison of results. This paper makes recommendations for the regression method for analysis of the LMUP as an outcome measure. Materials and methods Data collected from 4,244 pregnant women in Malawi were used to compare five regression methods: linear, logistic with two cut points, and ordinal logistic with either the full or grouped LMUP score. The recommendations were then tested on the original UK LMUP data. Results There were small but no important differences in the findings across the regression models. Logistic regression resulted in the largest loss of information, and assumptions were violated for the linear and ordinal logistic regression. Consequently, robust standard errors were used for linear regression and a partial proportional odds ordinal logistic regression model attempted. The latter could only be fitted for grouped LMUP score. Conclusion We recommend the linear regression model with robust standard errors to make full use of the LMUP score when analyzed as an outcome measure. Ordinal logistic regression could be considered, but a partial proportional odds model with grouped LMUP score may be required. Logistic regression is the least-favored option, due to the loss of information. 
For logistic regression, the cut point for un/planned pregnancy should be between nine and ten. These recommendations will standardize the analysis of LMUP data and enhance comparability of results across studies. PMID:28435343
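The recommended model, linear regression of the full LMUP score with robust standard errors, can be sketched for the simple one-predictor case. The HC0 "sandwich" variance below is one common robust estimator; the paper does not specify which variant it used, and the scores and exposure here are invented:

```python
import math

def ols_robust(x, y):
    """Simple-regression OLS slope with a heteroskedasticity-robust
    (HC0 'sandwich') standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [b - (intercept + slope * a) for a, b in zip(x, y)]
    # HC0: Var(slope) = sum((x_i - mean_x)^2 * e_i^2) / sxx^2
    var_slope = sum(((a - mx) ** 2) * (e ** 2)
                    for a, e in zip(x, resid)) / sxx ** 2
    return slope, math.sqrt(var_slope)

# Hypothetical LMUP-style scores (0-12) against a binary exposure.
lmup = [2, 3, 4, 5, 9, 10, 11, 12]
exposed = [0, 0, 0, 0, 1, 1, 1, 1]
b, se = ols_robust(exposed, lmup)
print(b, se)   # slope 7.0 = difference in group means
```

With a binary predictor the OLS slope is just the difference in mean LMUP score between groups; the robust variance relaxes the equal-error-variance assumption that the paper found violated for plain linear regression.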
Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga
2006-08-01
A quantitative structure-activity relationship was obtained by applying Multiple Linear Regression Analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and prediction stages.
The use of cognitive ability measures as explanatory variables in regression analysis
Junker, Brian; Schofield, Lynne Steuerle; Taylor, Lowell J
2015-01-01
Cognitive ability measures are often taken as explanatory variables in regression analysis, e.g., as a factor affecting a market outcome such as an individual’s wage, or a decision such as an individual’s education acquisition. Cognitive ability is a latent construct; its true value is unobserved. Nonetheless, researchers often assume that a test score, constructed via standard psychometric practice from individuals’ responses to test items, can be safely used in regression analysis. We examine problems that can arise, and suggest that an alternative approach, a “mixed effects structural equations” (MESE) model, may be more appropriate in many circumstances. PMID:26998417
Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki
2014-12-01
This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated by the multiple regression equation were reliable at 70% probability within a 40-dB-wide prediction interval. Prediction of HLpost by the multiple regression model may be useful for estimating the hearing prognosis of ISSNHL. © The Author(s) 2014.
MULGRES: a computer program for stepwise multiple regression analysis
A. Jeff Martin
1971-01-01
MULGRES is a computer program source deck that is designed for multiple regression analysis employing the technique of stepwise deletion in the search for most significant variables. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.
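Stepwise deletion of the kind MULGRES implements can be sketched as backward elimination; here a simple adjusted-R² criterion stands in for the program's significance tests (an assumption for illustration, not MULGRES's actual rule):

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit; X already contains the intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res = resid @ resid
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def stepwise_delete(X, y, names):
    """Backward elimination: drop the predictor whose removal improves
    adjusted R^2, and stop when no deletion helps."""
    keep = list(range(X.shape[1]))
    current = adj_r2(np.column_stack([np.ones(len(y)), X[:, keep]]), y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            trial_score = adj_r2(np.column_stack([np.ones(len(y)), X[:, trial]]), y)
            if trial_score > current:
                current, keep, improved = trial_score, trial, True
                break
    return [names[k] for k in keep], current

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 1, n)  # x1, x3, x4 are noise

selected, score = stepwise_delete(X, y, ["x0", "x1", "x2", "x3", "x4"])
print(selected, round(score, 3))
```

The informative predictors survive the deletion loop while pure-noise columns tend to be discarded.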
CatReg Software for Categorical Regression Analysis (May 2016)
CatReg 3.0 is a Microsoft Windows enhanced version of the Agency’s categorical regression analysis (CatReg) program. CatReg complements EPA’s existing Benchmark Dose Software (BMDS) by greatly enhancing a risk assessor’s ability to determine whether data from separate toxicologic...
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
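The linearize-and-iterate procedure described is the Gauss-Newton method; a minimal sketch for one exponential model, y = a·exp(b·x), with the Jacobian coming from the first-order Taylor expansion (synthetic data, not the original FORTRAN implementation):

```python
import numpy as np

def fit_exponential(x, y, a, b, iters=50):
    """Gauss-Newton: linearize y = a*exp(b*x) with a first-order Taylor
    expansion in (a, b), solve the linear least-squares step, repeat."""
    for _ in range(iters):
        f = a * np.exp(b * x)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # df/da, df/db
        step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + step[0], b + step[1]
        if np.max(np.abs(step)) < 1e-10:
            break
    return a, b

rng = np.random.default_rng(3)
x = np.linspace(0, 2, 40)
y = 3.0 * np.exp(0.8 * x) * (1 + rng.normal(0, 0.02, 40))  # synthetic data

# start from a log-linear fit (valid here because y > 0), then refine
slope, intercept = np.polyfit(x, np.log(y), 1)
a, b = fit_exponential(x, y, a=np.exp(intercept), b=slope)
print(round(a, 3), round(b, 3))
```

Initializing from the log-linear fit keeps the iteration in Gauss-Newton's basin of convergence; in practice a damped (Levenberg-Marquardt) step is used when no such starting point is available.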
ERIC Educational Resources Information Center
Koon, Sharon; Petscher, Yaacov
2015-01-01
The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by…
Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.
Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias
2016-01-01
To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Suparti; Wahyu Utami, Tiani
2018-03-01
Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict in its assumptions, whereas the nonparametric regression model requires no assumptions about the model's form. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must be determined first. In a time series, the response variable is the value at time t (yt), while the predictor variables are the significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One advantage of nonparametric regression using a Fourier series is its ability to handle data with a trigonometric pattern. Modeling with a Fourier series requires the parameter K, whose value can be determined by the Generalized Cross Validation (GCV) method. In modeling inflation for the transportation, communication, and financial services sectors, the Fourier series approach yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
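Choosing the number of Fourier terms K by Generalized Cross Validation can be sketched as follows; the basis, data, and K range are illustrative stand-ins, not the inflation series of the study:

```python
import numpy as np

def fourier_design(t, K):
    """Design matrix: intercept, linear trend, and K sine/cosine pairs."""
    cols = [np.ones_like(t), t]
    for k in range(1, K + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

def gcv_score(t, y, K):
    """GCV(K) = n * SSE / (n - tr(H))^2 for the linear smoother H."""
    X = fourier_design(t, K)
    H = X @ np.linalg.pinv(X)           # hat matrix of the fit
    resid = y - H @ y
    n = len(y)
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, 200)
# synthetic signal with harmonics at k = 2 and k = 5
y = np.sin(2 * t) + 0.5 * np.cos(5 * t) + rng.normal(0, 0.2, 200)

scores = {K: gcv_score(t, y, K) for K in range(1, 11)}
best_K = min(scores, key=scores.get)
print("K chosen by GCV:", best_K)
```

GCV trades goodness of fit against effective degrees of freedom, so K stops growing once additional harmonics only chase noise.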
Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson
2010-08-01
Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this review was to assess machine learning alternatives to logistic regression, which may accomplish the same goals but with fewer assumptions or greater accuracy. We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (classification and regression trees [CART]), and meta-classifiers (in particular, boosting). Although the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and, to a lesser extent, decision trees (particularly CART), appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Field calibration of electrochemical NO2 sensors in a citizen science context
NASA Astrophysics Data System (ADS)
Mijling, Bas; Jiang, Qijun; de Jonge, Dave; Bocconi, Stefano
2018-03-01
In many urban areas the population is exposed to elevated levels of air pollution. However, real-time air quality is usually only measured at a few locations. These measurements provide a general picture of the state of the air, but they are unable to monitor local differences. New low-cost sensor technology has been available for several years now, and has the potential to extend official monitoring networks significantly, even though the current generation of sensors suffers from various technical issues. Citizen science experiments based on these sensors must be designed carefully to avoid generating data of poor or even useless quality. This study explores the added value of the 2016 Urban AirQ campaign, which focused on measuring nitrogen dioxide (NO2) in Amsterdam, the Netherlands. Sixteen low-cost air quality sensor devices were built and distributed among volunteers living close to roads with high traffic volume for a 2-month measurement period. Each electrochemical sensor was calibrated in-field next to an air monitoring station during an 8-day period, resulting in R2 ranging from 0.3 to 0.7. When temperature and relative humidity are included in a multilinear regression approach, the NO2 accuracy is improved significantly, with R2 ranging from 0.6 to 0.9. Recalibration after the campaign is crucial, as all sensors show a significant signal drift over the 2-month measurement period. The measurement series between the calibration periods can be corrected after the measurement period by taking a weighted average of the calibration coefficients. Validation against an independent air monitoring station shows good agreement. Using our approach, the standard deviation of a typical sensor device for NO2 measurements was found to be 7 µg m-3, provided that temperatures are below 30 °C.
Stronger ozone titration on street sides causes an underestimation of NO2 concentrations, which 75% of the time is less than 2.3 µg m-3. Our findings show that citizen science campaigns using low-cost sensors based on the current generation of electrochemical NO2 sensors may provide useful complementary data on local air quality in an urban setting, provided that the experiments are properly set up and the data are carefully analysed.
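The multilinear calibration step, regressing reference NO2 on the raw electrochemical signal plus temperature and relative humidity, can be sketched on synthetic co-location data (the coefficients, sampling rate, and noise levels below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic co-location data standing in for the 8-day calibration
# period next to a reference monitoring station.
n = 8 * 24                                     # hourly samples over 8 days
no2_ref = rng.uniform(10, 60, n)               # reference NO2, ug/m3
temp = rng.uniform(5, 25, n)                   # temperature, deg C
rh = rng.uniform(40, 90, n)                    # relative humidity, %
# raw electrochemical signal with temperature/humidity cross-sensitivity
raw = 0.9 * no2_ref + 0.6 * temp - 0.1 * rh + rng.normal(0, 2, n)

# Multilinear calibration: NO2 ~ raw signal + temperature + humidity
X = np.column_stack([np.ones(n), raw, temp, rh])
coef, *_ = np.linalg.lstsq(X, no2_ref, rcond=None)

pred = X @ coef
ss_res = np.sum((no2_ref - pred) ** 2)
ss_tot = np.sum((no2_ref - no2_ref.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"calibration R2 = {r2:.3f}")
```

Including the meteorological covariates absorbs the cross-sensitivities that a signal-only calibration would leave in the residuals, which mirrors the R2 improvement reported above.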
NASA Astrophysics Data System (ADS)
Ritter, Mathias; Müller, Mathias D.; Tsai, Ming-Yi; Parlow, Eberhard
2013-10-01
The fully coupled chemistry module (WRF-Chem) within the Weather Research and Forecasting (WRF) model has been implemented over a Swiss domain for the years 2002 and 1991. The very complex terrain requires a high horizontal resolution (2 × 2 km2), which is achieved by nesting the Swiss domain into a coarser European one. The temporal and spatial distribution of O3, NO2 and PM10 as well as temperature and solar radiation are evaluated against ground-based measurements. The model performs well for the meteorological parameters, with Pearson correlation coefficients of 0.92 for temperature and 0.88-0.89 for solar radiation. Temperature has root mean square errors (RMSE) of 3.30 K and 3.51 K for 2002 and 1991, and solar radiation has RMSEs of 122.92 and 116.35 for 2002 and 1991, respectively. For the modeled air pollutants, a multi-linear regression post-processing was used to eliminate systematic bias. Seasonal variations of post-processed air pollutants are represented correctly. However, short-term peaks of several days are not captured by the model. Averaged daily maximum and daily values of O3 achieved Pearson correlation coefficients of 0.69-0.77, whereas averaged NO2 and PM10 had the highest correlations for yearly average values (0.68-0.78). The spatial distribution reveals the importance of PM10 advection from the Po valley to southern Switzerland (Ticino). The absolute errors range from -10 to 15 μg/m3 for ozone, -9 to 3 μg/m3 for NO2 and -4 to 3 μg/m3 for PM10. However, larger errors occur along heavily trafficked roads, in street canyons or on mountains. We also compare yearly modeled results against a dedicated Swiss dispersion model for NO2 and PM10. The dedicated dispersion model has a slightly better statistical performance, but WRF-Chem is capable of computing the temporal evolution of three-dimensional data for a variety of air pollutants and meteorological parameters.
Overall, WRF-Chem with the application of post-processing algorithms can produce encouraging statistical values over very complex terrain which are competitive with similar studies.
NASA Astrophysics Data System (ADS)
Palm, Brett Brian
Secondary organic aerosols (SOA) in the atmosphere play an important role in air quality, human health, and climate. However, the sources, formation pathways, and fate of SOA are poorly constrained. In this dissertation, I present development and application of the oxidation flow reactor (OFR) technique for studying SOA formation from OH, O3, and NO3 oxidation of ambient air. With a several-minute residence time and a portable design with no inlet, OFRs are particularly well-suited for this purpose. I first introduce the OFR concept, and discuss several advances I have made in performing and interpreting OFR experiments. This includes estimating oxidant exposures, modeling the fate of low-volatility gases in the OFR (wall loss, condensation, and oxidation), and comparing SOA yields of single precursors in the OFR with yields measured in environmental chambers. When these experimental details are carefully considered, SOA formation in an OFR can be more reliably compared with ambient SOA formation processes. I then present an overview of what OFR measurements have taught us about SOA formation in the atmosphere. I provide a comparison of SOA formation from OH, O3, and NO3 oxidation of ambient air in a wide variety of environments, from rural forests to urban air. In a rural forest, the SOA formation correlated with biogenic precursors (e.g., monoterpenes). In urban air, it correlated instead with reactive anthropogenic tracers (e.g., trimethylbenzene). In mixed-source regions, the SOA formation did not correlate well with any single precursor, but could be predicted by multilinear regression from several precursors. Despite these correlations, the concentrations of speciated ambient VOCs could only explain approximately 10-50% of the total SOA formed from OH oxidation. In contrast, ambient VOCs could explain all of the SOA formation observed from O3 and NO3 oxidation. 
Evidence suggests that lower-volatility gases (semivolatile and intermediate-volatility organic compounds; S/IVOCs) were present in ambient air and were the likely source of SOA formation that could not be explained by VOCs. These measurements show that S/IVOCs likely play an important intermediary role in ambient SOA formation in all of the sampled locations, from rural forests to urban air.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
Meta-regression analysis of commensal and pathogenic Escherichia coli survival in soil and water.
Franz, Eelco; Schijven, Jack; de Roda Husman, Ana Maria; Blaak, Hetty
2014-06-17
The extent to which pathogenic and commensal E. coli (respectively PEC and CEC) can survive, and which factors predominantly determine the rate of decline, are crucial issues from a public health point of view. The goal of this study was to provide a quantitative summary of the variability in E. coli survival in soil and water over a broad range of individual studies and to identify the most important sources of variability. To that end, a meta-regression analysis on available literature data was conducted. The considerable variation in reported decline rates indicated that the persistence of E. coli is not easily predictable. The meta-analysis demonstrated that for soil and water, the type of experiment (laboratory or field), the matrix subtype (type of water and soil), and temperature were the main factors included in the regression analysis. A higher average decline rate in soil of PEC compared with CEC was observed. The regression models explained at best 57% of the variation in decline rate in soil and 41% of the variation in decline rate in water. This indicates that additional factors, not included in the current meta-regression analysis, are of importance but rarely reported. More complete reporting of experimental conditions may allow future inference on the global effects of these variables on the decline rate of E. coli.
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
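One standard metric for the near-linear dependencies mentioned above is the variance inflation factor (VIF); a minimal sketch (the balance calibration terms are replaced by synthetic columns):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X: regress it on the
    remaining columns and report 1 / (1 - R^2)."""
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ beta
        r2 = 1 - (resid @ resid) / np.sum((X[:, j] - X[:, j].mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(6)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2, x3])
v = vif(X)
print(v.round(1))                      # x1 and x3 show large VIFs
```

A rule of thumb flags VIFs above roughly 10 as near-linear dependencies that should prompt removal or reparameterization of terms.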
Functional mixture regression.
Yao, Fang; Fu, Yuejiao; Lee, Thomas C M
2011-04-01
In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach named as functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR along with simulation experiments illustrating its empirical properties are presented in the supplementary material available at Biostatistics online. Corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research investigated the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. The data were from the multi-location rice trials carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
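A minimal sketch of the hybrid idea, fuzzy c-means to split the observations into two clusters followed by a separate linear fit per cluster, on synthetic two-regime data (the MARDI variates are not reproduced; the data-generating model is an assumption):

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    """1-D fuzzy c-means: returns the membership matrix U (n x c)
    and the cluster centers."""
    centers = np.quantile(x, np.linspace(0, 1, c))  # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (U ** m).T @ x / (U ** m).sum(axis=0)
    return U, centers

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(0, 10, n)
# two synthetic yield regimes, mimicking clustering before regression
y = np.where(x < 5, 2 + 0.5 * x, 10 + 1.5 * x) + rng.normal(0, 0.5, n)

U, centers = fuzzy_cmeans(y, c=2)
labels = U.argmax(axis=1)
fits = []
for k in range(2):
    mask = labels == k
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    fits.append((intercept, slope))
    print(f"cluster {k}: y = {intercept:.2f} + {slope:.2f} x")
```

Fitting one regression per fuzzy cluster lets each regime keep its own coefficients, which is where the hybrid's lower mean square error comes from.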
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
Advanced microwave soil moisture studies. [Big Sioux River Basin, Iowa
NASA Technical Reports Server (NTRS)
Dalsted, K. J.; Harlan, J. C.
1983-01-01
Comparisons of low level L-band brightness temperature (TB) and thermal infrared (TIR) data, as well as the following data sets: soil map and land cover data, direct soil moisture measurement, and a computer generated contour map, were statistically evaluated using regression analysis and linear discriminant analysis. Regression analysis of footprint data shows that statistical groupings of ground variables (soil features and land cover) hold promise for qualitative assessment of soil moisture and for reducing variance within the sampling space. Dry conditions appear to be more conducive to producing meaningful statistics than wet conditions. Regression analysis using field-averaged TB and TIR data did not approach the higher R2 values obtained using within-field variations. The linear discriminant analysis indicates some capacity to distinguish categories, with the results being somewhat better on a field basis than on a footprint basis.
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
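The second scale of prior information, best available estimates treated as fixed, can be sketched as penalized least squares that shrinks the coefficients toward the prior values (a simplified stand-in for the paper's formulation; the two auxiliary optimization parameters are collapsed here into a single penalty weight, and the prior is deliberately set to the truth to show the best case):

```python
import numpy as np

def regress_with_prior(X, y, b_prior, lam):
    """Penalized least squares pulling estimates toward prior values:
    minimize ||y - Xb||^2 + lam * ||b - b_prior||^2 (closed form)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p),
                           X.T @ y + lam * b_prior)

rng = np.random.default_rng(8)
n = 30                                          # few observations
X = rng.normal(size=(n, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=n)   # near-collinear pair
true_b = np.array([1.0, 2.0, 1.0])
y = X @ true_b + rng.normal(0, 0.5, n)

b_prior = np.array([1.0, 2.0, 1.0])             # "best available estimates"
ols = np.linalg.lstsq(X, y, rcond=None)[0]
prior_fit = regress_with_prior(X, y, b_prior, lam=5.0)
print("OLS:  ", ols.round(2))
print("prior:", prior_fit.round(2))
```

With near-collinear parameters, plain OLS is poorly determined; the prior term stabilizes the estimate at the cost of the bias the abstract describes.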
NASA Astrophysics Data System (ADS)
Li, Jiangtong; Luo, Yongdao; Dai, Honglin
2018-01-01
Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technique; however, in some special cases, PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) method was improved in this paper by using the principle of PLSR. The experimental results show that for some special experimental data sets, the improved PCR method performs better than PLSR. PCR and PLSR are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Second, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components are obtained. Finally, the same water spectral data set is analyzed by both PLSR and the improved PCR; the two results are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can therefore be used in UV spectral analysis of water, with the improved PCR preferred near the detection limit.
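PCR itself can be sketched in a few lines: PCA on the centered spectra, then ordinary least squares on the retained component scores (this is plain PCR on synthetic spectra-like data, not the paper's PLSR-guided component selection):

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: project centered X onto its top
    principal components, then run ordinary least squares on the scores."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T               # loadings of the kept components
    scores = Xc @ V
    gamma, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    beta = V @ gamma                      # back to original variable space
    return beta, y_mean - x_mean @ beta

rng = np.random.default_rng(9)
n, p = 100, 20                            # spectra-like: many correlated channels
base = rng.normal(size=(n, 3))            # three latent spectral factors
X = base @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
y = base[:, 0] - 0.5 * base[:, 1] + rng.normal(0, 0.1, n)

beta, intercept = pcr_fit(X, y, n_components=3)
pred = X @ beta + intercept
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"PCR R2 = {r2:.3f}")
```

Because the channels are highly correlated, a handful of components captures nearly all the predictive information, which is what makes PCR (and PLSR) natural for spectral data.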
Statistical methods for astronomical data with upper limits. II - Correlation and regression
NASA Technical Reports Server (NTRS)
Isobe, T.; Feigelson, E. D.; Nelson, P. I.
1986-01-01
Statistical methods for calculating correlations and regressions in bivariate censored data, where the dependent variable can have upper or lower limits, are presented. Cox's regression and the generalization of Kendall's rank correlation coefficient provide significance levels of correlations, and the EM algorithm, under the assumption of normally distributed errors, and its nonparametric analog using the Kaplan-Meier estimator, give estimates for the slope of a regression line. Monte Carlo simulations demonstrate that survival analysis is reliable in determining correlations between luminosities at different bands. Survival analysis is applied to CO emission in infrared galaxies, X-ray emission in radio galaxies, H-alpha emission in cooling cluster cores, and radio emission in Seyfert galaxies.
Time series regression studies in environmental epidemiology.
Bhaskaran, Krishnan; Gasparrini, Antonio; Hajat, Shakoor; Smeeth, Liam; Armstrong, Ben
2013-08-01
Time series regression studies have been widely used in environmental epidemiology, notably in investigating the short-term associations between exposures such as air pollution, weather variables or pollen, and health outcomes such as mortality, myocardial infarction or disease-specific hospital admissions. Typically, for both exposure and outcome, data are available at regular time intervals (e.g. daily pollution levels and daily mortality counts) and the aim is to explore short-term associations between them. In this article, we describe the general features of time series data, and we outline the analysis process, beginning with descriptive analysis, then focusing on issues in time series regression that differ from other regression methods: modelling short-term fluctuations in the presence of seasonal and long-term patterns, dealing with time varying confounding factors and modelling delayed ('lagged') associations between exposure and outcome. We finish with advice on model checking and sensitivity analysis, and some common extensions to the basic model.
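The core design described above, seasonal terms plus lagged exposure, can be sketched as an ordinary regression on a constructed design matrix (synthetic daily data; real analyses typically add spline trends and overdispersion handling):

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic daily series: seasonal cycle plus a short-term exposure effect
days = np.arange(730)
season = 10 * np.sin(2 * np.pi * days / 365)
exposure = rng.uniform(0, 50, 730)                  # e.g. daily pollution level
# outcome responds to exposure at lags 0 and 1, on top of the seasonal cycle
outcome = 100 + season + 0.30 * exposure
outcome[1:] += 0.15 * exposure[:-1]
outcome += rng.normal(0, 1, 730)

# Regression with seasonal harmonics and lagged exposure (distributed-lag style)
lag1 = np.concatenate([[0.0], exposure[:-1]])
X = np.column_stack([
    np.ones(730),
    np.sin(2 * np.pi * days / 365), np.cos(2 * np.pi * days / 365),
    exposure, lag1,
])
beta, *_ = np.linalg.lstsq(X[1:], outcome[1:], rcond=None)  # drop day 0 (no lag)
print("lag-0 effect:", round(beta[3], 3), " lag-1 effect:", round(beta[4], 3))
```

Modelling the seasonal cycle explicitly keeps the long-term pattern from confounding the short-term lagged associations, which is the central point of the time series regression approach.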
Schistosomiasis Breeding Environment Situation Analysis in Dongting Lake Area
NASA Astrophysics Data System (ADS)
Li, Chuanrong; Jia, Yuanyuan; Ma, Lingling; Liu, Zhaoyan; Qian, Yonggang
2013-01-01
Monitoring the environmental characteristics, such as vegetation and soil moisture, of the spatial/temporal distribution of Oncomelania hupensis (O. hupensis) is of vital importance to schistosomiasis prevention and control. In this study, the relationship between environmental factors derived from remotely sensed data and the density of O. hupensis was analyzed by a multiple linear regression model. Second, spatial analysis of the regression residual was investigated by the semi-variogram method. Third, the spatial analysis of the regression residual and the multiple linear regression model were both employed to estimate the spatial variation of O. hupensis density. Finally, the approach was used to monitor and predict the spatial and temporal variations of oncomelania in the Dongting Lake region, China. The areas of potential O. hupensis habitats were predicted, and the influence of the Three Gorges Dam (TGD) project on the density of O. hupensis was analyzed.
Assessing risk factors for periodontitis using regression
NASA Astrophysics Data System (ADS)
Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa
2013-10-01
Multivariate statistical analysis is indispensable for assessing the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. A multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL), yielding the regression coefficients together with the p-values from the respective significance tests. The classification of a case (individual) adopted in the logistic model was the extent of the destruction of periodontal tissues, defined by an Attachment Loss greater than or equal to 4 mm in at least 25% (AL≥4mm/≥25%) of the sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.
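The logistic-model step — coefficients, then odds ratios with 95% confidence intervals — can be sketched as follows. The risk factors and data are synthetic stand-ins, and the Newton-Raphson fitter is a minimal hand-rolled version, not the software the authors used:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical risk factors: a standardised continuous covariate and a 0/1 flag
age = rng.normal(0, 1, n)
smoker = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), age, smoker])
true_beta = np.array([-1.0, 0.5, 0.9])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

# Logistic regression fitted by Newton-Raphson on the log-likelihood
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1 - mu))[:, None])
    beta += np.linalg.solve(hess, grad)

# Odds ratios with 95% CIs from the inverse observed information
mu = 1 / (1 + np.exp(-X @ beta))
hess = X.T @ (X * (mu * (1 - mu))[:, None])
se = np.sqrt(np.diag(np.linalg.inv(hess)))
odds_ratios = np.exp(beta)
ci_low, ci_high = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
```

Exponentiating a coefficient and its confidence limits gives the odds ratio and its 95% interval, the association measures the abstract reports.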
Ecologic regression analysis and the study of the influence of air quality on mortality.
Selvin, S; Merrill, D; Wong, L; Sacks, S T
1984-01-01
This presentation focuses entirely on the use and evaluation of regression analysis applied to ecologic data as a method to study the effects of ambient air pollution on mortality rates. Using extensive national data on mortality, air quality and socio-economic status, regression analyses are used to study the influence of air quality on mortality. The analytic methods and data are selected in such a way that direct comparisons can be made with other ecologic regression studies of mortality and air quality. Analyses are performed using two types of geographic areas, age-specific mortality of both males and females, and three pollutants (total suspended particulates, sulfur dioxide and nitrogen dioxide). The overall results indicate that no persuasive evidence exists of a link between air quality and general mortality levels. Additionally, a lack of consistency between the present results and previously published work is noted. Overall, it is concluded that linear regression analysis applied to nationally collected ecologic data cannot be used to usefully infer a causal relationship between air quality and mortality, which is in direct contradiction to other major published studies. PMID:6734568
Regression Analysis with Dummy Variables: Use and Interpretation.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Oliver, J. Dale
1986-01-01
Multiple regression analysis (MRA) may be used when both continuous and categorical variables are included as independent research variables. The use of MRA with categorical variables involves dummy coding, that is, assigning zeros and ones to levels of categorical variables. Caution is urged in results interpretation. (Author/CH)
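Dummy coding as described can be illustrated with a small NumPy sketch; the three-level factor, its effect sizes, and the continuous covariate are invented for the example:

```python
import numpy as np

# Hypothetical data: a categorical variable with three levels plus one
# continuous predictor; dummy coding assigns 0/1 columns to all levels
# except a reference category ("low" here).
levels = ["low", "medium", "high"]
group = np.repeat(levels, 40)
x = np.linspace(0, 1, 120)
y = 2.0 + 1.0 * (group == "medium") + 3.0 * (group == "high") + 0.5 * x

# Dummy columns for the non-reference levels
D = np.column_stack([(group == g).astype(float) for g in ["medium", "high"]])
X = np.column_stack([np.ones(len(y)), D, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [intercept, medium-vs-low, high-vs-low, slope of x]
```

The caution urged in the abstract applies here: each dummy coefficient is a contrast against the omitted reference level, not an absolute group effect.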
Radio Propagation Prediction Software for Complex Mixed Path Physical Channels
2006-08-14
4.4.6. Applied Linear Regression Analysis in the Frequency Range 1-50 MHz. In order to construct a comprehensive numerical algorithm capable of
Declining Bias and Gender Wage Discrimination? A Meta-Regression Analysis
ERIC Educational Resources Information Center
Jarrell, Stephen B.; Stanley, T. D.
2004-01-01
The meta-regression analysis reveals a strong tendency for discrimination estimates to fall, although wage discrimination against women persists. The biasing effects of researchers' gender and of failing to correct for selection bias have weakened, and changes in the labor market have made such discrimination less important.
From Equal to Equivalent Pay: Salary Discrimination in Academia
ERIC Educational Resources Information Center
Greenfield, Ester
1977-01-01
Examines the federal statutes barring sex discrimination in employment and argues that the work of any two professors is comparable but not equal. Suggests using regression analysis to prove salary discrimination and discusses the legal justification for adopting regression analysis and the standard of comparable pay for comparable work.…
Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM
ERIC Educational Resources Information Center
Warner, Rebecca M.
2007-01-01
This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…
ERIC Educational Resources Information Center
Bates, Reid A.; Holton, Elwood F., III; Burnett, Michael F.
1999-01-01
A case study of learning transfer demonstrates the possible effect of influential observation on linear regression analysis. A diagnostic method that tests for violation of assumptions, multicollinearity, and individual and multiple influential observations helps determine which observation to delete to eliminate bias. (SK)
Anantha M. Prasad; Louis R. Iverson; Andy Liaw
2006-01-01
We evaluated four statistical models - Regression Tree Analysis (RTA), Bagging Trees (BT), Random Forests (RF), and Multivariate Adaptive Regression Splines (MARS) - for predictive vegetation mapping under current and future climate scenarios according to the Canadian Climate Centre global circulation model.
Ludbrook, John
2010-07-01
1. There are two reasons for wanting to compare measurers or methods of measurement. One is to calibrate one method or measurer against another; the other is to detect bias. Fixed bias is present when one method gives higher (or lower) values across the whole range of measurement. Proportional bias is present when one method gives values that diverge progressively from those of the other. 2. Linear regression analysis is a popular method for comparing methods of measurement, but the familiar ordinary least squares (OLS) method is rarely acceptable. The OLS method requires that the x values are fixed by the design of the study, whereas it is usual that both y and x values are free to vary and are subject to error. In this case, special regression techniques must be used. 3. Clinical chemists favour techniques such as major axis regression ('Deming's method'), the Passing-Bablok method or the bivariate least median squares method. Other disciplines, such as allometry, astronomy, biology, econometrics, fisheries research, genetics, geology, physics and sports science, have their own preferences. 4. Many Monte Carlo simulations have been performed to try to decide which technique is best, but the results are almost uninterpretable. 5. I suggest that pharmacologists and physiologists should use ordinary least products regression analysis (geometric mean regression, reduced major axis regression): it is versatile, can be used for calibration or to detect bias and can be executed by hand-held calculator or by using the loss function in popular, general-purpose, statistical software.
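Ordinary least products (reduced major axis / geometric mean) regression, recommended in point 5, has a closed form: the slope is the sign of the correlation times the ratio of the standard deviations. A minimal sketch on synthetic method-comparison data (the two "methods", their biases, and error levels are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two hypothetical methods measuring the same quantity, both subject to error
truth = rng.uniform(10, 100, 200)
method_x = truth + rng.normal(0, 5, 200)
method_y = 1.1 * truth + 2.0 + rng.normal(0, 5, 200)  # proportional + fixed bias

def olp_regression(x, y):
    """Ordinary least products (geometric mean / reduced major axis) regression:
    treats x and y symmetrically, unlike ordinary least squares."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

slope, intercept = olp_regression(method_x, method_y)
# OLS of y on x is attenuated toward zero by the error in x; OLP is not
ols_slope = np.polyfit(method_x, method_y, 1)[0]
```

The symmetry is the selling point: swapping x and y simply inverts the OLP slope, whereas OLS gives two incompatible lines.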
Kitagawa, Yasuhisa; Teramoto, Tamio; Daida, Hiroyuki
2012-01-01
We evaluated the impact of adherence to preferable behavior on serum lipid control, assessed by a self-reported questionnaire, in high-risk patients taking pravastatin for primary prevention of coronary artery disease. High-risk patients taking pravastatin were followed for 2 years. Questionnaire surveys comprising 21 questions, including 18 questions concerning awareness of health and the current status of diet, exercise, and drug therapy, were conducted at baseline and after 1 year. Potential domains were established by factor analysis from the results of the questionnaires, and adherence scores were calculated for each domain. The relationship between adherence scores and lipid values during the 1-year treatment period was analyzed for each domain using multiple regression analysis. A total of 5,792 patients taking pravastatin were included in the analysis. Multiple regression analysis showed a significant correlation in terms of "Intake of high fat/cholesterol/sugar foods" (regression coefficient -0.58, p=0.0105) and "Adherence to instructions for drug therapy" (regression coefficient -6.61, p<0.0001). Low-density lipoprotein cholesterol (LDL-C) values were significantly lower in patients who had an increase in the adherence score in the "Awareness of health" domain compared with those with a decreased score. There was a significant correlation between high-density lipoprotein (HDL-C) values and "Awareness of health" (regression coefficient 0.26; p=0.0037), "Preferable dietary behaviors" (regression coefficient 0.75; p<0.0001), and "Exercise" (regression coefficient 0.73; p=0.0002). Similar relations were seen with triglycerides. In patients who have a high awareness of their health, a positive attitude toward lipid-lowering treatment, including diet and exercise, together with high adherence to drug therapy, is related to favorable overall lipid control, even under treatment with pravastatin.
Mita, Tomoya; Katakami, Naoto; Shiraiwa, Toshihiko; Yoshii, Hidenori; Gosho, Masahiko; Shimomura, Iichiro; Watada, Hirotaka
2017-01-01
Background. The effect of dipeptidyl peptidase-4 (DPP-4) inhibitors on the regression of carotid IMT remains largely unknown. The present study aimed to clarify whether sitagliptin, a DPP-4 inhibitor, could regress carotid intima-media thickness (IMT) in insulin-treated patients with type 2 diabetes mellitus (T2DM). Methods. This is an exploratory analysis of a randomized trial in which we investigated the effect of sitagliptin on the progression of carotid IMT in insulin-treated patients with T2DM. Here, we compared the efficacy of sitagliptin treatment on the number of patients who showed regression of carotid IMT of ≥0.10 mm in a post hoc analysis. Results. The percentages of patients who showed regression of mean-IMT-CCA (28.9% in the sitagliptin group versus 16.4% in the conventional group, P = 0.022) and left max-IMT-CCA (43.0% in the sitagliptin group versus 26.2% in the conventional group, P = 0.007), but not right max-IMT-CCA, were higher in the sitagliptin treatment group than in the non-DPP-4 inhibitor treatment group. In multiple logistic regression analysis, sitagliptin treatment achieved significantly higher attainment of regression of ≥0.10 mm in mean-IMT-CCA and in right and left max-IMT-CCA compared to conventional treatment. Conclusions. Our data suggest that DPP-4 inhibitors were associated with the regression of carotid atherosclerosis in insulin-treated T2DM patients. This study has been registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN000007396).
A framework for longitudinal data analysis via shape regression
NASA Astrophysics Data System (ADS)
Fishbaugh, James; Durrleman, Stanley; Piven, Joseph; Gerig, Guido
2012-02-01
Traditional longitudinal analysis begins by extracting desired clinical measurements, such as volume or head circumference, from discrete imaging data. Typically, the continuous evolution of a scalar measurement is estimated by choosing a 1D regression model, such as kernel regression or fitting a polynomial of fixed degree. This type of analysis not only leads to separate models for each measurement, but there is no clear anatomical or biological interpretation to aid in the selection of the appropriate paradigm. In this paper, we propose a consistent framework for the analysis of longitudinal data by estimating the continuous evolution of shape over time as twice differentiable flows of deformations. In contrast to 1D regression models, one model is chosen to realistically capture the growth of anatomical structures. From the continuous evolution of shape, we can simply extract any clinical measurements of interest. We demonstrate on real anatomical surfaces that volume extracted from a continuous shape evolution is consistent with a 1D regression performed on the discrete measurements. We further show how the visualization of shape progression can aid in the search for significant measurements. Finally, we present an example on a shape complex of the brain (left hemisphere, right hemisphere, cerebellum) that demonstrates a potential clinical application for our framework.
Lin, Ying-Ting
2013-04-30
In single-cell chemical analysis, a tandem technique built from hardware is often used to first isolate and then detect the identities of interest: the first part separates the wanted chemicals from the bulk of the cell; the second part performs the actual detection of the important identities. To identify the key structural modifications around ligand binding, the present study aims to develop a cheminformatics counterpart of this tandem technique. A statistical regression and its outliers act as a computational technique for separation. A PPARγ (peroxisome proliferator-activated receptor gamma) agonist cellular system was subjected to such an investigation. Results show that this tandem regression-outlier analysis, or the prioritization of the context equations tagged with features of the outliers, is an effective regression technique of cheminformatics to detect key structural modifications, as well as their tendency of impact on ligand binding. The key structural modifications around ligand binding are effectively extracted or characterized out of cellular reactions, because molecular binding is the paramount factor in such a ligand cellular system and key structural modifications around ligand binding are expected to create outliers, which can then be captured by this tandem regression-outlier analysis.
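The generic "regression plus outliers as separation" idea can be illustrated with studentized residuals from a linear fit; the data and the single planted outlier below are synthetic, not the PPARγ system of the study:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = np.linspace(0, 10, n)
y = 1.5 * x + 4 + rng.normal(0, 1, n)
y[17] += 8.0                      # plant one gross outlier

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat (projection) matrix
resid = y - H @ y
s2 = resid @ resid / (n - X.shape[1])
# Internally studentized residuals account for each point's leverage
studentized = resid / np.sqrt(s2 * (1 - np.diag(H)))
outliers = np.where(np.abs(studentized) > 3)[0]
```

Points flagged by the residual criterion are "separated" from the bulk fit, playing the role the abstract assigns to the hardware separation step.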
Digression and Value Concatenation to Enable Privacy-Preserving Regression.
Li, Xiao-Bai; Sarkar, Sumit
2014-09-01
Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
New analysis methods to push the boundaries of diagnostic techniques in the environmental sciences
NASA Astrophysics Data System (ADS)
Lungaroni, M.; Murari, A.; Peluso, E.; Gelfusa, M.; Malizia, A.; Vega, J.; Talebzadeh, S.; Gaudio, P.
2016-04-01
In recent years, new and more sophisticated measurements have been at the basis of major progress in various disciplines related to the environment, such as remote sensing and thermonuclear fusion. To maximize the effectiveness of the measurements, new data analysis techniques are required. Basic data processing tasks, such as filtering and fitting, are of primary importance, since they can have a strong influence on the rest of the analysis. Although Support Vector Regression (SVR) is a method devised and refined at the end of the 1990s, a systematic comparison with more traditional non-parametric regression methods has never been reported. In this paper, a series of systematic tests is described, which indicates that SVR is a very competitive method of non-parametric regression that can usefully complement and often outperform more consolidated approaches. The performance of Support Vector Regression as a method of filtering is investigated first, comparing it with the most popular alternative techniques. Then Support Vector Regression is applied to the problem of non-parametric regression to analyse Lidar surveys for the environmental measurement of particulate matter due to wildfires. The proposed approach has given very positive results and provides new perspectives on the interpretation of the data.
ERIC Educational Resources Information Center
Werts, Charles E.; And Others
1979-01-01
It is shown how partial covariance, part and partial correlation, and regression weights can be estimated and tested for significance by means of a factor analytic model. Comparable partial covariance, correlations, and regression weights have identical significance tests. (Author)
A Simulation Investigation of Principal Component Regression.
ERIC Educational Resources Information Center
Allen, David E.
Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
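Principal component regression as discussed — regress the response on a few leading components to tame multicollinearity — can be sketched as follows; the two near-duplicate predictors are an invented worst case:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
# Two nearly collinear predictors: OLS coefficients become unstable, but
# regressing on the leading principal component stabilises the fit
z = rng.normal(0, 1, n)
X = np.column_stack([z + rng.normal(0, 0.01, n), z + rng.normal(0, 0.01, n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.1, n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# Principal components via SVD of the centred predictors
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                  # keep the leading component only
scores = Xc @ Vt[:k].T                 # component scores
gamma, *_ = np.linalg.lstsq(scores, yc, rcond=None)
beta_pcr = Vt[:k].T @ gamma            # back-transform to original predictors

pred = Xc @ beta_pcr + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Discarding the tiny trailing component removes the direction along which the predictors are entangled, which is exactly the "reduced precision of estimation" problem the abstract raises.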
ERIC Educational Resources Information Center
Jaccard, James; And Others
1990-01-01
Issues in the detection and interpretation of interaction effects between quantitative variables in multiple regression analysis are discussed. Recent discussions associated with problems of multicollinearity are reviewed in the context of the conditional nature of multiple regression with product terms. (TJH)
CATEGORICAL REGRESSION ANALYSIS OF ACUTE INHALATION TOXICITY DATA FOR HYDROGEN SULFIDE
Categorical regression is one of the tools offered by the U.S. EPA for derivation of acute reference exposures (AREs), which are dose-response assessments for acute exposures to inhaled chemicals. Categorical regression is used as a meta-analytical technique to calculate probabi...
Shi, K-Q; Zhou, Y-Y; Yan, H-D; Li, H; Wu, F-L; Xie, Y-Y; Braddock, M; Lin, X-Y; Zheng, M-H
2017-02-01
At present, there is no ideal model for predicting the short-term outcome of patients with acute-on-chronic hepatitis B liver failure (ACHBLF). This study aimed to establish and validate a prognostic model using classification and regression tree (CART) analysis. A total of 1047 patients from two separate medical centres with suspected ACHBLF were screened in the study, serving as the derivation cohort and the validation cohort, respectively. CART analysis was applied to predict the 3-month mortality of patients with ACHBLF. The accuracy of the CART model was tested using the area under the receiver operating characteristic curve, which was compared with the model for end-stage liver disease (MELD) score and a new logistic regression model. CART analysis identified four variables as prognostic factors of ACHBLF: total bilirubin, age, serum sodium and INR, and three distinct risk groups: low risk (4.2%), intermediate risk (30.2%-53.2%) and high risk (81.4%-96.9%). The new logistic regression model was constructed with four independent factors, including age, total bilirubin, serum sodium and prothrombin activity, by multivariate logistic regression analysis. The performance of the CART model (0.896) was similar to that of the logistic regression model (0.914, P=.382) and exceeded that of the MELD score (0.667, P<.001). The results were confirmed in the validation cohort. We have developed and validated a novel CART model superior to MELD for predicting three-month mortality of patients with ACHBLF. Thus, the CART model could facilitate medical decision-making and provide clinicians with a validated practical bedside tool for ACHBLF risk stratification. © 2016 John Wiley & Sons Ltd.
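The split search at the heart of CART can be illustrated with a single-node sketch; the "bilirubin" variable, its threshold, and the mortality rates below are hypothetical, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical cohort: mortality risk jumps when 'bilirubin' exceeds 12
bilirubin = rng.uniform(0, 30, 400)
died = (rng.random(400) < np.where(bilirubin > 12, 0.8, 0.1)).astype(float)

def best_split(x, y):
    """Find the threshold minimising weighted impurity (variance), the
    criterion CART applies recursively at every node of the tree."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_thr, best_imp = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        left, right = ys[:i], ys[i:]
        impurity = left.var() * len(left) + right.var() * len(right)
        if impurity < best_imp:
            best_thr, best_imp = (xs[i - 1] + xs[i]) / 2, impurity
    return best_thr

threshold = best_split(bilirubin, died)
```

Recursing on each side of the chosen threshold, with additional variables, yields the risk strata described in the abstract.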
An Analysis of San Diego's Housing Market Using a Geographically Weighted Regression Approach
NASA Astrophysics Data System (ADS)
Grant, Christina P.
San Diego County real estate transaction data was evaluated with a set of linear models calibrated by ordinary least squares and geographically weighted regression (GWR). The goal of the analysis was to determine whether the spatial effects assumed to be in the data are best studied globally with no spatial terms, globally with a fixed effects submarket variable, or locally with GWR. 18,050 single-family residential sales which closed in the six months between April 2014 and September 2014 were used in the analysis. Diagnostic statistics including AICc, R2, Global Moran's I, and visual inspection of diagnostic plots and maps indicate superior model performance by GWR as compared to both global regressions.
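The core GWR computation — a weighted least squares fit at each location, with weights decaying with distance — can be sketched as follows; the coordinates, the Gaussian kernel bandwidth, and the spatially varying price-size coefficient are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
# Hypothetical housing data on a 10 x 10 grid: the price response to
# dwelling size drifts smoothly from west (u = 0) to east (u = 10)
u = rng.uniform(0, 10, n)
v = rng.uniform(0, 10, n)
size = rng.uniform(50, 250, n)
local_slope = 1.0 + 0.2 * u                 # spatially varying coefficient
price = 30 + local_slope * size + rng.normal(0, 10, n)

def gwr_at(point, coords, X, y, bandwidth=2.0):
    """Weighted least squares at one location with a Gaussian distance
    kernel: the local fit GWR repeats at every regression point."""
    d2 = np.sum((coords - point) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

coords = np.column_stack([u, v])
X = np.column_stack([np.ones(n), size])
west = gwr_at(np.array([1.0, 5.0]), coords, X, price)
east = gwr_at(np.array([9.0, 5.0]), coords, X, price)
```

The two local fits recover different slopes, which is precisely the submarket variation a single global regression would average away.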
Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu
2017-01-01
The emerging membrane introduction mass spectrometry technique has been successfully used to detect benzene, toluene, ethyl benzene and xylene (BTEX), while overlapped spectra have unfortunately hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method to analyze mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis creates poor models with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison to the partial-least squares (PLS) model shows that the orthogonal partial-least squares (OPLS) regression exhibits an optimal performance with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, the OPLS exhibits a good recovery of 73.86 - 122.20% and relative standard deviation (RSD) of the repeatability of 1.14 - 4.87%. Thus, MIMS coupled with the OPLS regression provides an optimal approach for a quantitative BTEX mixture analysis in monitoring and predicting water pollution.
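A PLS regression of the kind compared above can be sketched with a hand-rolled NIPALS PLS1 for a single response; the six collinear "channels" stand in for overlapped spectra and are synthetic, not the BTEX data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 80, 6
# Hypothetical overlapped 'spectra': six collinear channels driven by
# two latent factors, plus a response depending on those same factors
f = rng.normal(0, 1, (n, 2))
loadings = rng.normal(0, 1, (2, p))
X = f @ loadings + rng.normal(0, 0.05, (n, p))
y = f @ np.array([1.0, -0.5]) + rng.normal(0, 0.05, n)

def pls1(X, y, n_components):
    """PLS1 via the NIPALS algorithm for a single response variable."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)           # weight vector
        t = Xc @ w                       # scores
        tt = t @ t
        p_ = Xc.T @ t / tt               # loadings
        q_ = yc @ t / tt
        Xc = Xc - np.outer(t, p_)        # deflate
        yc = yc - q_ * t
        W.append(w); P.append(p_); q.append(q_)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression vector in X space
    return B

B = pls1(X, y, n_components=2)
pred = (X - X.mean(0)) @ B + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the components are built to covary with the response, two of them suffice here even though the six channels are heavily collinear.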
Modeling vertebrate diversity in Oregon using satellite imagery
NASA Astrophysics Data System (ADS)
Cablk, Mary Elizabeth
Vertebrate diversity was modeled for the state of Oregon using a parametric approach to regression tree analysis. This exploratory data analysis effectively modeled the non-linear relationships between vertebrate richness and phenology, terrain, and climate. Phenology was derived from time-series NOAA-AVHRR satellite imagery for the year 1992 using two methods: principal component analysis and derivation of EROS Data Center greenness metrics. These two measures of spatial and temporal vegetation condition incorporated the critical temporal element in this analysis. The first three principal components were shown to contain spatial and temporal information about the landscape and discriminated phenologically distinct regions in Oregon. Principal components 2 and 3, six greenness metrics, elevation, slope, aspect, annual precipitation, and annual seasonal temperature difference were investigated as correlates of amphibians, birds, all vertebrates, reptiles, and mammals. The variation explained by the regression tree for each taxon was: amphibians (91%), birds (67%), all vertebrates (66%), reptiles (57%), and mammals (55%). Spatial statistics were used to quantify the pattern of each taxon and to assess the validity of the resulting predictions from the regression tree models. Regression tree analysis was relatively robust against spatial autocorrelation in the response data, and graphical results indicated the models were well fit to the data.
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
Understanding poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
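Poisson regression as described can be fitted by iteratively reweighted least squares; this synthetic-data sketch also computes the Pearson dispersion statistic relevant to the overdispersion check that motivates the negative binomial alternative:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
x = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.5, 0.7])
y = rng.poisson(np.exp(X @ true_beta))   # simulated count outcome

# Poisson regression (log link) fitted by Newton-Raphson / IRLS
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

# Pearson statistic per degree of freedom near 1 is consistent with the
# Poisson assumption; values well above 1 would suggest overdispersion
# and point toward a negative binomial model instead
mu = np.exp(X @ beta)
dispersion = np.sum((y - mu) ** 2 / mu) / (n - 2)
```

Exponentiated coefficients are rate ratios, the natural effect measure for count outcomes like those in the ENSPIRE example.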
Do Our Means of Inquiry Match our Intentions?
Petscher, Yaacov
2016-01-01
A key stage of the scientific method is the analysis of data, yet despite the variety of methods that are available to researchers they are most frequently distilled to a model that focuses on the average relation between variables. Although research questions are frequently conceived with broad inquiry in mind, most regression methods are limited in comprehensively evaluating how observed behaviors are related to each other. Quantile regression is a largely unknown yet well-suited analytic technique similar to traditional regression analysis, but allows for a more systematic approach to understanding complex associations among observed phenomena in the psychological sciences. Data from the National Education Longitudinal Study of 1988/2000 are used to illustrate how quantile regression overcomes the limitations of average associations in linear regression by showing that psychological well-being and sex each differentially relate to reading achievement depending on one’s level of reading achievement. PMID:27486410
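Quantile regression's median (tau = 0.5) case can be approximated by iteratively reweighted least squares on the absolute residuals. This is a minimal sketch on synthetic contaminated data, not the NELS analysis; production work would use a dedicated quantile regression routine:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[:10] += 40.0                     # heavy contamination in one tail

X = np.column_stack([np.ones(n), x])

def lad_irls(X, y, iters=60, eps=1e-4):
    """Median (least absolute deviations) regression via iteratively
    reweighted least squares -- the tau = 0.5 case of quantile regression."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

beta_lad = lad_irls(X, y)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The median fit tracks the bulk of the data while the mean (OLS) fit is dragged toward the contaminated tail, illustrating why modelling conditional quantiles rather than the average relation can change the conclusions.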
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2018-01-01
Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
2015-01-01
The primary outcome was hemorrhagic injury plus different PRBC transfusion volumes. We performed multivariate regression analysis using HRV metrics and routine vital signs to test the hypothesis that... The study sponsors did not have any role in the study design, data collection, analysis and interpretation of data, report writing, or the decision to...
Singh, Jagmahender; Pathak, R K; Chavali, Krishnadutt H
2011-03-20
Skeletal height estimation from regression analysis of eight sternal lengths in subjects from the Chandigarh zone of Northwest India is the topic of this study. Analysis of eight sternal lengths (length of manubrium, length of mesosternum, combined length of manubrium and mesosternum, total sternal length and the first four intercostal lengths of the mesosternum) measured from 252 male and 91 female sternums obtained at postmortems revealed that mean cadaver stature and sternal lengths were greater in North Indians and in males than in South Indians and females. Except for the intercostal lengths, all the sternal lengths were positively correlated with the stature of the deceased in both sexes (P < 0.001). Multiple regression analysis of the sternal lengths was found more useful than simple linear regression for stature estimation. Using multivariate regression analysis, the combined length of manubrium and mesosternum in both sexes, and the length of manubrium along with the 2nd and 3rd intercostal lengths of the mesosternum in males, were selected as the best estimators of stature. Nonetheless, the stature of males can be predicted with an SEE of 6.66 (R(2) = 0.16, r = 0.318) from the combination of MBL+BL_3+LM+BL_2, and in females, from MBL only, it can be estimated with an SEE of 6.65 (R(2) = 0.10, r = 0.318), whereas from the multiple regression analysis of the pooled data, stature can be estimated with an SEE of 6.97 (R(2) = 0.387, r = 0.575) from the combination of MBL+LM+BL_2+TSL+BL_3. The R(2) and F-ratio were found to be statistically significant for almost all the variables in both sexes, except the 4th intercostal length in males and the 2nd to 4th intercostal lengths in females. The 'major' sternal lengths were more useful than the 'minor' ones for stature estimation. The universal regression analysis used by Kanchan et al. [39], when applied to sternal lengths, gave satisfactory estimates of stature for males only; female stature was comparatively better estimated from simple linear regressions.
However, these are not proposed for subjects of known sex, as they underestimate male and overestimate female stature. Intercostal lengths were found to be poor estimators of stature (P < 0.05), and they also exhibit weaker correlation coefficients and higher standard errors of estimate. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) standard errors and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
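One way to see why ignoring inter-eye correlation biases standard errors is to compare naive OLS standard errors with cluster-robust ("sandwich") standard errors, a marginal-model-style correction with the subject as the cluster. The two-eye data below are synthetic, and the covariate is assumed constant within a person:

```python
import numpy as np

rng = np.random.default_rng(10)
n_subjects = 300
# Hypothetical two-eye data: a person-level covariate and strongly
# correlated errors between the two eyes of each subject
age = np.repeat(rng.normal(60, 8, n_subjects), 2)           # same for both eyes
person_effect = np.repeat(rng.normal(0, 1.0, n_subjects), 2)
refraction = 0.02 * age + person_effect + rng.normal(0, 0.3, 2 * n_subjects)

X = np.column_stack([np.ones(2 * n_subjects), age])
beta, *_ = np.linalg.lstsq(X, refraction, rcond=None)
resid = refraction - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Naive OLS standard errors: treat every eye as independent
s2 = resid @ resid / (len(refraction) - 2)
se_naive = np.sqrt(np.diag(s2 * XtX_inv))

# Cluster-robust (sandwich) standard errors with subject as the cluster
meat = np.zeros((2, 2))
for i in range(n_subjects):
    Xi = X[2 * i:2 * i + 2]
    ri = resid[2 * i:2 * i + 2]
    meat += Xi.T @ np.outer(ri, ri) @ Xi
se_cluster = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
```

With strongly correlated eyes, the clustered standard error is noticeably larger than the naive one, mirroring the underestimation the abstract reports for standard regression on both eyes.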
ERIC Educational Resources Information Center
Li, Spencer D.
2011-01-01
Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
Conjoint Analysis: A Study of the Effects of Using Person Variables.
ERIC Educational Resources Information Center
Fraas, John W.; Newman, Isadore
Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…