NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.
2014-01-01
High-resolution gridded daily data sets are essential for natural resource management and for analyzing climate change and its effects. This study evaluates the performance of 15 simple and complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested for two observation densities and for different rainfall amounts. We used rainfall data recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses using geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and in combination. Linear multiple regression (LMR) and geographically weighted regression (GWR), both with step-wise selection of covariables, were tested, as well as inverse distance weighting (IDW), kriging, and 3D thin-plate splines (TPS). The relative ranking of the techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models, but poorly for high station density; the opposite holds for IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
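Of the neighbour-based techniques compared above, inverse distance weighting is the simplest to state. The sketch below is a generic IDW implementation, not the study's code; the power-of-2 exponent and the kilometre coordinates are assumptions for illustration.

```python
import math

def idw(samples, x, y, power=2.0):
    """Inverse distance weighting: estimate the value at (x, y) from
    (xi, yi, value) samples; the interpolator is exact at sample points."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v  # coincident with an observation: return it directly
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Two hypothetical rain gauges 1 km apart, daily totals in mm.
gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
midpoint = idw(gauges, 0.5, 0.0)  # equal weights -> 15.0 mm
```

In practice the power parameter controls how local the estimate is, which is one reason IDW behaves differently for local versus large-scale events.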
ERIC Educational Resources Information Center
Pissanos, Becky W.; And Others
1983-01-01
Step-wise linear regressions were used to relate children's age, sex, and body composition to performance on basic motor abilities, including balance, speed, agility, power, coordination, and reaction time, and to health-related fitness items, including flexibility, muscle strength and endurance, and cardiovascular function. Eighty subjects were in…
Akbar, Jamshed; Iqbal, Shahid; Batool, Fozia; Karim, Abdul; Chan, Kim Wei
2012-01-01
Quantitative structure-retention relationships (QSRRs) have been developed successfully for naturally occurring phenolic compounds in a reversed-phase liquid chromatographic (RPLC) system. A total of 1519 descriptors were calculated from the optimized structures of the molecules using the MOPAC2009 and DRAGON software packages. The data set of 39 molecules was divided into training and external validation sets. For feature selection and mapping we used step-wise multiple linear regression (SMLR), unsupervised forward selection followed by step-wise multiple linear regression (UFS-SMLR) and artificial neural networks (ANN). Stable and robust models with significant predictive ability in terms of validation statistics were obtained, with no evidence of chance correlation. ANN models outperformed the other two approaches. The HNar, IDM, Mp, GATS2v, DISP and 3D-MoRSE (signals 22, 28 and 32) descriptors, based on van der Waals volume, electronegativity, mass and polarizability at the atomic level, were found to have significant effects on the retention times. The possible implications of these descriptors in RPLC are discussed. All models proved able to predict the retention times of phenolic compounds and showed remarkable validation, robustness, stability and predictive performance. PMID:23203132
Aerobic Fitness Does Not Contribute to Prediction of Orthostatic Intolerance
NASA Technical Reports Server (NTRS)
Convertino, Victor A.; Sather, Tom M.; Goldwater, Danielle J.; Alford, William R.
1986-01-01
Several investigations have suggested that orthostatic tolerance may be inversely related to aerobic fitness (VO(sub 2max)). To test this hypothesis, 18 males (age 29 to 51 yr) underwent both treadmill VO(sub 2max) determination and graded lower body negative pressure (LBNP) exposure to tolerance. VO(sub 2max) was measured during the last minute of a Bruce treadmill protocol. LBNP was terminated based on pre-syncopal symptoms, and LBNP tolerance (peak LBNP) was expressed as the cumulative product of LBNP and time (torr-min). Changes in heart rate, stroke volume, cardiac output, blood pressure and impedance rheographic indices of mid-thigh-leg fluid accumulation were measured at rest and during the final minute of LBNP. For all 18 subjects, mean (plus or minus SE) fluid accumulation index and leg venous compliance index at peak LBNP were 139 plus or minus 3.9 plus or minus 0.4 ml-torr-min(exp -2) x 10(exp 3), respectively. Pearson product-moment correlations and step-wise linear regression were used to investigate relationships with peak LBNP. Variables associated with endurance training, such as VO(sub 2max) and percent body fat, did not correlate significantly (P < 0.05) with peak LBNP and did not add sufficiently to the prediction of peak LBNP to be included in the step-wise regression model. The step-wise regression model included only fluid accumulation index, leg venous compliance index, and blood volume, and resulted in a squared multiple correlation coefficient of 0.978. These data do not support the hypothesis that orthostatic tolerance as measured by LBNP is lower in individuals with high aerobic fitness.
NASA Astrophysics Data System (ADS)
Dai, Xiaoqian; Tian, Jie; Chen, Zhe
2010-03-01
Parametric images can represent both the spatial distribution and the quantification of biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs), which propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. First, dynamic PET data are pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot by itself separate signal from noise. Second, volume-wise PCA is applied to the PET data. The signal is mostly represented by the first few principal components (PCs), while the noise is left in the subsequent PCs. Noise-reduced data are then obtained from the first few PCs by applying 'inverse PCA', and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original data set. Finally, the new data set is used to generate parametric images with the LLS estimation method. Compared with other noise-removal methods, the proposed method achieves high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
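The project-then-invert PCA idea can be illustrated in two dimensions, where the leading eigenvector of the 2x2 covariance matrix has a closed form. This is only a sketch of the principle; the method above operates volume-wise on 4-D dynamic PET data, and the function here is hypothetical.

```python
def first_pc_denoise(points):
    """Project 2-D points onto the leading principal component and map
    them back ('inverse PCA'), discarding variance along the second PC."""
    n = len(points)
    mx = sum(px for px, _ in points) / n
    my = sum(py for _, py in points) / n
    # entries of the 2x2 covariance matrix [[a, b], [b, c]]
    a = sum((px - mx) ** 2 for px, _ in points) / n
    c = sum((py - my) ** 2 for _, py in points) / n
    b = sum((px - mx) * (py - my) for px, py in points) / n
    # leading eigenvalue and eigenvector of [[a, b], [b, c]], closed form
    lam = 0.5 * (a + c) + ((0.5 * (a - c)) ** 2 + b * b) ** 0.5
    if abs(b) > 1e-12:
        vx, vy = b, lam - a
    else:  # covariance already diagonal: PC1 is an axis
        vx, vy = (1.0, 0.0) if a >= c else (0.0, 1.0)
    norm = (vx * vx + vy * vy) ** 0.5
    vx, vy = vx / norm, vy / norm
    denoised = []
    for px, py in points:
        score = (px - mx) * vx + (py - my) * vy  # coordinate along PC1
        denoised.append((mx + score * vx, my + score * vy))
    return denoised
```

Points that already lie on a line (pure signal) are reconstructed exactly; variance off that line (noise) is removed.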
Prediction of health levels by remote sensing
NASA Technical Reports Server (NTRS)
Rush, M.; Vernon, S.
1975-01-01
Measures of the environment derived from remote sensing were compared to census population/housing measures in their ability to discriminate among health status areas in two urban communities. Three hypotheses were developed to explore the relationships between environmental and health data. Univariate and multiple step-wise linear regression analyses were performed on data from two sample areas in Houston and Galveston, Texas. Environmental data gathered by remote sensing were found to equal or surpass census data in predicting rates of health outcomes. Remote sensing offers the advantages of data collection for any chosen area or time interval, flexibilities not allowed by the decennial census.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
NASA Astrophysics Data System (ADS)
Haris, A.; Nafian, M.; Riyanto, A.
2017-07-01
The Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) ranging in age from Paleocene to Miocene. In this study, seismic and well log data sets are integrated to determine the chalk sand distribution in the Danish North Sea field. The integration is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the multi-attribute analysis. The multi-attribute analysis derives linear and non-linear transformations among well log properties. For the linear model, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is built with probabilistic neural networks (PNN). The porosity estimated by PNN fits the well log data better than the SWR result. This is expected, since PNN performs non-linear regression, so the relationship between the attribute data and the predicted log can be optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% up to 30%.
Naval Research Logistics Quarterly. Volume 28. Number 3,
1981-09-01
denotes component-wise maximum. f has antitone (isotone) differences on C x D if for c1 < c2 and d1 < d2, … or negative correlations and linear or nonlinear regressions. Given are the moments to order two and, for special cases, the regression function and … data sets. We designate this bnb distribution as G - B - N(a, 0, v). The distribution admits only of positive correlation and linear regressions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clegg, Samuel M; Barefield, James E; Wiens, Roger C
2008-01-01
The ChemCam instrument on the Mars Science Laboratory (MSL) will include a laser-induced breakdown spectrometer (LIBS) to quantify major and minor elemental compositions. The traditional analytical chemistry approach to calibration curves for these data regresses a single diagnostic peak area against concentration for each element. This approach contrasts with a new multivariate method in which elemental concentrations are predicted by step-wise multiple regression analysis based on areas of a specific set of diagnostic peaks for each element. The method is tested on LIBS data from igneous and metamorphosed rocks. Between 4 and 13 partial regression coefficients are needed to describe each elemental abundance accurately (i.e., with a regression line of R{sup 2} > 0.9995 for the relationship between predicted and measured elemental concentration) for all major and minor elements studied. Validation plots suggest that the method is limited at present by the small data set, and will work best for prediction of concentration when a wide variety of compositions and rock types has been analyzed.
Heo, Yun Seok; Lee, Ho-Joon; Hassell, Bryan A; Irimia, Daniel; Toth, Thomas L; Elmoazzen, Heidi; Toner, Mehmet
2011-10-21
Oocyte cryopreservation has become an essential tool in the treatment of infertility by preserving oocytes for women undergoing chemotherapy. However, despite recent advances, pregnancy rates from all cryopreserved oocytes remain low. The inevitable use of the cryoprotectants (CPAs) during preservation affects the viability of the preserved oocytes and pregnancy rates either through CPA toxicity or osmotic injury. Current protocols attempt to reduce CPA toxicity by minimizing CPA concentrations, or by minimizing the volume changes via the step-wise addition of CPAs to the cells. Although the step-wise addition decreases osmotic shock to oocytes, it unfortunately increases toxic injuries due to the long exposure times to CPAs. To address limitations of current protocols and to rationally design protocols that minimize the exposure to CPAs, we developed a microfluidic device for the quantitative measurements of oocyte volume during various CPA loading protocols. We spatially secured a single oocyte on the microfluidic device, created precisely controlled continuous CPA profiles (step-wise, linear and complex) for the addition of CPAs to the oocyte and measured the oocyte volumetric response to each profile. With both linear and complex profiles, we were able to load 1.5 M propanediol to oocytes in less than 15 min and with a volumetric change of less than 10%. Thus, we believe this single oocyte analysis technology will eventually help future advances in assisted reproductive technologies and fertility preservation.
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
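For readers unfamiliar with step-wise selection, here is a minimal forward-selection sketch: greedily add the predictor that most improves R², stopping when the gain falls below an assumed entry threshold. Real step-wise procedures typically use F-tests and also allow variable removal; this is only an illustration of why noise variables can be selected.

```python
def ols_r2(X, y):
    """R^2 of the least-squares fit y ~ intercept + columns of X,
    solving the normal equations by Gauss-Jordan elimination."""
    n = len(y)
    A = [[1.0] + list(row) for row in X]  # prepend intercept column
    k = len(A[0])
    # augmented normal equations: (A^T A | A^T y)
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)]
         + [sum(A[i][p] * y[i] for i in range(n))] for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[col][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    beta = [M[i][k] / M[i][i] for i in range(k)]
    yhat = [sum(b * a for b, a in zip(beta, row)) for row in A]
    ybar = sum(y) / n
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - sse / sst

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection on R^2 improvement."""
    chosen, best_r2 = [], 0.0
    remaining = list(range(len(X[0])))
    while remaining:
        gains = [(ols_r2([[row[j] for j in chosen + [c]] for row in X], y), c)
                 for c in remaining]
        r2, c = max(gains)
        if r2 - best_r2 < min_gain:
            break
        chosen.append(c)
        remaining.remove(c)
        best_r2 = r2
    return chosen, best_r2

# y = 3*x0 + 2*x2; column 1 is irrelevant and should not be picked
X = [[1, 5, 2], [2, 3, 1], [3, 6, 2], [4, 2, 1], [5, 4, 2], [6, 1, 1]]
y = [7, 8, 13, 14, 19, 20]
picked, r2 = forward_stepwise(X, y)
```

With noisier data and many candidate columns, the same greedy rule will sometimes admit pure-noise predictors, which is the hazard the abstract (and the missing-data entry below) warns about.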
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
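The estimation step the chapter describes reduces, for simple linear regression, to a few sums. A minimal sketch (generic formulas, not the chapter's examples); note that the fitted line's R² equals r², which is part of the regression-ANOVA connection mentioned above.

```python
from statistics import fmean

def correlation_and_fit(xs, ys):
    """Pearson correlation r and the least-squares line y = a + b*x."""
    mx, my = fmean(xs), fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx               # slope
    a = my - b * mx             # intercept
    r = sxy / (sxx * syy) ** 0.5  # correlation; r*r is the model R^2
    return r, a, b

# perfectly linear toy data: y = 1 + 2x
r, a, b = correlation_and_fit([1, 2, 3, 4], [3, 5, 7, 9])
```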
Gadegbeku, Crystal A; Stillman, Phyllis Kreger; Huffman, Mark D; Jackson, James S; Kusek, John W; Jamerson, Kenneth A
2008-11-01
Recruitment of diverse populations into clinical trials remains challenging but is needed to fully understand disease processes and benefit the general public. Greater knowledge of key factors among ethnic and racial minority populations associated with the decision to participate in clinical research studies may facilitate recruitment and enhance the generalizability of study results. Therefore, during the recruitment phase of the African American Study of Kidney Disease and Hypertension (AASK) trial, we conducted a telephone survey, using validated questions, to explore potential facilitators and barriers of research participation among eligible candidates residing in seven U.S. locations. Survey responses included a range of characteristics and perceptions among participants and non-participants and were compared using bivariate and step-wise logistic regression analyses. One hundred forty-one respondents (70 trial participants and 71 non-participants) completed the survey. Trial participants and non-participants were similar in multiple demographic characteristics and shared similar views on discrimination, physician mistrust, and research integrity. Key group differences were related to their perceptions of the impact of their research participation. Participants associated enrollment with personal and societal health benefits, while non-participants were influenced by the health risks. In a step-wise logistic regression analysis, the most powerful significant positive predictors of participation were acknowledgement of health status as important in the enrollment decision (OR=4.54, p=0.006), employment (OR=3.12, p=0.05) and healthcare satisfaction (OR=2.12, p<0.01). Racially-based mistrust did not emerge as a negative predictor and subjects' decisions were not influenced by the race of the research staff.
In conclusion, these results suggest that health-related factors, and not psychosocial perceptions, have predominant influence on research participation among African Americans.
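The odds ratios quoted above come from logistic regression. As a simpler illustration of the odds-ratio scale itself, here is the 2x2-table version with made-up counts (not the study's data); the log-scale standard error is the usual Woolf approximation.

```python
import math

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 table, with the standard error of log(OR)."""
    or_ = (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)
    se_log = math.sqrt(1 / exposed_yes + 1 / exposed_no +
                       1 / unexposed_yes + 1 / unexposed_no)
    return or_, se_log

# hypothetical counts: 40/10 participated among employed, 20/30 among not
or_, se = odds_ratio(40, 10, 20, 30)  # OR = 6.0
```

An OR above 1 (e.g. the 4.54 reported for health-status acknowledgement) means the factor is associated with higher odds of participating.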
Belavý, Daniel L; Armbrecht, Gabriele; Blenk, Tilo; Bock, Oliver; Börst, Hendrikje; Kocakaya, Emine; Luhn, Franziska; Rantalainen, Timo; Rawer, Rainer; Tomasius, Frederike; Willnecker, Johannes; Felsenberg, Dieter
2016-02-01
We evaluated which aspects of neuromuscular performance are associated with bone mass, density, strength and geometry. 417 women aged 60-94 years were examined. Countermovement jump, sit-to-stand test, grip strength, forearm and calf muscle cross-sectional area, areal bone mineral content and density (aBMC and aBMD) at the hip and lumbar spine via dual X-ray absorptiometry, and measures of volumetric bone mineral content and density (vBMC and vBMD), bone geometry and section modulus at 4% and 66% of radius length and at 4%, 38% and 66% of tibia length via peripheral quantitative computed tomography (pQCT) were obtained. The first principal component of the neuromuscular variables was calculated to generate a summary neuromuscular variable. The percentage of total variance in bone parameters explained by the neuromuscular parameters was calculated. Step-wise regression was also performed. At all pQCT bone sites (radius, ulna, tibia, fibula), a greater percentage of total variance in measures of bone mass, cortical geometry and/or bone strength was explained by peak neuromuscular performance than for vBMD. Sit-to-stand performance did not relate strongly to bone parameters. No obvious differential in the explanatory power of neuromuscular performance was seen for DXA aBMC versus aBMD. In step-wise regression, bone mass, cortical morphology, and/or strength remained significant in relation to the first principal component of the neuromuscular variables. In no case was vBMD positively related to neuromuscular performance in the final step-wise regression models. Peak neuromuscular performance has a stronger relationship with leg and forearm bone mass and cortical geometry, as well as proximal forearm section modulus, than with vBMD. Copyright © 2015 Elsevier Inc. All rights reserved.
Developing a Study Orientation Questionnaire in Mathematics for primary school students.
Maree, Jacobus G; Van der Walt, Martha S; Ellis, Suria M
2009-04-01
The Study Orientation Questionnaire in Mathematics (Primary) is being developed as a diagnostic measure for South African teachers and counsellors to help primary school students improve their orientation towards the study of mathematics. In this study, participants were primary school students in the North-West Province of South Africa. During the standardisation in 2007, 1,013 students (538 boys: M age = 12.61; SD = 1.53; 555 girls: M age = 11.98; SD = 1.35; 10 missing values) were assessed. Factor analysis yielded three factors. Analysis also showed satisfactory reliability coefficients and item-factor correlations. Step-wise linear regression indicated that three factors (Mathematics anxiety, Study attitude in mathematics, and Study habits in mathematics) contributed significantly (R2 = .194) to predicting achievement in mathematics as measured by the Basic Mathematics Questionnaire (Primary).
Association of dentine hypersensitivity with different risk factors - a cross sectional study.
Vijaya, V; Sanjay, Venkataraam; Varghese, Rana K; Ravuri, Rajyalakshmi; Agarwal, Anil
2013-12-01
This study was done to assess the prevalence of dentine hypersensitivity (DH) and its associated risk factors. This epidemiological study was done among patients attending a dental college. A self-structured questionnaire along with clinical examination was used for assessment. Descriptive statistics were obtained and frequency distributions were compared using the Chi-square test at a p value <0.05. Step-wise multiple linear regression was also done to assess the frequency of DH with different factors. The study population comprised 655 participants of different age groups. Our study showed a prevalence of 55%, and DH was more common among males. Similarly, smokers and users of hard toothbrushes had more cases of DH. Step-wise multiple linear regression showed that the best predictor of DH was age, followed by smoking habit and type of toothbrush. The most aggravating factors were cold water (15.4%) and sweet foods (14.7%), whereas only 5% of the patients experienced it while brushing. A high level of dentine hypersensitivity was found in this study, more common among males, with a linear relationship to age, smoking and type of toothbrush. How to cite this article: Vijaya V, Sanjay V, Varghese RK, Ravuri R, Agarwal A. Association of Dentine Hypersensitivity with Different Risk Factors - A Cross Sectional Study. J Int Oral Health 2013;5(6):88-92.
Dynamic graphs, community detection, and Riemannian geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun
A community is a subset of a wider network whose members are more strongly connected to each other than to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations, such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g. the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
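The entry-wise baseline that the Riemannian methods are compared against is straightforward to state: interpolate each matrix entry independently, A(t) = (1 - t)·A0 + t·A1. A sketch with plain nested-list weight matrices (the Riemannian interpolants themselves are not reproduced here):

```python
def entrywise_interpolate(A0, A1, t):
    """Entry-wise linear interpolation between two graph weight
    (or adjacency) matrices for 0 <= t <= 1."""
    return [[(1 - t) * a + t * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(A0, A1)]

# midpoint between a single-edge graph and the empty graph
mid = entrywise_interpolate([[0.0, 1.0], [1.0, 0.0]],
                            [[0.0, 0.0], [0.0, 0.0]], 0.5)
```

One known drawback of this baseline is that it ignores any geometric structure of the space of graphs, which is what motivates the Riemannian treatment above.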
Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data*
NASA Astrophysics Data System (ADS)
Reegen, P.; Kallinger, T.; Frast, D.; Gruberbauer, M.; Huber, D.; Matthews, J. M.; Punz, D.; Schraml, S.; Weiss, W. W.; Kuschnig, R.; Moffat, A. F. J.; Walker, G. A. H.; Guenther, D. B.; Rucinski, S. M.; Sasselov, D.
2006-04-01
The MOST (Microvariability and Oscillations of Stars) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles. Astronomical photometry or imaging missions in low Earth orbits, like MOST, are especially sensitive to scattered light from Earthshine, and all these missions have a common need to extract target information from voluminous data cubes. They consist of upwards of hundreds of thousands of two-dimensional CCD frames (or subrasters) containing from hundreds to millions of pixels each, where the target information, superposed on background and instrumental effects, is contained only in a subset of pixels (Fabry Images, defocused images, mini-spectra). We describe a novel reduction technique for such data cubes: resolving linear correlations of target and background pixel intensities. This step-wise multiple linear regression removes only those target variations which are also detected in the background. The advantage of regression analysis versus background subtraction is the appropriate scaling, taking into account that the amount of contamination may differ from pixel to pixel. The multivariate solution for all pairs of target/background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to, e.g. stray light. The technique is tested and demonstrated with both simulated oscillation signals and real MOST photometry.
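The core of the decorrelation idea, for a single target/background pixel pair, is ordinary least-squares removal of the linearly correlated component. A one-pair sketch under that simplification (the real pipeline solves this step-wise and multivariately over all pixel pairs):

```python
from statistics import fmean

def decorrelate(target, background):
    """Remove from `target` the component linearly correlated with
    `background` (least-squares slope), keeping the target's mean level."""
    mt, mb = fmean(target), fmean(background)
    sxy = sum((b - mb) * (t - mt) for b, t in zip(background, target))
    sxx = sum((b - mb) ** 2 for b in background)
    slope = sxy / sxx
    return [t - slope * (b - mb) for t, b in zip(target, background)]

# a target series contaminated by a rising background (stray light)
cleaned = decorrelate([0.5, 2.0, 1.5, 3.0], [1.0, 2.0, 3.0, 4.0])
```

By construction, the residual series has zero sample correlation with the background, which is the appropriate-scaling advantage over plain background subtraction.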
Tibiofemoral contact forces during walking, running and sidestepping.
Saxby, David J; Modenese, Luca; Bryant, Adam L; Gerus, Pauline; Killen, Bryce; Fortin, Karine; Wrigley, Tim V; Bennell, Kim L; Cicuttini, Flavia M; Lloyd, David G
2016-09-01
First, we explored the tibiofemoral contact forces and the relative contributions of muscles and external loads to those contact forces during various gait tasks. Second, we assessed the relationships between external gait measures and contact forces. A calibrated electromyography-driven neuromusculoskeletal model estimated the tibiofemoral contact forces during walking (1.44±0.22 m s(-1)), running (4.38±0.42 m s(-1)) and sidestepping (3.58±0.50 m s(-1)) in healthy adults (n=60, 27.3±5.4 years, 1.75±0.11 m, 69.8±14.0 kg). Contact forces increased from walking (∼1-2.8 BW) to running (∼3-8 BW); sidestepping had the largest maximum total (8.47±1.57 BW) and lateral contact forces (4.3±1.05 BW), while running had the largest maximum medial contact forces (5.1±0.95 BW). Relative muscle contributions increased across gait tasks (up to 80-90% of medial contact forces) and peaked during running for lateral contact forces (∼90%). Knee adduction moment (KAM) had weak, gait task-specific relationships with tibiofemoral contact forces (all R(2)<0.36). Step-wise regression of multiple external gait measures strengthened relationships (0.20
VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.
Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro
2016-01-01
In healthy individuals, behavioral outcomes are highly associated with the variability in brain regional structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in MATLAB® and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed-effects models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps).
In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
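The per-voxel regression loop that packages like VoxelStats optimize can be sketched naively as follows. This is an illustration only (flat voxel lists, one covariate, slope maps); real use involves masks, image I/O, mixed models and far more voxels, which is why the computational cost matters.

```python
from statistics import fmean

def voxelwise_slopes(volumes, covariate):
    """Fit y = a + b*covariate independently at every voxel.
    `volumes` holds one flat voxel list per subject; returns the
    slope map (one slope per voxel) as a flat list."""
    mx = fmean(covariate)
    sxx = sum((x - mx) ** 2 for x in covariate)
    slopes = []
    for v in range(len(volumes[0])):
        ys = [vol[v] for vol in volumes]  # this voxel across subjects
        my = fmean(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(covariate, ys))
        slopes.append(sxy / sxx)
    return slopes

# three subjects, two voxels: voxel 0 tracks the covariate, voxel 1 is flat
smap = voxelwise_slopes([[0.0, 5.0], [2.0, 5.0], [4.0, 5.0]], [0.0, 1.0, 2.0])
```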
Handling Missing Data: Analysis of a Challenging Data Set Using Multiple Imputation
ERIC Educational Resources Information Center
Pampaka, Maria; Hutcheson, Graeme; Williams, Julian
2016-01-01
Missing data is endemic in much educational research. However, practices such as step-wise regression common in the educational research literature have been shown to be dangerous when significant data are missing, and multiple imputation (MI) is generally recommended by statisticians. In this paper, we provide a review of these advances and their…
Muradian, Kh K; Utko, N O; Mozzhukhina, T H; Pishel', I M; Litoshenko, O Ia; Bezrukov, V V; Fraĭfel'd, V E
2002-01-01
Correlation and regression relations between gaseous exchange, thermoregulation and mitochondrial protein content were analyzed by two- and three-dimensional statistics in mice. It was shown that pair-wise linear methods of analysis did not reveal any significant correlation between the parameters under examination. However, relations became evident with three-dimensional and non-linear modeling, for which the coefficients of multivariable correlation reached and even exceeded 0.7-0.8. Calculations based on partial differentiation of the multivariable regression equations allow us to conclude that, at certain values of VO2, VCO2 and body temperature, negative relations between the systems of gaseous exchange and thermoregulation become dominant.
ERIC Educational Resources Information Center
McCoy, John L.
Step-wise multiple regression and typological analysis were used to analyze the extent to which selected factors influence vertical mobility and achieved level of living. A sample of 418 male household heads who were 18 to 45 years old in Washington County, Mississippi were interviewed during 1971. A prescreening using census and local housing…
ERIC Educational Resources Information Center
Wendt, Jillian L.; Nisbet, Deanna L.
2017-01-01
This study examined the predictive relationship among international students' sense of community, perceived learning, and end-of-course grades in computer-mediated, U.S. graduate-level courses. The community of inquiry (CoI) framework served as the theoretical foundation for the study. Step-wise hierarchical multiple regression showed no…
Step-wise refolding of recombinant proteins.
Tsumoto, Kouhei; Arakawa, Tsutomu; Chen, Linda
2010-04-01
Protein refolding is still performed on a trial-and-error basis. Here we describe step-wise dialysis refolding, in which the denaturant concentration is altered in a step-wise fashion. This technology controls the folding pathway by adjusting the concentrations of the denaturant and other solvent additives to induce sequential folding or disulfide formation.
USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES
The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...
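The two-step approach described above can be sketched as follows; the data and the second-stage covariate (temperature) are hypothetical stand-ins, since the article's exact samples are not given. Step one regresses log10-transformed titer on time within each sample to obtain a decay rate; step two regresses the decay rates on an environmental variable.

```python
import numpy as np

rng = np.random.default_rng(1)
temps = np.array([5.0, 15.0, 25.0, 35.0])   # hypothetical storage temperatures (deg C)
days = np.arange(0, 30, 3, dtype=float)

# Step 1: within each sample, regress log10 titer on time to get a decay rate k.
decay_rates = []
for T in temps:
    k_true = 0.02 + 0.004 * T               # simulated: faster decay when warmer
    log10_titer = 6.0 - k_true * days + rng.normal(0, 0.05, days.size)
    slope, intercept = np.polyfit(days, log10_titer, 1)
    decay_rates.append(-slope)              # decay rate in log10 units/day
decay_rates = np.array(decay_rates)

# Step 2: regress the per-sample decay rates on temperature.
b, a = np.polyfit(temps, decay_rates, 1)    # k ~ a + b*T
print(f"decay rate ~ {a:.3f} + {b:.4f}*T log10 units/day")
```

Working on the log10 scale turns first-order inactivation into a straight line, which is what makes the first-stage regression linear.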
Aadal, Lena; Fog, Lisbet; Pedersen, Asger Roer
2016-12-01
Investigation of a possible relation between body temperature measurements made by the current generation of tympanic ear and rectal thermometers. In Denmark, a national guideline recommends rectal measurement. Consequently, rectal thermometers and tympanic ear devices are the most frequently used and the first choice in Danish hospital wards. Cognitive changes pose challenges for cooperating with rectal temperature assessments. With regard to diagnosis, ethics, safety and patients' dignity, the tympanic ear thermometer might be a desirable noninvasive alternative to rectal measurement of body temperature during in-hospital neurorehabilitation. A prospective, descriptive cohort study. Consecutive inclusion of 27 patients. Linear regression models were used to analyse 284 simultaneous temperature measurements. Ethical approval for this study was granted by the Danish Data Protection Agency, and the study was completed in accordance with the Helsinki Declaration 2008. In total, 284 simultaneous rectal and ear temperature measurements on 27 patients were analysed. The patient-wise variability of measured temperatures was significantly higher for the ear measurements. Patient-wise linear regressions for the 25 patients with at least three pairs of simultaneous ear and rectal temperature measurements showed large interpatient variability of the association. The linear relationship between rectal body temperature and temperature assessed with the tympanic thermometer is weak. Both measuring methods reflect variance in temperature, but ear measurements showed larger variation. © 2016 Nordic College of Caring Science.
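The patient-wise regression idea is easy to illustrate. The sketch below uses simulated paired measurements (hypothetical values, not the study's data) and fits one ordinary least-squares line per patient; the spread of the fitted slopes reflects the interpatient variability the authors report.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: paired rectal and tympanic readings for each patient.
slopes = []
for patient in range(25):
    n = rng.integers(3, 15)                      # at least 3 paired measurements
    rectal = rng.uniform(36.0, 39.0, n)
    bias = rng.normal(-0.3, 0.2)                 # patient-specific offset
    ear = rectal + bias + rng.normal(0, 0.4, n)  # ear readings vary more
    slope, _ = np.polyfit(rectal, ear, 1)        # per-patient regression line
    slopes.append(slope)
slopes = np.array(slopes)

# A large spread in patient-wise slopes signals a weak, inconsistent association.
print(f"slope mean {slopes.mean():.2f}, SD {slopes.std():.2f}")
```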
Ghasemi, Jahan B; Safavi-Sohi, Reihaneh; Barbosa, Euzébio G
2012-02-01
A quasi 4D-QSAR has been carried out on a series of potent Gram-negative LpxC inhibitors. This approach makes use of molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package. The methodology is based on the generation of a conformational ensemble profile (CEP) for each compound instead of only one conformation, followed by the calculation of intermolecular interaction energies at each grid point considering probes and all aligned conformations resulting from the MD simulations. These interaction energies are the independent variables employed in the QSAR analysis. The proposed methodology was compared to the comparative molecular field analysis (CoMFA) formalism. This methodology jointly explores the main features of CoMFA and 4D-QSAR models. Step-wise multiple linear regression was used to select the most informative variables. After variable selection, multiple linear regression (MLR) and partial least squares (PLS) methods were used to build the regression models. Leave-N-out cross-validation (LNO) and Y-randomization were performed to confirm the robustness of the model, in addition to analysis of the independent test set. The best models provided the following statistics: [Formula in text] (PLS) and [Formula in text] (MLR). A docking study was performed to investigate the major interactions in the protein-ligand complex with the CDOCKER algorithm. Visualization of the descriptors of the best model helps to interpret the model from a chemical point of view, supporting the applicability of this new approach in rational drug design.
Key Elements of Observing Practice: A Data Wise DVD and Facilitator's Guide
ERIC Educational Resources Information Center
Boudett, Kathryn Parker; City, Elizabeth A.; Russell, Marcia K.
2010-01-01
Based on the bestselling book "Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning", and its companion volume, "Data Wise in Action", this DVD and Facilitator's Guide offer insight into one of the most challenging steps in capturing data about school performance: observing and analyzing instructional…
A statistical model of expansion in a colony of black-tailed prairie dogs
R. P. Cincotta; Daniel W. Uresk; R. M. Hansen
1988-01-01
To predict prairie dog establishment in areas adjacent to a colony we sampled: (1) VISIBILITY through the vegetation using a target, (2) POPULATION DENSITY at the colony edge, (3) DISTANCE from the edge to the potential site of settlement, and (4) % FORB COVER. Step-wise regression analysis indicated that establishment of prairie dogs in adjacent prairie was most likely...
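Forward step-wise selection of the kind used here can be sketched with the four predictors named above; the data, effect sizes and the 1% stopping rule below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60

# Hypothetical predictors mirroring the four sampled variables.
data = {
    "visibility": rng.uniform(0, 1, n),
    "density": rng.uniform(0, 50, n),
    "distance": rng.uniform(0, 200, n),
    "forb_cover": rng.uniform(0, 40, n),
}
# Simulated response: an establishment index driven mainly by two predictors.
y = 2.0 * data["visibility"] - 0.01 * data["distance"] + rng.normal(0, 0.3, n)

def rss(cols):
    """Residual sum of squares of an OLS fit on the chosen columns."""
    X = np.column_stack([np.ones(n)] + [data[c] for c in cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return (r ** 2).sum()

# Forward step-wise selection: greedily add the predictor that lowers the RSS
# most, stopping when the relative improvement is small.
selected, remaining = [], list(data)
current = ((y - y.mean()) ** 2).sum()
while remaining:
    best = min(remaining, key=lambda c: rss(selected + [c]))
    new = rss(selected + [best])
    if (current - new) / current < 0.01:         # crude stopping rule
        break
    selected.append(best)
    remaining.remove(best)
    current = new
print(selected)
```

Classical step-wise procedures use F-tests rather than a fixed relative-improvement threshold; the greedy add-one-at-a-time structure is the same.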
Afolayan, A A
1985-09-01
"The paper sets out to test whether or not the movement pattern of people in Nigeria is step-wise. It examines the spatial order in the country and the movement pattern of people. It then analyzes the survey data and tests for the validity of step-wise migration in the country. The findings show that step-wise migration cannot adequately describe all the patterns observed." The presence of large-scale circulatory migration between rural and urban areas is noted. Ways to decrease the pressure on Lagos by developing intermediate urban areas are considered. excerpt
Development of a Bayesian model to estimate health care outcomes in the severely wounded
Stojadinovic, Alexander; Eberhardt, John; Brown, Trevor S; Hawksworth, Jason S; Gage, Frederick; Tadaki, Douglas K; Forsberg, Jonathan A; Davis, Thomas A; Potter, Benjamin K; Dunne, James R; Elster, E A
2010-01-01
Background: Graphical probabilistic models have the ability to provide insights as to how clinical factors are conditionally related. These models can be used to help us understand factors influencing health care outcomes and resource utilization, and to estimate morbidity and clinical outcomes in trauma patient populations. Study design: Thirty-two combat casualties with severe extremity injuries enrolled in a prospective observational study were analyzed using step-wise machine-learned Bayesian belief network (BBN) and step-wise logistic regression (LR). Models were evaluated using 10-fold cross-validation to calculate area-under-the-curve (AUC) from receiver operating characteristics (ROC) curves. Results: Our BBN showed important associations between various factors in our data set that could not be developed using standard regression methods. Cross-validated ROC curve analysis showed that our BBN model was a robust representation of our data domain and that LR models trained on these findings were also robust: hospital-acquired infection (AUC: LR, 0.81; BBN, 0.79), intensive care unit length of stay (AUC: LR, 0.97; BBN, 0.81), and wound healing (AUC: LR, 0.91; BBN, 0.72) showed strong AUC. Conclusions: A BBN model can effectively represent clinical outcomes and biomarkers in patients hospitalized after severe wounding, and is confirmed by 10-fold cross-validation and further confirmed through logistic regression modeling. The method warrants further development and independent validation in other, more diverse patient populations. PMID:21197361
[Has the pregnancy outcome of women with pregestational diabetes mellitus improved in ten years?].
Čechurová, Daniela; Krčma, Michal; Jankovec, Zdeněk; Dort, Jiří; Turek, Jan; Lacigová, Silvie; Rušavý, Zdeněk
2015-02-01
In spite of progress in medicine, studies from a number of countries indicate a steadily increased risk of perinatal morbidity and mortality in the offspring of diabetic mothers. No data regarding the pregnancy outcome of women with diabetes mellitus type 1 and 2 (pregestational DM) have been published in the Czech Republic. The aim of the study was to evaluate the pregnancy course of women with pregestational DM and the outcome of their offspring, and to assess whether it has improved in ten years. A retrospective evaluation of the pregnancy outcome of pregestational DM women followed up in the University Hospital Pilsen in the years 2000-2009 (Group A, n = 107) and comparison with the period 1990-1997 (Group B, n = 39) were performed. Wilcoxon non-paired test, contingency tables, step-wise logistic regression and step-wise linear multiple regression methods were used for statistical analyses. Data is presented as median (interquartile range). Women in Group A were older: 28 (25, 31) vs 25 (22, 27) years, p = 0.01. Otherwise, the groups did not statistically significantly differ in diabetes duration, BMI, and representation of women with type 2 diabetes. Better glycemic control (HbA1c, mmol/mol) was achieved in Group A in all trimesters - 1st trimester: 59 (47, 67) vs 66 (56, 76), 2nd trimester: 46 (40, 52) vs 54 (48, 59) and 3rd trimester: 46 (40, 51) vs 53 (47, 60), p = 0.01. The caesarean section rate decreased (65.2 % vs 87.5 %, p < 0.05). The incidence of the respiratory distress syndrome after adjustment for age and diabetes duration also decreased (8.9 % vs 18.2 %, p < 0.05). A decreasing trend in the rate of premature delivery before the 34th week of gestation (1.1 % vs 6.3 %) and neonatal mortality (1.1 % vs 2.9 %) was observed; however, the differences were not statistically significant. The improved glycemic control achieved led to only a partial improvement in the course of pregnancy and outcome of the offspring of diabetic mothers.
Dafsari, Haidar Salimi; Weiß, Luisa; Silverdale, Monty; Rizos, Alexandra; Reddy, Prashanth; Ashkan, Keyoumars; Evans, Julian; Reker, Paul; Petry-Schmelzer, Jan Niklas; Samuel, Michael; Visser-Vandewalle, Veerle; Antonini, Angelo; Martinez-Martin, Pablo; Ray-Chaudhuri, K; Timmermann, Lars
2018-02-24
Subthalamic nucleus (STN) deep brain stimulation (DBS) improves quality of life (QoL), motor, and non-motor symptoms (NMS) in advanced Parkinson's disease (PD). However, considerable inter-individual variability has been observed for QoL outcome. We hypothesized that demographic and preoperative NMS characteristics can predict postoperative QoL outcome. In this ongoing, prospective, multicenter study (Cologne, Manchester, London) including 88 patients, we collected the following scales preoperatively and on follow-up 6 months postoperatively: PDQuestionnaire-8 (PDQ-8), NMSScale (NMSS), NMSQuestionnaire (NMSQ), Scales for Outcomes in PD (SCOPA)-motor examination, -complications, and -activities of daily living, levodopa equivalent daily dose. We dichotomized patients into "QoL responders"/"non-responders" and screened for factors associated with QoL improvement with (1) Spearman-correlations between baseline test scores and QoL improvement, (2) step-wise linear regressions with baseline test scores as independent and QoL improvement as dependent variables, (3) logistic regressions using aforementioned "responders/non-responders" as dependent variable. All outcomes improved significantly on follow-up. However, approximately 44% of patients were categorized as "QoL non-responders". Spearman-correlations, linear and logistic regression analyses were significant for NMSS and NMSQ but not for SCOPA-motor examination. Post-hoc, we identified specific NMS (flat moods, difficulties experiencing pleasure, pain, bladder voiding) as significant contributors to QoL outcome. Our results provide evidence that QoL improvement after STN-DBS depends on preoperative NMS characteristics. These findings are important in the advising and selection of individuals for DBS therapy. Future studies investigating motor and non-motor PD clusters may enable stratifying QoL outcomes and help predict patients' individual prospects of benefiting from DBS. Copyright © 2018. 
Published by Elsevier Inc.
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, the kinetics of Li-Zn ferrite synthesis were studied using the thermogravimetry (TG) method through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained at four heating rates and the Netzsch Thermokinetics software package, kinetic models with minimal adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. It was shown that the experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a two-step scheme consisting of sequential reaction steps. The best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Brounshtein model for the second step. The kinetic parameters of the lithium-zinc ferrite synthesis reaction were determined and discussed.
Motunrayo Ibrahim, Fausat
2013-01-01
Background: Gardening is a worthwhile endeavor which engenders health optimization. Yet, there is a dearth of evidence highlighting motivations to engage in gardening. This study examined willingness to engage in gardening and its correlates, including socio-psychological, health-related and socio-demographic variables. Methods: In this cross-sectional survey, 508 copies of a structured questionnaire were randomly self-administered among a group of civil servants of Oyo State, Nigeria. Multi-item measures were used to assess variables. Step-wise multiple regression analysis was used to identify predictors of willingness to engage in gardening. Results: Simple percentile analysis shows that 71.1% of respondents do not own a garden. Results of step-wise multiple regression analysis indicate that the descriptive norm of gardening is a good predictor, social support for gardening is better, while gardening self-efficacy is the best predictor of willingness to engage in gardening (P< 0.001). Health consciousness, gardening response efficacy, education and age are not predictors of this willingness (P> 0.05). Results of t-test and ANOVA respectively show that gender is not associated with this willingness (P> 0.05), but marital status is (P< 0.05). Conclusion: Socio-psychological characteristics and being married are very relevant to motivations to engage in gardening. The nexus between gardening and health optimization appears to be highly obscured in this population. PMID:24688974
ERIC Educational Resources Information Center
Boudett, Kathryn Parker, Ed.; City, Elizabeth A., Ed.; Murnane, Richard J., Ed.
2013-01-01
"Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning" presents a clear and carefully tested blueprint for school leaders. It shows how examining test scores and other classroom data can become a catalyst for important schoolwide conversations that will enhance schools' abilities to capture…
Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning
ERIC Educational Resources Information Center
Boudett, Kathryn Parker, Ed.; City, Elizabeth, Ed.; Murnane, Richard, Ed.
2005-01-01
In the wake of the accountability movement, school administrators are inundated with data about their students. How can they use this information to support student achievement? "Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning" presents a clear and carefully tested blueprint for school leaders. It shows…
Local repair of stoma prolapse: Case report of an in vivo application of linear stapler devices.
Monette, Margaret M; Harney, Rodney T; Morris, Melanie S; Chu, Daniel I
2016-11-01
One of the most common late complications following stoma construction is prolapse. Although the majority of prolapses can be managed conservatively, surgical revision is required with incarceration/strangulation, and in certain cases laparotomy and/or stoma reversal are not appropriate. This report will inform surgeons on safe and effective approaches to revising prolapsed stomas using local techniques. A 58-year-old female with an obstructing rectal cancer had previously received a diverting transverse loop colostomy. On completion of neoadjuvant treatment, re-staging found new lung metastases. She was scheduled for further chemotherapy but developed an incarcerated prolapsed segment of her loop colostomy. As there was no plan to resect her primary rectal tumor at the time, a local revision was preferred. Linear staplers were applied to the prolapsed stoma in a step-wise fashion to locally revise the incarcerated prolapse. Post-operative recovery was satisfactory with no complications or recurrence of prolapse. We detail in a step-wise fashion a technique using linear stapler devices that can be used to locally revise prolapsed stoma segments and therefore avoid a laparotomy. The procedure is technically easy to perform with satisfactory post-operative outcomes. We additionally review all previous reports of local repairs and show the evolution of local prolapse repair to the currently reported technique. This report offers surgeons an alternative, efficient and effective option for addressing the complications of stoma prolapse. While future studies are needed to assess long-term outcomes, in the short term our report confirms the safety and effectiveness of this local technique.
Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C
2011-04-01
The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change relative to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included in this study. They were independently classified by two experienced investigators. The results of this classification served as the reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters and (d) point-wise linear regression analysis. 32.5 % of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7 % and 92.0 % for the r-test, and 73.1 % and 93.8 % for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5 % versus 92 %), but the sensitivity was clearly lower than in the r-test (22.4 % versus 89.7 %) at a significance level of p = 0.01. Regression analysis of the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7 %) than the t-test (14.1 %) at a similar specificity (88.3 % versus 93.8 %) for a significant trend (p = 0.005). With regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to the trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of the event analysis.
Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to VF clusters. © Georg Thieme Verlag KG Stuttgart · New York.
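Point-wise linear regression of a VF series can be sketched as below. The series, noise level and the ten "progressing" locations are simulated assumptions: each test location's sensitivity is regressed on exam number and flagged when its slope is negative and significant (|t| > 3.50 approximates two-sided p < 0.01 with 7 degrees of freedom for 9 exams).

```python
import numpy as np

rng = np.random.default_rng(4)
n_exams, n_points = 9, 59

# Hypothetical series: stable locations plus 10 progressing ones (-1.2 dB/exam).
exams = np.arange(n_exams, dtype=float)
sens = rng.uniform(25, 32, n_points)[None, :] + rng.normal(0, 1.5, (n_exams, n_points))
sens[:, :10] -= 1.2 * exams[:, None]

# Point-wise linear regression: slope and its standard error at each location.
x = exams - exams.mean()
Sxx = (x ** 2).sum()
progressing = []
for p in range(n_points):
    y = sens[:, p]
    slope = (x * y).sum() / Sxx
    resid = y - y.mean() - slope * x
    se = np.sqrt((resid ** 2).sum() / (n_exams - 2) / Sxx)
    if slope < 0 and abs(slope / se) > 3.50:     # roughly p < 0.01, 7 dof
        progressing.append(p)
print(progressing)
```

The low per-point power at this strict significance level is one reason the abstract reports the point-wise method's sensitivity lagging the global tests.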
Enhanced eumelanin emission by stepwise three-photon excitation
NASA Astrophysics Data System (ADS)
Kerimo, Josef; Rajadhyaksha, Milind; DiMarzio, Charles A.
2011-03-01
Eumelanin fluorescence from Sepia officinalis and black human hair was activated with near-infrared radiation and multiphoton excitation. A third-order multiphoton absorption via a step-wise process appears to be the underlying mechanism. The activation was caused by a photochemical process, since it could not be reproduced by simple heating. Both fluorescence and brightfield imaging indicate that the near-infrared irradiation caused photodamage to the eumelanin and that the activated emission originated from the photodamaged region. At least two different components with approximately thousand-fold enhanced fluorescence were activated and could be distinguished by their excitation properties. One component was excited with wavelengths in the visible region and exhibited a linear absorption dependence. The second component could be excited with near-infrared wavelengths and had a third-order dependence on the laser power. The third-order dependence is explained by a step-wise excited state absorption (ESA) process, since it could be observed equally with the CW and femtosecond lasers. This new method for photoactivating eumelanin fluorescence was used to map the melanin content in human hair.
Finite cohesion due to chain entanglement in polymer melts.
Cheng, Shiwang; Lu, Yuyuan; Liu, Gengxin; Wang, Shi-Qing
2016-04-14
Three different types of experiments, quiescent stress relaxation, delayed rate-switching during stress relaxation, and elastic recovery after step strain, are carried out in this work to elucidate the existence of a finite cohesion barrier against free chain retraction in entangled polymers. Our experiments show that there is little hastened stress relaxation from step-wise shear up to γ = 0.7 and step-wise extension up to the stretching ratio λ = 1.5 at any time before or after the Rouse time. In contrast, a noticeable stress drop stemming from the built-in barrier-free chain retraction is predicted using the GLaMM model. In other words, the experiment reveals a threshold magnitude of step-wise deformation below which the stress relaxation follows identical dynamics whereas the GLaMM or Doi-Edwards model indicates a monotonic acceleration of the stress relaxation dynamics as a function of the magnitude of the step-wise deformation. Furthermore, a sudden application of startup extension during different stages of stress relaxation after a step-wise extension, i.e. the delayed rate-switching experiment, shows that the geometric condensation of entanglement strands in the cross-sectional area survives beyond the reptation time τd that is over 100 times the Rouse time τR. Our results point to the existence of a cohesion barrier that can prevent free chain retraction upon moderate deformation in well-entangled polymer melts.
Brown, A M
2001-06-01
The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
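The same iterative least-squares idea can be reproduced outside a spreadsheet. The sketch below is a minimal damped Gauss-Newton fit of a hypothetical user-defined function (the saturating curve and its data are illustrative, not from the paper): at each iteration the function is linearized around the current parameters and the step is halved until the fit improves.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical user-defined function, y = a*(1 - exp(-b*x)).
def f(x, a, b):
    return a * (1.0 - np.exp(-np.clip(b, 1e-6, None) * x))

x = np.linspace(0, 10, 30)
y = f(x, 5.0, 0.6) + rng.normal(0, 0.1, x.size)

def sse(a, b):
    return ((y - f(x, a, b)) ** 2).sum()

# Iterative least squares (damped Gauss-Newton), the same idea SOLVER applies.
a, b = 1.0, 1.0                                  # starting guesses
for _ in range(100):
    r = y - f(x, a, b)                           # residuals
    J = np.column_stack([1.0 - np.exp(-b * x),   # df/da
                         a * x * np.exp(-b * x)])  # df/db
    da, db = np.linalg.lstsq(J, r, rcond=None)[0]
    t = 1.0
    while sse(a + t * da, b + t * db) > sse(a, b) and t > 1e-8:
        t /= 2                                   # step halving for stability
    a, b = a + t * da, b + t * db
print(f"a ~ {a:.2f}, b ~ {b:.2f}")
```

The step halving plays the role of SOLVER's internal safeguards: an undamped Gauss-Newton step can overshoot when the starting guess is far from the optimum.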
Validation of Regression-Based Myogenic Correction Techniques for Scalp and Source-Localized EEG
McMenamin, Brenton W.; Shackman, Alexander J.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.
2008-01-01
EEG and EEG source-estimation are susceptible to electromyographic artifacts (EMG) generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect - even in low frequencies, such as alpha (8–13 Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist and the utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference scores, within-subject condition-wise, and within-subject epoch-wise, on the scalp and in data modeled using the LORETA algorithm. Although the within-subject epoch-wise technique showed superior performance on the scalp, no technique succeeded in the source space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development. PMID:19298626
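The core of regression-based myogenic correction is a least-squares fit of the recorded signal on a myogenic reference, whose fitted contribution is then subtracted. The sketch below illustrates this on simulated signals (the 10 Hz sinusoid, the EMG gain and the sampling rate are all assumptions); the epoch-wise variant simply repeats this fit within each epoch.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000                                         # samples in one epoch

# Hypothetical signals: true alpha-band EEG plus a scaled EMG contamination.
true_eeg = np.sin(2 * np.pi * 10 * np.arange(n) / 1000)   # 10 Hz at 1 kHz rate
emg = rng.normal(0, 1, n)                        # broadband myogenic reference
observed = true_eeg + 0.8 * emg

# Regression-based correction: estimate the EMG gain by least squares and
# subtract the fitted myogenic component.
gain = (emg @ observed) / (emg @ emg)
corrected = observed - gain * emg
print(f"gain ~ {gain:.2f}")
```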
Compound Identification Using Penalized Linear Regression on Metabolomics
Liu, Ruiqi; Wu, Dongfeng; Zhang, Xiang; Kim, Seongho
2014-01-01
Compound identification is often achieved by matching the experimental mass spectra to mass spectra stored in a reference library based on mass spectral similarity. Because the number of compounds in the reference library is much larger than the range of mass-to-charge ratio (m/z) values, the data are high dimensional and suffer from singularity. For this reason, penalized linear regressions such as ridge regression and the lasso are used instead of ordinary least squares regression. Furthermore, two-step approaches using the dot product and Pearson's correlation along with the penalized linear regression are proposed in this study. PMID:27212894
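A minimal two-step sketch of this idea follows, using simulated sparse spectra (the library size, peak counts and ridge penalty are illustrative): the dot product shortlists candidates, and a ridge regression of the query on the shortlisted spectra, which sidesteps the singularity of ordinary least squares, picks the contributing compound.

```python
import numpy as np

rng = np.random.default_rng(6)
n_compounds, n_mz = 500, 80      # library much larger than the m/z range

# Hypothetical sparse reference spectra: 8 random peaks per compound.
L = np.zeros((n_compounds, n_mz))
for i in range(n_compounds):
    peaks = rng.choice(n_mz, size=8, replace=False)
    L[i, peaks] = rng.random(8)
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Query spectrum: a noisy copy of compound 42's spectrum.
q = L[42] + rng.normal(0, 0.02, n_mz)

# Step 1: dot-product similarity shortlists the top 20 candidates.
top = np.argsort(L @ q)[-20:]

# Step 2: ridge regression of the query on the shortlisted spectra; the
# penalty keeps the normal equations invertible, and the largest
# coefficient identifies the compound.
X = L[top].T
beta = np.linalg.solve(X.T @ X + 0.1 * np.eye(top.size), X.T @ q)
identified = int(top[np.argmax(beta)])
print(identified)
```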
GWAS with longitudinal phenotypes: performance of approximate procedures
Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel
2015-01-01
Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
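The second step of the CTS approach, regressing estimated random slopes on each SNP, reduces to many simple linear regressions, which can be computed semi-parallel in one matrix pass. The sketch below simulates the step-1 output (the slopes, genotype distribution and effect size are assumptions, not the Rotterdam Study data).

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_snps = 1000, 2000

# Hypothetical step-1 output: per-subject random-slope estimates from a
# conditional LMM fitted once without SNP terms; SNP 0 is simulated as causal.
snps = rng.integers(0, 3, (n_subjects, n_snps)).astype(float)  # genotypes 0/1/2
slopes = 0.3 * snps[:, 0] + rng.normal(0, 1, n_subjects)

# Step 2, semi-parallel: simple regression of the slopes on every SNP at once.
G = snps - snps.mean(axis=0)                     # center genotypes
y = slopes - slopes.mean()
Sxx = (G ** 2).sum(axis=0)
beta = G.T @ y / Sxx                             # all SNP effects in one pass
resid_var = ((y ** 2).sum() - beta ** 2 * Sxx) / (n_subjects - 2)
t = beta / np.sqrt(resid_var / Sxx)
print(int(np.argmax(np.abs(t))))
```

Replacing millions of per-SNP model fits with a handful of matrix products is what reduces the runtime from weeks to minutes in the authors' account.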
Fast and robust group-wise eQTL mapping using sparse graphical models.
Cheng, Wei; Shi, Yu; Zhang, Xiang; Wang, Wei
2015-01-16
Genome-wide expression quantitative trait loci (eQTL) studies have emerged as a powerful tool to understand the genetic basis of gene expression and complex traits. The traditional eQTL methods focus on testing the associations between individual single-nucleotide polymorphisms (SNPs) and gene expression traits. A major drawback of this approach is that it cannot model the joint effect of a set of SNPs on a set of genes, which may correspond to hidden biological pathways. We introduce a new approach to identify novel group-wise associations between sets of SNPs and sets of genes. Such associations are captured by hidden variables connecting SNPs and genes. Our model is a linear-Gaussian model and uses two types of hidden variables. One captures the set associations between SNPs and genes, and the other captures confounders. We develop an efficient optimization procedure which makes this approach suitable for large scale studies. Extensive experimental evaluations on both simulated and real datasets demonstrate that the proposed methods can effectively capture both individual and group-wise signals that cannot be identified by the state-of-the-art eQTL mapping methods. Considering group-wise associations significantly improves the accuracy of eQTL mapping, and the successful multi-layer regression model opens a new approach to understand how multiple SNPs interact with each other to jointly affect the expression level of a group of genes.
A simplified competition data analysis for radioligand specific activity determination.
Venturino, A; Rivera, E S; Bergoc, R M; Caro, R A
1990-01-01
Non-linear regression and two-step linear fit methods were developed to determine the actual specific activity of 125I-ovine prolactin by radioreceptor self-displacement analysis. The experimental results obtained by the different methods are superposable. The non-linear regression method is considered to be the most adequate procedure to calculate the specific activity, but if its software is not available, the other described methods are also suitable.
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
Schupf, Nicole; Lee, Annie; Park, Naeun; Dang, Lam-Ha; Pang, Deborah; Yale, Alexander; Oh, David Kyung-Taek; Krinsky-McHale, Sharon J; Jenkins, Edmund C; Luchsinger, José A; Zigman, Warren B; Silverman, Wayne; Tycko, Benjamin; Kisselev, Sergey; Clark, Lorraine; Lee, Joseph H
2015-10-01
We examined the contribution of candidate genes for Alzheimer's disease (AD) to individual differences in levels of beta amyloid peptides in adults with Down syndrome, a population at high risk for AD. Participants were 254 non-demented adults with Down syndrome, 30-78 years of age. Genomic deoxyribonucleic acid was genotyped using an Illumina GoldenGate custom array. We used linear regression to examine differences in levels of Aβ peptides associated with the number of risk alleles, adjusting for age, sex, level of intellectual disability, race and/or ethnicity, and the presence of the APOE ε4 allele. For Aβ42 levels, the strongest gene-wise association was found for a single nucleotide polymorphism (SNP) in CALHM1; for Aβ40 levels, the strongest gene-wise associations were found for SNPs in IDE and SOD1, while the strongest gene-wise associations with the Aβ42/Aβ40 ratio were found for SNPs in SORCS1. Broadly classified, variants in these genes may influence amyloid precursor protein processing (CALHM1, IDE), vesicular trafficking (SORCS1), and response to oxidative stress (SOD1). Copyright © 2015 Elsevier Inc. All rights reserved.
Spin-Based Lattice-Gas Quantum Computers in Solids Using Optical Addressing
2007-04-30
excitation spectra recorded while changing the applied electric field step-wise from zero to 0.3 MV m⁻¹. The resulting spectral trails give an overview of...optics (Wiley, New York, 1984)...to a few MV m⁻¹, the Stark shift of the optical resonance is well fitted by linear and quadratic dependences, Δν = aF + bF²...effect with a = -6.3 GHz/(MV m⁻¹), which corresponds to Δμ = 1.3 D (1 Debye = 3.33 × 10⁻³⁰ C m). This value is very similar to values found in other cases
NASA Technical Reports Server (NTRS)
Lent, P. C. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Step-wise discriminant analysis has demonstrated the feasibility of feature identification using linear discriminant functions of ERTS-1 MSS band densities and their ratios. The analysis indicated that features such as small streams can be detected even when they lie in dark mountain shadow. The potential utility of this and similar analytic techniques appears considerable, and the limits to which they can be applied in the analysis of ERTS-1 imagery are not yet fully known.
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
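Outside the programs reviewed above, the OLP point estimates themselves (slope magnitude sd(y)/sd(x), sign taken from the correlation) and a percentile-bootstrap CI take only a few lines. The sketch below is a minimal NumPy illustration with hypothetical function names, not the smatr implementation.

```python
import numpy as np

def olp_fit(x, y):
    """Ordinary least products (geometric mean) regression:
    |slope| = sd(y)/sd(x), with the sign of the correlation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    return slope, y.mean() - slope * x.mean()

def olp_slope_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the OLP slope."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = [olp_fit(x[idx], y[idx])[0]
              for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return tuple(np.percentile(slopes, [100 * alpha / 2, 100 * (1 - alpha / 2)]))
```

On exact data y = 2x + 1, olp_fit returns slope 2 and intercept 1; and unlike Model I least squares, regressing x on y gives exactly the reciprocal slope.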
Meeting Wise: Making the Most of Collaborative Time for Educators
ERIC Educational Resources Information Center
Boudett, Kathryn Parker; City, Elizabeth A.
2014-01-01
This book, by two editors of "Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning," attempts to bring about a fundamental shift in how educators think about the meetings we attend. They make the case that these gatherings are potentially the most important venue where adult and organizational…
Melanin fluorescence spectra by step-wise three photon excitation
NASA Astrophysics Data System (ADS)
Lai, Zhenhua; Kerimo, Josef; DiMarzio, Charles A.
2012-03-01
Melanin is the characteristic chromophore of human skin, with various potential biological functions. Kerimo discovered enhanced melanin fluorescence by step-wise three-photon excitation in 2011. In this article, the step-wise three-photon excited fluorescence (STPEF) spectrum of melanin between 450 and 700 nm is reported. The melanin STPEF spectrum exhibited an exponential increase with wavelength. However, there was a probability of about 33% that another kind of step-wise multi-photon excited fluorescence (SMPEF), peaking at 525 nm as shown by previous research, could also be generated using the same process. Using an excitation source at 920 nm as opposed to 830 nm increased the probability of generating the SMPEF peak at 525 nm. The SMPEF spectrum peaking at 525 nm photobleached faster than the STPEF spectrum.
Impact of Preadmission Variables on USMLE Step 1 and Step 2 Performance
ERIC Educational Resources Information Center
Kleshinski, James; Khuder, Sadik A.; Shapiro, Joseph I.; Gold, Jeffrey P.
2009-01-01
Purpose: To examine the predictive ability of preadmission variables on United States Medical Licensing Examinations (USMLE) step 1 and step 2 performance, incorporating the use of a neural network model. Method: Preadmission data were collected on matriculants from 1998 to 2004. Linear regression analysis was first used to identify predictors of…
Socio-economic variables influencing mean age at marriage in Karnataka and Kerala.
Prakasam, C P; Upadhyay, R B
1985-01-01
"In this paper an attempt was made to study the influence of certain socio-economic variables on the male and the female age at marriage in Karnataka and Kerala [India] for the year 1971. Step-wise regression method has been used to select the predictor variables influencing mean age at marriage. The results reveal that percent female literate...and percent female in labour force...are found to influence female mean age at marriage in Kerala, while the variables for Karnataka were percent female literate..., percent male literate..., and percent urban male population...." excerpt
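The step-wise predictor selection used in this and several other studies above is, in its forward form, a simple loop: add the candidate with the largest partial F statistic until none passes an entry threshold. Below is a minimal NumPy sketch under that convention; the function names and the threshold are illustrative, not taken from any of the papers.

```python
import numpy as np

def ols_rss(X, y):
    """Residual sum of squares of a least-squares fit of y on X plus intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return r @ r

def forward_stepwise(X, y, names, f_in=4.0):
    """Forward step-wise selection by largest partial F, with entry threshold f_in."""
    selected, remaining = [], list(range(X.shape[1]))
    rss = float(np.sum((y - y.mean()) ** 2))
    while remaining:
        trials = []
        for j in remaining:
            rss_j = ols_rss(X[:, selected + [j]], y)
            df = len(y) - len(selected) - 2          # residual df after adding j
            trials.append(((rss - rss_j) / (rss_j / df), j, rss_j))
        f, j, rss_j = max(trials)
        if f < f_in:
            break
        selected.append(j); remaining.remove(j); rss = rss_j
    return [names[j] for j in selected]

# Demo on simulated data: two real predictors among five candidates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(forward_stepwise(X, y, ["x0", "x1", "x2", "x3", "x4"]))
```

Forward selection is greedy, so it shares the well-known caveats of step-wise methods (inflated significance, order dependence); it is shown here only to make the procedure concrete.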
Construction of the Second Quito Astrolabe Catalogue
NASA Astrophysics Data System (ADS)
Kolesnik, Y. B.
1994-03-01
A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, with additional parameters introduced in the step-wise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those obtained by the rigorous classical method. Some improvement in both systematic and random errors is outlined.
Relationships between locus of control and paranormal beliefs.
Newby, Robert W; Davis, Jessica Boyette
2004-06-01
The present study investigated the associations between scores on paranormal beliefs, locus of control, and certain psychological processes such as affect and cognitions as measured by the Linguistic Inquiry and Word Count. Analysis yielded significant correlations between scores on Locus of Control and two subscales of Tobacyk's (1988) Revised Paranormal Beliefs Scale, New Age Philosophy and Traditional Paranormal Beliefs. A step-wise multiple regression analysis indicated that Locus of Control was significantly related to New Age Philosophy. Other correlations were found between Tobacyk's subscales, Locus of Control, and three processes measured by the Linguistic Inquiry and Word Count.
Welch, Thomas R; Olson, Brad G; Nelsen, Elizabeth; Beck Dallaghan, Gary L; Kennedy, Gloria A; Botash, Ann
2017-09-01
To determine whether training site or prior examinee performance on the US Medical Licensing Examination (USMLE) step 1 and step 2 might predict pass rates on the American Board of Pediatrics (ABP) certifying examination. Data from graduates of pediatric residency programs completing the ABP certifying examination between 2009 and 2013 were obtained. For each, results of the initial ABP certifying examination were obtained, as well as results on National Board of Medical Examiners (NBME) step 1 and step 2 examinations. Hierarchical linear modeling was used to nest first-time ABP results within training programs to isolate program contribution to ABP results while controlling for USMLE step 1 and step 2 scores. Stepwise linear regression was then used to determine which of these examinations was a better predictor of ABP results. A total of 1110 graduates of 15 programs had complete testing results and were subject to analysis. Mean ABP scores for these programs ranged from 186.13 to 214.32. The hierarchical linear model suggested that the interaction of step 1 and 2 scores predicted ABP performance (F[1,1007.70] = 6.44, P = .011). By conducting a multilevel model by training program, both USMLE step examinations predicted first-time ABP results (b = .002, t = 2.54, P = .011). Linear regression analyses indicated that step 2 results were a better predictor of ABP performance than step 1 or a combination of the two USMLE scores. Performance on the USMLE examinations, especially step 2, predicts performance on the ABP certifying examination. The contribution of training site to ABP performance was statistically significant, though contributed modestly to the effect compared with prior USMLE scores. Copyright © 2017 Elsevier Inc. All rights reserved.
Estimating VO2max Using a Personalized Step Test
ERIC Educational Resources Information Center
Webb, Carrie; Vehrs, Pat R.; George, James D.; Hager, Ronald
2014-01-01
The purpose of this study was to develop a step test with a personalized step rate and step height to predict cardiorespiratory fitness in 80 college-aged males and females using the self-reported perceived functional ability scale and data collected during the step test. Multiple linear regression analysis yielded a model (R = 0.90, SEE = 3.43…
Choosing wisely: prevalence and correlates of low-value health care services in the United States.
Colla, Carrie H; Morden, Nancy E; Sequist, Thomas D; Schpero, William L; Rosenthal, Meredith B
2015-02-01
Specialty societies in the United States identified low-value tests and procedures that contribute to waste and poor health care quality via implementation of the American Board of Internal Medicine Foundation's Choosing Wisely initiative. To develop claims-based algorithms, to use them to estimate the prevalence of select Choosing Wisely services and to examine the demographic, health and health care system correlates of low-value care at a regional level. Using Medicare data from 2006 to 2011, we created claims-based algorithms to measure the prevalence of 11 Choosing Wisely-identified low-value services and examined geographic variation across hospital referral regions (HRRs). We created a composite low-value care score for each HRR and used linear regression to identify regional characteristics associated with more intense use of low-value services. Fee-for-service Medicare beneficiaries over age 65. Prevalence of selected Choosing Wisely low-value services. The national average annual prevalence of the selected Choosing Wisely low-value services ranged from 1.2% (upper urinary tract imaging in men with benign prostatic hyperplasia) to 46.5% (preoperative cardiac testing for low-risk, non-cardiac procedures). Prevalence across HRRs varied significantly. Regional characteristics associated with higher use of low-value services included greater overall per capita spending, a higher specialist to primary care ratio and higher proportion of minority beneficiaries. Identifying and measuring low-value health services is a prerequisite for improving quality and eliminating waste. Our findings suggest that the delivery of wasteful and potentially harmful services may be a fruitful area for further research and policy intervention for HRRs with higher per-capita spending. 
These findings should inform action by physicians, health systems, policymakers, payers and consumer educators to improve the value of health care by targeting services and areas with greater use of potentially inappropriate care.
Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki
2017-05-01
The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using the regression coefficients. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS test, and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P < 0.01), though not a significantly higher correlation than the TUG test time itself. We showed which TUG test parameters were associated with each motor function test. Moreover, the TUG test time, regarded as a measure of lower extremity function and mobility, has strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Anvari, S. R.; Monirvaghefi, S. M.; Enayati, M. H.
2015-06-01
In this study, step-wise multilayer and functionally graded Ni-P coatings were deposited by electroless plating, with the phosphorus content changed step-wise or gradually, respectively, through the thickness of the coatings. To compare the properties of these coatings with single-layer Ni-P coatings, three types of coatings with different phosphorus contents were deposited. Heat treatment of the coatings was performed at 400 °C for 1 h. The microstructure and phase transformations of the coatings were characterized by SEM/EDS, TEM, and XRD, and the mechanical properties were studied by nanoindentation. Among the single-layer coatings, the low-P coating had the maximum hardness as well as the maximum ratio of hardness (H) to elastic modulus (E). The low-P and medium-P coatings had crystalline and semi-crystalline structures, respectively, with a <111> texture that did not change after heat treatment, while the high-P coating had an amorphous structure that crystallized on heat treatment with a <100> texture of the nickel grains. Furthermore, the results showed that functionally graded and step-wise multilayer coatings were deposited successfully from the same initial bath by changing the temperature and pH during deposition. Nanoindentation showed that the hardness of these coatings increased from 670 HV near the substrate to 860 HV near the top surface, gradually for the functionally graded coating and step-wise for the multilayer coating. After heat treatment this trend reversed: hardness was 1400 HV near the substrate and 1090 HV at the top coat.
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. 
The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
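The experiment-wise Type I error rate reported above follows directly from the per-test level: for k independent tests each run at α, it is 1 − (1 − α)^k. A two-function Python sketch, illustrative only and not the authors' analysis code, makes the arithmetic concrete:

```python
def experimentwise_error(k, alpha=0.05):
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

def bonferroni_alpha(k, alpha=0.05):
    """Per-test level that caps the experiment-wise rate at alpha."""
    return alpha / k

print(round(experimentwise_error(15), 2))   # → 0.54
print(bonferroni_alpha(15))                 # 0.05 / 15 ≈ 0.0033
```

Under this independence approximation, a paper running 15 tests at α = 0.05 already carries a 54% chance of at least one spurious finding, comparable in scale to the median experiment-wise error rate of .54 reported above.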
Quality of life in children with infantile hemangioma: a case control study.
Wang, Chuan; Li, Yanan; Xiang, Bo; Xiong, Fei; Li, Kai; Yang, Kaiying; Chen, Siyuan; Ji, Yi
2017-11-16
Infantile hemangioma (IH) is the most common vascular tumor in children. It is controversial whether IHs affect the quality of life (QOL) of patients for whom the IH poses no threat or potential for complication. Thus, we conducted this study to evaluate the QOL of patients with IH and to identify predictors of poor QOL. The PedsQL 4.0 Generic Core Scales and the PedsQL family information form were administered to parents of children with IH and of healthy children, both younger than 2 years old. The quality-of-life instrument for IH (IH-QOL) and the PedsQL 4.0 family impact module were administered to parents of children with IH. We compared the PedsQL 4.0 Generic Core Scales (GCIS) scores of the two groups. Multiple step-wise regression analysis was used to determine factors that influenced QOL in children with IH and their parents. Except for physical symptoms, we found no significant difference in GCIS between the patient group and the healthy group (P = 0.409). The internal reliability of the IH-QOL was excellent, with a Cronbach's alpha coefficient for summary scores of 0.76. Multiple step-wise regression analysis showed that the predictors of poor IH-QOL total scores were hemangioma size, location, and mother's education level. The predictors of poor FIM total scores were hemangioma location and father's education level. The predictors of poor GCIS total scores were child's age, hemangioma location, and father's education level. The findings support the feasibility and reliability of the Chinese version of the IH-QOL for evaluating QOL in children with IH and their parents. Hemangioma size, location, and mother's education level are important factors influencing QOL in children with IH and their parents.
Teren, Andrej; Zachariae, Silke; Beutner, Frank; Ubrich, Romy; Sandri, Marcus; Engel, Christoph; Löffler, Markus; Gielen, Stephan
2016-07-01
Cardiorespiratory fitness is a well-established independent predictor of cardiovascular health. However, the relevance of alternative exercise and non-exercise tests for cardiorespiratory fitness assessment in large cohorts has not been studied in detail. We aimed to evaluate the YMCA-step test and the Veterans Specific Activity Questionnaire (VSAQ) for the estimation of cardiorespiratory fitness in the general population. One hundred and five subjects answered the VSAQ, performed the YMCA-step test and a maximal cardiopulmonary exercise test (CPX) and gave BORG ratings for both exercise tests (BORGSTEP, BORGCPX). Correlations of peak oxygen uptake on a treadmill (VO2_PEAK) with VSAQ, BORGSTEP, one-minute, post-exercise heartbeat count, and peak oxygen uptake during the step test (VO2_STEP) were determined. Moreover, the incremental values of the questionnaire and the step test in addition to other fitness-related parameters were evaluated using block-wise hierarchical regression analysis. Eighty-six subjects completed the step test according to the protocol. For completers, correlations of VO2_PEAK with the age- and gender-adjusted VSAQ, heartbeat count and VO2_STEP were 0.67, 0.63 and 0.49, respectively. However, using hierarchical regression analysis, age, gender and body mass index already explained 68.8% of the variance of VO2_PEAK, while the additional benefit of VSAQ was rather low (3.4%). The inclusion of BORGSTEP, heartbeat count and VO2_STEP increased R(2) by a further 2.2%, 3.3% and 5.6%, respectively, yielding a total R(2) of 83.3%. Neither VSAQ nor the YMCA-step test contributes sufficiently to the assessment of cardiorespiratory fitness in population-based studies. © The European Society of Cardiology 2015.
Recent trends of groundwater temperatures in Austria
NASA Astrophysics Data System (ADS)
Benz, Susanne A.; Bayer, Peter; Winkler, Gerfried; Blum, Philipp
2018-06-01
Climate change is one of the most pressing challenges modern society faces. Increasing temperatures are observed all over the planet, and the impact of climate change on the hydrological cycle has long been shown. However, we still have insufficient knowledge of the influence of atmospheric warming on shallow groundwater temperatures. While some studies analyse the implications of climate change for selected wells, large-scale studies are so far lacking. Here we focus on the combined impact of atmospheric climate change and local hydrogeological conditions on groundwater temperatures in 227 wells in Austria, which have in part been observed since 1964. A linear analysis finds a temperature change of +0.7 ± 0.8 K in the years from 1994 to 2013. In the same timeframe, surface air temperatures in Austria increased by 0.5 ± 0.3 K, displaying much smaller variability. However, most of the extreme changes in groundwater temperatures can be linked to local hydrogeological conditions. The correlation between groundwater temperatures and nearby surface air temperatures was additionally analysed; it varies greatly, with correlation coefficients from -0.3 in central Linz to 0.8 outside of Graz. In contrast, the correlation of nationwide groundwater temperatures and surface air temperatures is high, with a correlation coefficient of 0.83. All of these findings indicate that while atmospheric climate change can be observed in nationwide groundwater temperatures, individual wells are often dominated primarily by local hydrogeological conditions. In addition to the linear temperature trend, a step-wise model was also applied that identifies climate regime shifts, which were observed globally in the late 1970s, 1980s, and 1990s. Hinting again at the influence of local conditions, at most 22 % of all wells show these climate regime shifts. However, we were able to identify an additional shift in 2007, which was observed in 37 % of all wells. 
Overall, the step-wise representation provides a slightly more accurate picture of observed temperatures than the linear trend.
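A single-breakpoint step model of the kind contrasted with the linear trend above can be fitted by scanning all breakpoints and comparing residual sums of squares. The snippet below is a minimal NumPy sketch on synthetic data, not the study's method; sizes and values are hypothetical.

```python
import numpy as np

def best_step_fit(y):
    """Fit a one-step (two-mean) model, choosing the breakpoint with minimal RSS."""
    best = (np.inf, 0)
    for k in range(1, len(y)):
        r1 = y[:k] - y[:k].mean()
        r2 = y[k:] - y[k:].mean()
        best = min(best, (float(r1 @ r1 + r2 @ r2), k))
    sse, k = best
    return k, sse

def linear_sse(t, y):
    """RSS of an ordinary linear trend, for comparison."""
    a, b = np.polyfit(t, y, 1)
    r = y - (a * t + b)
    return float(r @ r)

# Synthetic series: a regime shift of +0.7 K at index 18.
t = np.arange(30, dtype=float)
y = np.where(t < 18, 10.2, 10.9)
k, sse_step = best_step_fit(y)
print(k, sse_step <= linear_sse(t, y))      # → 18 True
```

Model selection between the step and linear representations (for example by an information criterion penalizing the extra parameter) is omitted here for brevity.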
ERIC Educational Resources Information Center
Pace, Diana; Witucki, Laurie; Blumreich, Kathleen
2008-01-01
This paper describes the rationale and the step by step process for setting up a WISE (Women in Science and Engineering) learning community at one institution. Background information on challenges for women in science and engineering and the benefits of a learning community for female students in these major areas are described. Authors discuss…
BrightStat.com: free statistics online.
Stricker, Daniel
2008-10-01
Powerful software for statistical analysis is expensive. Here I present BrightStat, a statistical software package running on the Internet which is free of charge. BrightStat's goals and its main capabilities and functionalities are outlined. Three different sample runs, a Friedman test, a chi-square test, and a step-wise multiple regression, are presented. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in statistical software, and VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education, and BrightStat is an alternative to commercial products.
Correlates of cognitive function scores in elderly outpatients.
Mangione, C M; Seddon, J M; Cook, E F; Krug, J H; Sahagian, C R; Campion, E W; Glynn, R J
1993-05-01
To determine medical, ophthalmologic, and demographic predictors of cognitive function scores as measured by the Telephone Interview for Cognitive Status (TICS), an adaptation of the Folstein Mini-Mental Status Exam. A secondary objective was to perform an item-by-item analysis of the TICS scores to determine which items correlated most highly with the overall scores. Cross-sectional cohort study. The Glaucoma Consultation Service of the Massachusetts Eye and Ear Infirmary. 472 of 565 consecutive patients age 65 and older who were seen at the Glaucoma Consultation Service between November 1, 1987 and October 31, 1988. Each subject had a standard visual examination and review of medical history at entry, followed by a telephone interview that collected information on demographic characteristics, cognitive status, health status, accidents, falls, symptoms of depression, and alcohol intake. A multivariate linear regression model of correlates of TICS score found the strongest correlates to be education, age, occupation, and the presence of depressive symptoms. The only significant ocular condition that correlated with lower TICS score was the presence of surgical aphakia (model R2 = .46). Forty-six percent (216/472) of patients fell below the established definition of normal on the mental status scale. In a logistic regression analysis, the strongest correlates of an abnormal cognitive function score were age, diabetes, educational status, and occupational status. An item analysis using step-wise linear regression showed that 85 percent of the variance in the TICS score was explained by the ability to perform serial sevens and to repeat 10 items immediately after hearing them. Educational status correlated most highly with both of these items (Kendall Tau R = .43 and Kendall Tau R = .30, respectively). Education, occupation, depression, and age were the strongest correlates of the score on this new screening test for assessing cognitive status. 
These factors were stronger correlates of the TICS score than chronic medical conditions, visual loss, or medications. The Telephone Interview for Cognitive Status is a useful instrument, but it may overestimate the prevalence of dementia in studies with a high prevalence of persons with less than a high school education.
NASA Astrophysics Data System (ADS)
Deng, J.; Zhou, L.; Dong, Y.; Sanford, R. A.; Shechtman, L. A.; Alcalde, R.; Werth, C. J.; Fouke, B. W.
2017-12-01
Microorganisms in nature have evolved in response to a variety of environmental stresses, including gradients in pH, flow and chemistry. While environmental stresses are generally considered to be the driving force of adaptive evolution, the impact and extent of any specific stress needed to drive such changes has not been well characterized. In this study, a microfluidic diffusion chamber (MDC) and a batch culturing system were used to systematically study the effects of continuous versus step-wise stress increments on adaptation of E. coli to the antibiotic ciprofloxacin. In the MDC, a diffusion gradient of ciprofloxacin was established across a microfluidic well array to microscopically observe changes in Escherichia coli strain 307 replication and migration patterns that would indicate emergence of resistance due to genetic mutations. Cells recovered from the MDC had resistance of only 50 times the original minimum inhibitory concentration (MICoriginal) of ciprofloxacin, although minimum exposure concentrations were over 80 × MICoriginal by the end of the experiment. In complementary batch experiments, E. coli 307 were exposed to step-wise daily increases of ciprofloxacin at rates equivalent to 0.1, 0.2, 0.4 or 0.8 × MICoriginal per day. Over a period of 18 days, E. coli cells were able to acquire resistance of up to 225 × MICoriginal, with exposure to ciprofloxacin concentrations of only up to 14.9 × MICoriginal. The different levels of acquired resistance in the continuous MDC versus step-wise batch increment experiments suggest that the intrinsic rate of E. coli adaptation was exceeded in the MDC, while the step-wise experiments favored adaptation up to the highest ciprofloxacin concentrations applied. Genomic analyses of E. coli DNA extracted from the microfluidic cell and batch cultures indicated four single nucleotide polymorphism (SNP) mutations at amino acids 82, 83 and 87 in the gyrA gene.
The progression of adaptation under the step-wise increments of ciprofloxacin indicates that the Ser83-Leu mutation gradually becomes dominant over other gyrA mutations as antibiotic resistance increases. Co-existence of the Ser83-Leu and Asp87-Gly mutations appears to provide the greatest level of resistance (i.e., 85 × to 225 × MICoriginal), and emerged only after the whole community had acquired the Ser83-Leu mutation.
Testing AGN unification via inference from large catalogs
NASA Astrophysics Data System (ADS)
Nikutta, Robert; Ivezic, Zeljko; Elitzur, Moshe; Nenkova, Maia
2018-01-01
Source orientation and clumpiness of the central dust are the main factors in AGN classification. Type-1 QSOs are easy to observe and large samples are available (e.g. in SDSS), but obscured type-2 AGN are dimmer and redder as our line of sight is more obscured, making it difficult to obtain a complete sample. WISE has found up to a million QSOs. With only 4 bands and a relatively small aperture, the analysis of individual sources is challenging, but the large sample allows inference of bulk properties at a very significant level. CLUMPY (www.clumpy.org) is arguably the most popular database of AGN torus SEDs. We model the ensemble properties of the entire WISE AGN content using regularized linear regression, with orientation-dependent CLUMPY color-color-magnitude (CCM) tracks as basis functions. We can reproduce the observed number counts per CCM bin with percent-level accuracy, and simultaneously infer the probability distributions of all torus parameters, redshifts, additional SED components, and identify type-1/2 AGN populations through their IR properties alone. We increase the statistical power of our AGN unification tests even further by adding other datasets as axes in the regression problem. To this end, we make use of the NOAO Data Lab (datalab.noao.edu), which hosts several high-level large datasets and provides very powerful tools for handling large data, e.g. cross-matched catalogs, fast remote queries, etc.
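As an illustration of the fitting step described above, here is a minimal sketch of regularized non-negative regression with model tracks as basis functions. The data, the Tikhonov regularizer implemented by row augmentation, and all names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from scipy.optimize import nnls

def fit_track_weights(counts, tracks, lam=1e-2):
    """Fit observed per-bin number counts as a non-negative linear
    combination of model tracks (basis functions). Ridge (Tikhonov)
    regularization is implemented by augmenting the design matrix."""
    n_tracks = tracks.shape[1]
    A = np.vstack([tracks, np.sqrt(lam) * np.eye(n_tracks)])
    b = np.concatenate([counts, np.zeros(n_tracks)])
    w, _ = nnls(A, b)
    return w

# toy example: 3 "tracks" evaluated over 50 color-color-magnitude bins
rng = np.random.default_rng(0)
tracks = rng.random((50, 3))
true_w = np.array([2.0, 0.0, 5.0])     # population weights to recover
counts = tracks @ true_w               # noiseless synthetic counts
w = fit_track_weights(counts, tracks, lam=1e-6)
```

With noiseless counts and a tiny regularizer, the non-negative solver recovers the generating weights; real catalog fits would tune `lam` and include noise.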
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Rodkiewicz, C. M.
1975-01-01
Numerical results are obtained for heat transfer, skin friction, and viscous-interaction-induced pressure for a step-wise accelerated flat plate in hypersonic flow. In the unified approach adopted here, results are presented for both weak- and strong-interaction problems without employing any linearization scheme. The numerical method used in this work permits an accurate prediction of wall shear for plate velocity changes of 1% or larger. The results indicate that the transient contribution to the induced pressure is greater for helium than for air.
Reduction from cost-sensitive ordinal ranking to weighted binary classification.
Lin, Hsuan-Tien; Li, Ling
2012-05-01
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
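The three steps of the reduction can be sketched as follows. This is a minimal illustration assuming a logistic-regression base learner and the absolute cost (for which the extended-example weights are uniform); it is not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extend(X, y, K):
    """Step 1: build extended binary examples ((x, k), sign(y > k)) for
    thresholds k = 1..K-1, encoding k as appended one-hot features."""
    rows, labels = [], []
    for xi, yi in zip(X, y):
        for k in range(1, K):
            onehot = np.zeros(K - 1)
            onehot[k - 1] = 1.0
            rows.append(np.concatenate([xi, onehot]))
            labels.append(1 if yi > k else -1)
    return np.array(rows), np.array(labels)

def fit_ranker(X, y, K):
    Xe, ye = extend(X, y, K)
    # step 2: any binary classification algorithm on the extended set
    clf = LogisticRegression(C=1000.0, max_iter=1000).fit(Xe, ye)
    def rank(Xnew):
        # step 3: the ranker counts the thresholds the example exceeds
        out = []
        for xi in Xnew:
            votes = 0
            for k in range(1, K):
                onehot = np.zeros(K - 1)
                onehot[k - 1] = 1.0
                votes += clf.predict([np.concatenate([xi, onehot])])[0] == 1
            out.append(1 + votes)
        return np.array(out)
    return rank

# 1-D toy data with 3 ordered classes separable by thresholds
X = np.array([[0.1], [0.2], [1.1], [1.2], [2.1], [2.2]])
y = np.array([1, 1, 2, 2, 3, 3])
rank = fit_ranker(X, y, K=3)
```

The shared feature weights with per-threshold intercepts make the K-1 binary decisions consistent, so counting positive votes yields a valid rank.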
A Practical Model for Forecasting New Freshman Enrollment during the Application Period.
ERIC Educational Resources Information Center
Paulsen, Michael B.
1989-01-01
A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)
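The model described needs only a simple linear regression of final enrollment on applications received to date; a minimal sketch with hypothetical numbers:

```python
import numpy as np

# hypothetical history: applications received by the end of March, and
# the final fall enrollment, for five past admission cycles
apps_march = np.array([900., 950., 1000., 1100., 1050.])
enrollment = np.array([450., 470., 500., 555., 520.])

# ordinary least-squares fit: enrollment ~ a * applications + b
a, b = np.polyfit(apps_march, enrollment, 1)

# updated forecast for the current cycle, given applications to date
forecast = a * 1080. + b
```

Refitting each month with that month's application count gives the updated monthly forecasts the abstract describes.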
Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.
2018-01-01
The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory and proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of step-wise transposition of the frequency. In a first step, results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
López-Sánchez, C; Sulleiro, E; Bocanegra, C; Romero, S; Codina, G; Sanz, I; Esperalba, J; Serra, J; Pigrau, C; Burgos, J; Almirante, B; Falcó, V
2017-04-01
In this study we attempt to assess the utility of a simplified step-wise diagnostic algorithm to determine the aetiology of encephalitis in daily clinical practice and to describe the main causes in our setting. This was a prospective cohort study of all consecutive cases of encephalitis in adult patients diagnosed between January 2010 and March 2015 at the University Hospital Vall d'Hebron in Barcelona, Spain. The aetiological study was carried out following the proposed step-wise algorithm. The proportion of aetiological diagnoses achieved in each step was analysed. Data from 97 patients with encephalitis were assessed. Following a simplified step-wise algorithm, a definite diagnosis was made in the first step in 53 patients (55 %) and in 12 additional cases (12 %) in the second step. Overall, a definite or probable aetiological diagnosis was achieved in 78 % of the cases. Herpes virus, L. monocytogenes and M. tuberculosis were the leading causative agents demonstrated, whereas less frequent aetiologies were observed, mainly in immunosuppressed patients. The overall related mortality was 13.4 %. According to our experience, the leading and treatable causes of encephalitis can be identified in a first diagnostic step with limited microbiological studies. L. monocytogenes treatment should be considered on arrival in some patients. Additional diagnostic effort should be made in immunosuppressed patients.
Qi, Cong; Gu, Yiyang; Sun, Qing; Gu, Hongliang; Xu, Bo; Gu, Qing; Xiao, Jing; Lian, Yulong
2017-05-01
We assessed the risk of liver injuries following low doses of N,N-dimethylformamide (DMF) below threshold limit values (20 mg/m3) among leather industry workers and comparison groups. A cohort of 429 workers from a leather factory and 466 non-exposed subjects in China were followed for 4 years. Poisson regression and piece-wise linear regression were used to examine the relationship between DMF and liver injury. Workers exposed to a cumulative dose of DMF were significantly more likely than non-exposed workers to develop liver injury. A nonlinear relationship between DMF and liver injury was observed, and a threshold of the cumulative DMF dose for liver injury was 7.30 (mg/m3)·year. The findings indicate the importance of taking action to reduce DMF occupational exposure limits for promoting worker health.
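A piece-wise linear (hinge) regression with an unknown breakpoint can be sketched as below. The synthetic dose-response data and the grid-search estimator are illustrative stand-ins for the study's actual fitting procedure:

```python
import numpy as np

def fit_threshold(dose, outcome, grid):
    """Piece-wise linear (hinge) regression: the outcome is flat below
    an unknown threshold tau and rises linearly above it. The breakpoint
    is found by grid search, refitting OLS on max(0, dose - tau) at each
    candidate tau and keeping the one with the smallest SSE."""
    best = (np.inf, None, None)
    for tau in grid:
        X = np.column_stack([np.ones_like(dose),
                             np.maximum(0.0, dose - tau)])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        sse = np.sum((outcome - X @ beta) ** 2)
        if sse < best[0]:
            best = (sse, tau, beta)
    return best[1], best[2]

# synthetic dose-response with a true breakpoint at 7.3
rng = np.random.default_rng(1)
dose = rng.uniform(0, 15, 300)
outcome = 0.1 + 0.5 * np.maximum(0.0, dose - 7.3) + rng.normal(0, 0.05, 300)
tau_hat, beta = fit_threshold(dose, outcome, grid=np.linspace(1, 14, 131))
```

With a 0.1-wide grid and low noise, the recovered breakpoint lands close to the true value; real epidemiological fits would add confidence intervals for tau.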
NASA Astrophysics Data System (ADS)
Neumann, D. W.; Zagona, E. A.; Rajagopalan, B.
2005-12-01
Warm summer stream temperatures due to low flows and high air temperatures are a critical water quality problem in many western U.S. river basins because they impact threatened fish species' habitat. Releases from storage reservoirs and river diversions are typically driven by human demands such as irrigation, municipal and industrial uses and hydropower production. Historically, fish needs have not been formally incorporated in the operating procedures, which do not supply adequate flows for fish in the warmest, driest periods. One way to address this problem is for local and federal organizations to purchase water rights to be used to increase flows, hence decrease temperatures. A statistical model-predictive technique for efficient and effective use of a limited supply of fish water has been developed and incorporated in a Decision Support System (DSS) that can be used in an operations mode to effectively use water acquired to mitigate warm stream temperatures. The DSS is a rule-based system that uses the empirical, statistical predictive model to predict maximum daily stream temperatures based on flows that meet the non-fish operating criteria, and to compute reservoir releases of allocated fish water when predicted temperatures exceed fish habitat temperature targets with a user specified confidence of the temperature predictions. The empirical model is developed using a step-wise linear regression procedure to select significant predictors, and includes the computation of a prediction confidence interval to quantify the uncertainty of the prediction. The DSS also includes a strategy for managing a limited amount of water throughout the season based on degree-days in which temperatures are allowed to exceed the preferred targets for a limited number of days that can be tolerated by the fish. The DSS is demonstrated by an example application to the Truckee River near Reno, Nevada using historical flows from 1988 through 1994. 
In this case, the statistical model predicts maximum daily Truckee River stream temperatures in June, July, and August using predicted maximum daily air temperature and modeled average daily flow. The empirical relationship was created using a step-wise linear regression selection process using 1993 and 1994 data. The adjusted R2 value for this relationship is 0.91. The model is validated using historic data and demonstrated in a predictive mode with a prediction confidence interval to quantify the uncertainty. Results indicate that the DSS could substantially reduce the number of target temperature violations, i.e., stream temperatures exceeding the target temperature levels detrimental to fish habitat. The results show that large volumes of water are necessary to meet a temperature target with a high degree of certainty and violations may still occur if all of the stored water is depleted. A lower degree of certainty requires less water but there is a higher probability that the temperature targets will be exceeded. Addition of the rules that consider degree-days resulted in a reduction of the number of temperature violations without increasing the amount of water used. This work is described in detail in publications referenced in the URL below.
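The predictor-selection step can be sketched as a forward step-wise procedure. SSE-based entry is a simplified stand-in for the significance criteria used in the DSS, and the synthetic predictors and their names are illustrative:

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Forward step-wise selection: greedily add the predictor that most
    reduces the residual sum of squares of an OLS fit with intercept."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_terms:
        sse = {}
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse[j] = np.sum((y - A @ beta) ** 2)
        best = min(sse, key=sse.get)
        chosen.append(best)
        remaining.remove(best)
    return [names[j] for j in chosen], chosen

rng = np.random.default_rng(2)
n = 200
airT = rng.normal(30, 4, n)   # predicted max daily air temperature
flow = rng.normal(20, 5, n)   # modeled average daily flow
wind = rng.normal(5, 1, n)    # irrelevant candidate predictor
streamT = 5 + 0.6 * airT - 0.3 * flow + rng.normal(0, 0.5, n)
X = np.column_stack([airT, flow, wind])
picked, idx = forward_stepwise(X, streamT, ["airT", "flow", "wind"],
                               max_terms=2)
```

A normal-theory prediction interval for a new day would then follow from the residual variance of the final fit, which is how the DSS quantifies prediction uncertainty.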
Perception of self and significant others by alcoholics and nonalcoholics.
Quereshi, M Y; Soat, D M
1976-01-01
Ratings of self and 15 significant others on four personality factors by 47 alcoholic and 90 nonalcoholic males were analyzed by means of step-wise regression analysis and multivariate analysis of covariance. Alcoholics rated themselves less positively on extraversion and self-assertiveness (lower mean on extraversion and higher on self-assertiveness) and also judged intimate others (father, mother, and spouse) less positively on unhappiness, extraversion, and productive persistence (higher mean on unhappiness and lower means on extraversion and productive persistence). There were no significant differences between the two groups in judging persons as a whole or in the degree of differentiation that was exhibited in rating all 16 persons including self.
Self-regulated learning and achievement by middle-school children.
Sink, C A; Barnett, J E; Hixon, J E
1991-12-01
The relationship of self-regulated learning to the achievement test scores of 62 Grade 6 students was studied. Generally, the metacognitive and affective variables correlated significantly with teachers' grades and standardized test scores in mathematics, reading, and science. Planning and self-assessment significantly predicted the six measures of achievement. Step-wise multiple regression analyses using the metacognitive and affective variables largely indicate that students' and teachers' perceptions of scholastic ability and planning appear to be the most salient factors in predicting academic performance. The locus of control dimension had no utility in predicting classroom grades and performance on standardized measures of achievement. The implications of the findings for teaching and learning are discussed.
Analytical three-point Dixon method: With applications for spiral water-fat imaging.
Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G
2016-02-01
The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions of water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single breathhold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with comparable sharpness to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.
Correlation Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Step-wise transient method - Influence of heat source inertia
NASA Astrophysics Data System (ADS)
Malinarič, Svetozár; Dieška, Peter
2016-07-01
The step-wise transient (SWT) method is an experimental technique for measuring the thermal diffusivity and conductivity of materials. Theoretical models and the experimental apparatus are presented, and the influence of the heat source capacity is investigated using simulation of the experiment. Specimens of low-density polyethylene (LDPE) were measured, yielding a thermal diffusivity of 0.165 mm2/s and a thermal conductivity of 0.351 W/mK with a coefficient of variation of less than 1.4 %. The heat source capacity caused a systematic error in the results of less than 1 %.
Ohno, Yoshiharu; Fujisawa, Yasuko; Koyama, Hisanobu; Kishida, Yuji; Seki, Shinichiro; Sugihara, Naoki; Yoshikawa, Takeshi
2017-01-01
To directly compare the capability of dynamic first-pass contrast-enhanced (CE-) perfusion area-detector CT (ADCT) and PET/CT for early prediction of treatment response, disease progression and overall survival of non-small cell lung carcinoma (NSCLC) patients treated with chemoradiotherapy. Fifty-three consecutive Stage IIIB NSCLC patients who had undergone PET/CT, dynamic first-pass CE-perfusion ADCT, chemoradiotherapy, and follow-up examination were enrolled in this study. They were divided into two groups: 1) complete or partial response (CR+PR) and 2) stable or progressive disease (SD+PD). Pulmonary arterial and systemic arterial perfusions and total perfusion were assessed at targeted lesions with the dual-input maximum slope method, permeability surface and distribution volume with the Patlak plot method, tumor perfusion with the single-input maximum slope method, and SUVmax, and results were averaged to determine final values for each patient. Next, step-wise regression analysis was used to determine which indices were the most useful for predicting therapeutic effect. Finally, overall survival of responders and non-responders assessed by using the indices that had a significant effect on prediction of therapeutic outcome was statistically compared. The step-wise regression test showed that therapeutic effect (r2 = 0.63, p = 0.01) was significantly affected by the following three factors in order of magnitude of impact: systemic arterial perfusion, total perfusion, and SUVmax. Mean overall survival showed a significant difference for total perfusion (p = 0.003) and systemic arterial perfusion (p = 0.04). Dynamic first-pass CE-perfusion ADCT as well as PET/CT are useful for treatment response prediction in NSCLC patients treated with chemoradiotherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
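The stacking idea can be sketched as follows. Plain lasso via `LassoCV` stands in for the group-lasso penalty of the paper, and both the data and the crude single-covariate "imputation" draws are synthetic illustrations:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# sketch: stack M multiply-imputed data sets row-wise, then run one
# penalized regression across the stack so each predictor is kept or
# dropped consistently for all imputations
rng = np.random.default_rng(3)
M, n, p = 5, 150, 6
X0 = rng.normal(size=(n, p))
y0 = 2.0 * X0[:, 0] - 1.5 * X0[:, 2] + rng.normal(0, 0.5, n)

stacked_X, stacked_y = [], []
for m in range(M):
    Xm = X0.copy()
    miss = rng.random(n) < 0.2                 # 20% missing in covariate 1
    Xm[miss, 1] = rng.normal(size=miss.sum())  # crude "imputation" draw
    stacked_X.append(Xm)
    stacked_y.append(y0)
Xs, ys = np.vstack(stacked_X), np.concatenate(stacked_y)

model = LassoCV(cv=5).fit(Xs, ys)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
```

The truly predictive covariates survive the penalty across the whole stack; the paper's group penalty additionally ties together coefficient groups such as dummy-coded factors.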
Disequilibrium dihedral angles in layered intrusions: the microstructural record of fractionation
NASA Astrophysics Data System (ADS)
Holness, Marian; Namur, Olivier; Cawthorn, Grant
2013-04-01
The dihedral angle formed at junctions between two plagioclase grains and a grain of augite is only rarely in textural equilibrium in gabbros from km-scale crustal layered intrusions. The median of a population of these disequilibrium angles, Θcpp, varies systematically within individual layered intrusions, remaining constant over large stretches of stratigraphy with significant increases or decreases associated with the addition or reduction, respectively, of the number of phases on the liquidus of the bulk magma. The step-wise changes in Θcpp are present in the Upper Zone of the Bushveld Complex, the Megacyclic Unit I of the Sept Iles Intrusion, and the Layered Series of the Skaergaard Intrusion. The plagioclase-bearing cumulates of Rum have a bimodal distribution of Θcpp, dependent on whether the cumulus assemblage includes clinopyroxene. The presence of the step-wise changes is independent of the order of arrival of cumulus phases and of the composition of either the cumulus phases or the interstitial liquid inferred to be present in the crystal mush. Step-wise changes in the rate of change in enthalpy with temperature (ΔH) of the cooling and crystallizing magma correspond to the observed variation of Θcpp, with increases of both ΔH and Θcpp associated with the addition of another liquidus phase, and decreases of both associated with the removal of a liquidus phase. The replacement of one phase by another (e.g. olivine ⇔ orthopyroxene) has little effect on ΔH and no discernible effect on Θcpp. An increase of ΔH is manifest by an increase in the fraction of the total enthalpy budget that is the latent heat of crystallization (the fractional latent heat). It also results in an increase in the amount crystallized in each incremental temperature drop (the crystal productivity).
An increased fractional latent heat and crystal productivity result in an increased rate of plagioclase growth compared to that of augite during the final stages of solidification, causing a step-wise increase in Θcpp. Step-wise changes in the geometry of three-grain junctions in fully solidified gabbros thus provide a clear microstructural marker for the progress of fractionation.
Gerety, Gregg; Bebakar, Wan Mohamad Wan; Chaykin, Louis; Ozkaya, Mesut; Macura, Stanislava; Hersløv, Malene Lundgren; Behnke, Thomas
2016-05-01
This 26-week, multicenter, randomized, open-label, parallel-group, treat-to-target trial in adults with type 2 diabetes compared the efficacy and safety of treatment intensification algorithms with twice-daily (BID) insulin degludec/insulin aspart (IDegAsp). Patients randomized 1:1 to IDegAsp BID used either a 'Simple' algorithm (twice-weekly dose adjustments based on a single pre-breakfast and pre-evening-meal self-monitored plasma glucose [SMPG] measurement; IDegAsp[BIDSimple], n = 136) or a 'Step-wise' algorithm (once-weekly dose adjustments based on the lowest of 3 pre-breakfast and 3 pre-evening-meal SMPG values; IDegAsp[BIDStep-wise], n = 136). After 26 weeks, mean change from baseline in glycated hemoglobin (HbA1c) with IDegAsp[BIDSimple] was noninferior to IDegAsp[BIDStep-wise] (-15 mmol/mol versus -14 mmol/mol; 95% confidence interval [CI] upper limit, <4 mmol/mol) (baseline HbA1c: 66.3 mmol/mol IDegAsp[BIDSimple] and 66.6 mmol/mol IDegAsp[BIDStep-wise]). The proportion of patients who achieved HbA1c <7.0% (<53 mmol/mol) at the end of the trial was 66.9% with IDegAsp[BIDSimple] and 62.5% with IDegAsp[BIDStep-wise]. Fasting plasma glucose levels were reduced with each titration algorithm (-1.51 mmol/L with IDegAsp[BIDSimple] versus -1.95 mmol/L with IDegAsp[BIDStep-wise]). Weight gain was 3.8 kg with IDegAsp[BIDSimple] versus 2.6 kg with IDegAsp[BIDStep-wise], and rates of overall confirmed hypoglycemia (5.16 episodes per patient-year of exposure [PYE] versus 8.93 PYE) and nocturnal confirmed hypoglycemia (0.78 PYE versus 1.33 PYE) were significantly lower with IDegAsp[BIDStep-wise] versus IDegAsp[BIDSimple]. There were no significant differences in insulin dose increments between groups. Treatment intensification with IDegAsp[BIDSimple] was noninferior to IDegAsp[BIDStep-wise]. Both titration algorithms were well tolerated; however, the more conservative step-wise algorithm led to less weight gain and fewer hypoglycemic episodes. Clinicaltrials.gov: NCT01680341.
Chen, Gang; Taylor, Paul A.; Shin, Yong-Wook; Reynolds, Richard C.; Cox, Robert W.
2016-01-01
It has been argued that naturalistic conditions in FMRI studies provide a useful paradigm for investigating perception and cognition through a synchronization measure, inter-subject correlation (ISC). However, one analytical stumbling block has been the fact that the ISC values associated with each single subject are not independent, and our previous paper (Chen et al., 2016) used simulations and analyses of real data to show that the methodologies adopted in the literature do not have the proper control for false positives. In the same paper, we proposed nonparametric subject-wise bootstrapping and permutation testing techniques for one and two groups, respectively, which account for the correlation structure, and these greatly outperformed the prior methods in controlling the false positive rate (FPR); that is, subject-wise bootstrapping (SWB) worked relatively well for both cases with one and two groups, and subject-wise permutation (SWP) testing was virtually ideal for group comparisons. Here we seek to explicate and adopt a parametric approach through linear mixed-effects (LME) modeling for studying the ISC values, building on the previous correlation framework, with the benefit that the LME platform offers wider adaptability, more powerful interpretations, and quality control checking capability than nonparametric methods. We describe both theoretical and practical issues involved in the modeling and the manner in which LME with crossed random effects (CRE) modeling is applied. A data-doubling step further allows us to conveniently track the subject index, and achieve easy implementations. We pit the LME approach against the best nonparametric methods, and find that the LME framework achieves proper control for false positives. The new LME methodologies are shown to be both efficient and robust, and they will be added as an additional option and settings in an existing open source program, 3dLME, in AFNI (http://afni.nimh.nih.gov). PMID:27751943
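The subject-wise bootstrap (SWB) can be sketched as below, here applied to leave-one-out ISC values on synthetic data. The details of the authors' procedure may differ; the key point is that resampling is over subjects, not time points, so the correlation structure among ISC values is respected:

```python
import numpy as np

def loo_isc(ts):
    """Leave-one-out inter-subject correlation: correlate each subject's
    time series with the average time series of all other subjects."""
    n = ts.shape[0]
    return np.array([
        np.corrcoef(ts[i], ts[np.arange(n) != i].mean(axis=0))[0, 1]
        for i in range(n)
    ])

def subject_wise_bootstrap(isc, n_boot=2000, seed=0):
    """Resample subjects with replacement and recompute the group-mean
    ISC each time, yielding a percentile confidence interval."""
    rng = np.random.default_rng(seed)
    n = len(isc)
    means = np.array([isc[rng.integers(0, n, n)].mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [2.5, 97.5])

# synthetic data: a shared stimulus-driven signal plus subject noise
rng = np.random.default_rng(4)
shared = rng.normal(size=300)
ts = shared + 1.0 * rng.normal(size=(20, 300))
isc = loo_isc(ts)
lo, hi = subject_wise_bootstrap(isc)
```

A CI excluding zero indicates group-level synchronization; the LME approach of the paper reaches the same question through an explicit crossed-random-effects model.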
Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.
2013-12-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
NASA Astrophysics Data System (ADS)
Kumar, Vandhna; Meyssignac, Benoit; Melet, Angélique; Ganachaud, Alexandre
2017-04-01
Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years is up to 3 times the global average. In this study, we attempt to reconstruct sea levels at selected sites in the region (Suva and Lautoka, Fiji; Noumea, New Caledonia) as a multiple linear regression of atmospheric and oceanic variables. We focus on interannual-to-decadal variability and lower frequencies (including the global mean sea level rise) over the 1979-2014 period. Sea levels are taken from tide gauge records and the ORAS4 reanalysis dataset, and are expressed as a sum of steric and mass changes as a preliminary step. The key development in our methodology is using leading wind stress curl as a proxy for the thermosteric component. This is based on the knowledge that wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. The analysis is primarily based on correlation between local sea level and selected predictors, the dominant one being wind stress curl. In the first step, proxy boxes for wind stress curl are determined via regions of highest correlation. The proportion of sea level explained via linear regression is then removed, leaving a residual. This residual is then correlated with other locally acting potential predictors: halosteric sea level, the zonal and meridional wind stress components, and sea surface temperature. The statistically significant predictors are used in a multiple linear regression function to simulate the observed sea level. The method is able to reproduce between 40 and 80% of the variance in observed sea level. Based on the skill of the model, it has high potential in sea level projection and downscaling studies.
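The correlation-screening plus multiple-regression step can be sketched as below. The monthly series, the correlation cutoff, and the predictor names are illustrative assumptions, not the study's actual data or significance test:

```python
import numpy as np

def screened_regression(y, candidates, names, r_crit=0.3):
    """Keep only predictors whose correlation with the target exceeds a
    cutoff (a simplified stand-in for a significance test), then fit a
    single multiple linear regression on the retained set."""
    keep = [j for j in range(candidates.shape[1])
            if abs(np.corrcoef(y, candidates[:, j])[0, 1]) > r_crit]
    A = np.column_stack([np.ones(len(y)), candidates[:, keep]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    var_explained = 1 - np.var(y - A @ beta) / np.var(y)
    return [names[j] for j in keep], var_explained

rng = np.random.default_rng(5)
n = 432  # e.g. monthly values, 1979-2014
curl = rng.normal(size=n)   # wind stress curl over the proxy box
sst = rng.normal(size=n)    # local sea surface temperature
taux = rng.normal(size=n)   # zonal wind stress (irrelevant here)
sealevel = 0.8 * curl + 0.4 * sst + rng.normal(0, 0.5, n)
kept, r2 = screened_regression(sealevel,
                               np.column_stack([curl, sst, taux]),
                               ["curl", "sst", "taux"])
```

Only the genuinely correlated predictors pass the screen, and the fraction of variance explained falls in the 40-80% range the abstract reports for the real sites.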
NASA Astrophysics Data System (ADS)
Weisz, Elisabeth; Smith, William L.; Smith, Nadia
2013-06-01
The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.
Multivariate detrending of fMRI signal drifts for real-time multiclass pattern classification.
Lee, Dongha; Jang, Changwon; Park, Hae-Jeong
2015-03-01
Signal drift in functional magnetic resonance imaging (fMRI) is an unavoidable artifact that limits classification performance in multi-voxel pattern analysis of fMRI. As conventional methods to reduce signal drift, global demeaning or proportional scaling disregards regional variations of drift, whereas voxel-wise univariate detrending is too sensitive to noisy fluctuations. To overcome these drawbacks, we propose a multivariate real-time detrending method for multiclass classification that involves spatial demeaning at each scan and the recursive detrending of drifts in the classifier outputs driven by a multiclass linear support vector machine. Experiments using binary and multiclass data showed that the linear trend estimation of the classifier output drift for each class (a weighted sum of drifts in the class-specific voxels) was more robust against voxel-wise artifacts that lead to inconsistent spatial patterns and the effect of online processing than voxel-wise detrending. The classification performance of the proposed method was significantly better, especially for multiclass data, than that of voxel-wise linear detrending, global demeaning, and classifier output detrending without demeaning. We concluded that the multivariate approach using classifier output detrending of fMRI signals with spatial demeaning preserves spatial patterns, is less sensitive than conventional methods to sample size, and increases classification performance, which is a useful feature for real-time fMRI classification. Copyright © 2014 Elsevier Inc. All rights reserved.
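A minimal sketch of the two ingredients of the proposed method: spatial demeaning of each scan followed by detrending of the classifier outputs. The "classifier" below is a fixed random weight vector standing in for a trained multiclass linear SVM, and the recursive real-time estimator is simplified to an offline linear fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels = 200, 50
w = rng.normal(size=n_voxels)                     # stand-in for SVM weights
drift = np.linspace(0.0, 2.0, n_scans)[:, None] * rng.normal(size=n_voxels)
data = rng.normal(size=(n_scans, n_voxels)) + drift   # signal + voxel-wise drift

# spatial demeaning: subtract each scan's mean across voxels
demeaned = data - data.mean(axis=1, keepdims=True)

# classifier output time series, then a linear detrend of the outputs
# (the paper does this recursively in real time; here it is a single fit)
scores = demeaned @ w
t = np.arange(n_scans)
trend = np.polyval(np.polyfit(t, scores, 1), t)
detrended = scores - trend

print(abs(detrended.mean()) < 1e-8)
```

Detrending one score series per class is far cheaper than voxel-wise detrending of all `n_voxels` time series, which is part of the robustness argument made above.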
Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena
2013-12-01
There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™ (3M ESPE) and the four impression techniques used were (1) monophase impression technique using medium body impression material; (2) one step double mix impression technique using heavy body and light body impression materials simultaneously; (3) two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material); and (4) matrix impression using a matrix of polyether occlusal registration material, in which the matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of the differences in distances between the master model and the stone models. One way analysis of variance (ANOVA) was used for multiple group comparison, followed by the Bonferroni test for pair-wise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller, except for the dies produced from the one step double mix impression technique.
The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and the second die) between any two groups of stone models obtained from the four impression techniques. Pair-wise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two step double mix impression technique and the matrix impression system. The two step double mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should compensate not only for the cement thickness, but also for the increase or decrease in die dimensions.
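The statistical pipeline above (one-way ANOVA followed by Bonferroni-corrected pair-wise t tests) can be sketched as below. The measurements are invented illustrative values, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical mesiodistal measurements (mm) for one die dimension under the
# four impression techniques, 30 impressions per technique.
rng = np.random.default_rng(3)
groups = [rng.normal(mu, 0.02, 30) for mu in (7.96, 8.02, 7.99, 7.99)]

# one-way ANOVA across the four techniques
f_stat, p_anova = stats.f_oneway(*groups)

# pair-wise t tests with a Bonferroni correction (6 comparisons)
n_pairs = 6
p_pairs = []
for i in range(4):
    for j in range(i + 1, 4):
        p = stats.ttest_ind(groups[i], groups[j]).pvalue
        p_pairs.append(min(1.0, p * n_pairs))

print(p_anova < 0.05)
```

Groups 3 and 4 share a mean by construction, mirroring the finding that the two step double mix and matrix techniques were largely indistinguishable.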
Boosting structured additive quantile regression for longitudinal childhood obesity data.
Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael
2013-07-25
Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements of their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer-intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, at both the population and the individual-specific level, adjusted for further risk factors, and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
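A minimal sketch of component-wise gradient boosting for a quantile: at each iteration, every candidate base-learner (here reduced to simple linear effects of single covariates) is fitted to the negative gradient of the check loss, and only the best-fitting one is updated, shrunk by a step length nu. Data, the number of iterations, and the initialization at the unconditional quantile are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, tau, nu = 500, 5, 0.9, 0.1
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)            # only covariate 0 matters

coef = np.zeros(p)
offset = np.quantile(y, tau)                      # crude init at the tau-quantile
for _ in range(300):
    fit = offset + X @ coef
    # negative gradient of the check (pinball) loss
    grad = np.where(y > fit, tau, tau - 1.0)
    # component-wise step: pick the single covariate whose linear base-learner
    # best fits the negative gradient
    slopes = [np.polyfit(X[:, j], grad, 1)[0] for j in range(p)]
    j = int(np.argmax(np.abs(slopes)))
    coef[j] += nu * slopes[j]

print(np.argmax(np.abs(coef)))
```

Because only one component moves per iteration, uninformative covariates stay near zero, which is the built-in shrinkage/selection property the abstract exploits for individual-specific effects.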
Vossel, Simone; Weiss, Peter H; Eschenbeck, Philipp; Fink, Gereon R
2013-01-01
Right-hemispheric stroke can give rise to manifold neuropsychological deficits, in particular, impairments of spatial perception which are often accompanied by reduced self-awareness of these deficits (anosognosia). To date, the specific contribution of these deficits to a patient's difficulties in daily life activities remains to be elucidated. In 55 patients with right-hemispheric stroke we investigated the predictive value of different neglect-related symptoms, visual extinction and anosognosia for the performance of standardized activities of daily living (ADL). The additional impact of lesion location was examined using voxel-based lesion-symptom mapping. Step-wise linear regression revealed that anosognosia for visuospatial deficits was the most important predictor for performance in standardized ADL. In addition, motor-intentional and perceptual-attentional neglect, extinction and cancellation task performance significantly predicted ADL performance. Lesions comprising the right frontal and cingulate cortex and adjacent white matter explained additional variance in the performance of standardized ADL, in that damage to these areas was related to lower performance than predicted by the regression model only. Our data show a decisive role of anosognosia for visuospatial deficits for impaired ADL and therefore outcome/disability after stroke. The findings further demonstrate that the severity of neglect and extinction also predicts ADL performance. Our results thus strongly suggest that right-hemispheric stroke patients should not only be routinely assessed for neglect and extinction but also for anosognosia to initiate appropriate rehabilitative treatment. The observation that right frontal lesions explain additional variance in ADL most likely reflects that dysfunction of the supervisory system also significantly impacts upon rehabilitation. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tan, Yimin; Lin, Kejian; Zu, Jean W.
2018-05-01
Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper has proposed a generalized analytical model for linear generators. The slotted stator pole-shifting and implementation of Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array have been determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution has been derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with Halbach PM has been constructed to validate the model and further improved using piece-wise springs to trigger full range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool in development and optimization of Halbach PM generator. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.
Community Air Sensor Network (CAIRSENSE) project ...
Advances in air pollution sensor technology have enabled the development of small and low cost systems to measure outdoor air pollution. The deployment of a large number of sensors across a small geographic area would have potential benefits to supplement traditional monitoring networks with additional geographic and temporal measurement resolution, if the data quality were sufficient. To understand the capability of emerging air sensor technology, the Community Air Sensor Network (CAIRSENSE) project deployed low cost, continuous and commercially-available air pollution sensors at a regulatory air monitoring site and as a local sensor network over a surrounding ~2 km area in the Southeastern U.S. Co-location of sensors measuring oxides of nitrogen, ozone, carbon monoxide, sulfur dioxide, and particles revealed highly variable performance, both in terms of comparison to a reference monitor as well as whether multiple identical sensors reproduced the same signal. Multiple ozone, nitrogen dioxide, and carbon monoxide sensors revealed low to very high correlation with a reference monitor, with Pearson sample correlation coefficient (r) ranging from 0.39 to 0.97, -0.25 to 0.76, and -0.40 to 0.82, respectively. The only sulfur dioxide sensor tested revealed no correlation with the reference monitor. For sensors that correlated well with the reference monitor (r > 0.5), step-wise multiple linear regression was performed to determine if ambient temperature, relative humidity (RH), or age of the sensor in sampling days could be used in a correction algorithm to improve data quality.
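The correction idea, regressing the reference-minus-sensor difference on temperature, RH and sensor age and then applying the fitted correction, can be sketched like this with synthetic stand-in data (not CAIRSENSE measurements):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
temp = rng.uniform(10, 35, n)        # deg C
rh = rng.uniform(20, 95, n)          # percent
age = np.linspace(0, 365, n)         # sensor age in sampling days
ref = rng.uniform(10, 60, n)         # reference monitor ozone, ppb
# invented sensor response with temperature, RH and aging biases
sensor = (ref + 0.3 * (temp - 20) - 0.05 * (rh - 50) + 0.01 * age
          + rng.normal(0, 1, n))

# multiple linear regression of the reference-sensor difference on covariates
X = np.column_stack([np.ones(n), temp, rh, age])
beta = np.linalg.lstsq(X, ref - sensor, rcond=None)[0]
corrected = sensor + X @ beta

r_raw = np.corrcoef(ref, sensor)[0, 1]
r_cor = np.corrcoef(ref, corrected)[0, 1]
print(r_cor > r_raw)
```

A step-wise variant would add/drop the covariates one at a time by significance; the single joint fit above is the simplest version of the same correction.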
Environmental Predictors of US County Mortality Patterns on a National Basis.
Chan, Melissa P L; Weinhold, Robert S; Thomas, Reuben; Gohlke, Julia M; Portier, Christopher J
2015-01-01
A growing body of evidence has found that mortality rates are positively correlated with social inequalities, air pollution, elevated ambient temperature, availability of medical care and other factors. This study develops a model to predict the mortality rates for different diseases by county across the US. The model is applied to predict changes in mortality caused by changing environmental factors. A total of 3,110 counties in the US, excluding Alaska and Hawaii, were studied. A subset of 519 counties from the 3,110 counties was chosen by using systematic random sampling and these samples were used to validate the model. Step-wise and linear regression analyses were used to estimate the ability of environmental pollutants, socio-economic factors and other factors to explain variations in county-specific mortality rates for cardiovascular diseases, cancers, chronic obstructive pulmonary disease (COPD), all causes combined and lifespan across five population density groups. The estimated models fit adequately for all mortality outcomes for all population density groups and, adequately predicted risks for the 519 validation counties. This study suggests that, at local county levels, average ozone (0.07 ppm) is the most important environmental predictor of mortality. The analysis also illustrates the complex inter-relationships of multiple factors that influence mortality and lifespan, and suggests the need for a better understanding of the pathways through which these factors, mortality, and lifespan are related at the community level.
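A forward step-wise selection of the kind used above can be sketched as follows. Predictor names and the stopping rule (less than 1% improvement in residual sum of squares) are illustrative assumptions, not the study's specification.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 519                                   # e.g. the validation-county count
names = ["ozone", "pm25", "income", "temperature", "density"]
X = rng.normal(size=(n, 5))
y = 1.5 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)   # 2 real effects

selected = []
rss = float(np.sum((y - y.mean()) ** 2))
while len(selected) < 5:
    best_j, best_rss = None, rss
    for j in range(5):
        if j in selected:
            continue
        cols = np.column_stack([np.ones(n)] + [X[:, k] for k in selected + [j]])
        resid = y - cols @ np.linalg.lstsq(cols, y, rcond=None)[0]
        cand = float(np.sum(resid ** 2))
        if cand < best_rss:
            best_j, best_rss = j, cand
    # stop when the best remaining predictor improves RSS by under 1%
    if best_j is None or (rss - best_rss) / rss < 0.01:
        break
    selected.append(best_j)
    rss = best_rss

print([names[j] for j in selected])
```

Greedy inclusion picks the strongest predictor first, matching the way ozone emerges as the dominant environmental predictor in the abstract.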
Cohen, Lisa Janet; Tanis, Thachell; Bhattacharjee, Reetuparna; Nesci, Christina; Halmi, Winter; Galynker, Igor
2014-01-30
While considerable data support the relationship between childhood trauma and adult personality pathology in general, there is little research investigating the specific relationships between different types of childhood maltreatment and adult personality disorders. The present study tested a model incorporating five a priori hypotheses regarding the association between distinct forms of childhood maltreatment and personality pathology in 231 psychiatric patients using multiple self-report measures (Personality Diagnostic Questionnaire-4th Edition, Child Trauma Questionnaire, Conflict in Tactics Scale Parent-Child Child-Adult, and Multidimensional Neglectful Behavior Scale). Step-wise linear regressions supported three out of five hypotheses, suggesting independent relationships between: physical abuse and antisocial personality disorder traits; emotional abuse and Cluster C personality disorder traits; and maternal neglect and Cluster A personality disorder traits after controlling for co-occurring maltreatment types and personality disorder traits. Results did not support an independent relationship between sexual abuse and borderline personality traits nor between emotional abuse and narcissistic personality disorder traits. Additionally, there were three unexpected findings: physical abuse was independently and positively associated with narcissistic and paranoid traits and negatively associated with Cluster C traits. These findings can help refine our understanding of adult personality pathology and support the future development of clinical tools for survivors of childhood maltreatment. © 2013 Published by Elsevier Ireland Ltd.
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital, because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to be implemented for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak-Signal-to-Noise Ratio (PSNR) performances and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: Previously, linear-mapping-based conventional SR methods, including SI only used one simple yet coarse linear mapping to each patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. 
Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance, with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.
Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2016-05-01
To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM-progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.
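Point-wise linear regression (PLR), used above as a comparison method, fits one regression of sensitivity on time per visual-field location and flags the eye if any location progresses. The progression criterion below (slope worse than -1 dB/yr with p < 0.01) is an illustrative choice, not necessarily the study's exact criterion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
years = np.arange(10) * 0.5                      # 10 semi-annual exams
field = 30 + rng.normal(0, 0.5, size=(54, 10))   # 54 stable locations, dB
field[12] -= 1.5 * years                         # one progressing location

progressed = []
for loc in range(54):
    slope, _, _, p, _ = stats.linregress(years, field[loc])
    if slope < -1.0 and p < 0.01:                # per-location PLR criterion
        progressed.append(loc)

print(progressed)
```

Unlike the pattern-based POP methods above, PLR treats every location independently, which is why permutation variants (PoPLR) are used to control the many parallel tests.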
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend for discovering the intrinsic anatomical difference across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method, using a dynamic graph construction approach, can achieve much higher accuracy and robustness when compared with state-of-the-art pair-wise correspondence detection methods, as well as with a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
Amsterdam, Jay D; Lorenzo-Luaces, Lorenzo; DeRubeis, Robert J
2016-11-01
This study examined the relationship between the number of prior antidepressant treatment trials and step-wise increase in pharmacodynamic tolerance (or progressive loss of effectiveness) in subjects with bipolar II depression. Subjects ≥18 years old with bipolar II depression (n=129) were randomized to double-blind venlafaxine or lithium carbonate monotherapy for 12 weeks. Responders (n=59) received continuation monotherapy for six additional months. After controlling for baseline covariates of prior medications, there was a 25% reduction in the likelihood of response to treatment with each increase in the number of prior antidepressant trials (odds ratio [OR]=0.75, unstandardized coefficient [B]=-0.29, standard error [SE]=0.12; χ²=5.70, P<.02), as well as a 32% reduction in the likelihood of remission with each prior antidepressant trial (OR=0.68, B=-0.39, SE=0.13; χ²=9.71, P=.002). This step-wise increase in pharmacodynamic tolerance occurred in both treatment conditions. Prior selective serotonin reuptake inhibitor (SSRI) therapy was specifically associated with a step-wise increase in tolerance, whereas other prior antidepressants or mood stabilizers were not associated with pharmacodynamic tolerance. Neither the number of prior antidepressants, nor the number of prior SSRIs or mood stabilizers, was associated with an increase in relapse during continuation therapy. The odds of responding or remitting during venlafaxine or lithium monotherapy were reduced by 25% and 32%, respectively, with each increase in the number of prior antidepressant treatment trials. There was no relationship between prior antidepressant exposure and depressive relapse during continuation therapy of bipolar II disorder. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
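The quoted odds ratios follow directly from the logistic regression coefficients via OR = exp(B), and the per-trial reductions are 1 - OR:

```python
import math

# response: B = -0.29  ->  OR ~ 0.75, i.e. a ~25% reduction per prior trial
or_response = math.exp(-0.29)
# remission: B = -0.39  ->  OR ~ 0.68, i.e. a ~32% reduction per prior trial
or_remission = math.exp(-0.39)

print(round(or_response, 2), round(1 - or_response, 2))    # 0.75 0.25
print(round(or_remission, 2), round(1 - or_remission, 2))  # 0.68 0.32
```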
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram to analyze the spatial structure and patterns of organisms. Although an optimal fit cannot be obtained when simulating the variogram over a large range, an interactive (human-computer dialogue) fitting method can be used to optimize the parameters of spherical models. In this paper, this method and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model and a linear function model, and the available nearby samples were used in the ordinary kriging procedure, which provides a best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the various theoretical models were computed, and the corresponding graphs were presented. The results showed that the two-step spherical model gave the best fit, and the one-step spherical model was better than the linear function model.
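Ordinary kriging with a (one-step) spherical variogram, as described above, can be sketched in a few lines. The variogram parameters (nugget, sill, range) and the sample points are illustrative, not values from the paper.

```python
import numpy as np

def spherical(h, nugget=0.1, sill=1.0, rng_=10.0):
    """One-step spherical variogram model (illustrative parameters)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, np.where(h == 0, 0.0, g))

# nearby sample locations and measured values
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [5.0, 5.0]])
vals = np.array([1.0, 2.0, 1.5, 3.0])
target = np.array([1.0, 1.0])

# ordinary kriging system: variogram matrix bordered by the
# unbiasedness constraint (weights must sum to one)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
n = len(pts)
A = np.ones((n + 1, n + 1))
A[:n, :n] = spherical(d)
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = spherical(np.linalg.norm(pts - target, axis=1))

w = np.linalg.solve(A, b)[:n]          # kriging weights
estimate = float(w @ vals)             # best linear unbiased estimate
print(round(float(w.sum()), 6))        # weights sum to 1 by construction
```

The bordered system is what enforces the unbiasedness constraint mentioned in the abstract; swapping in a two-step spherical or linear variogram only changes the `spherical` function.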
NASA Technical Reports Server (NTRS)
Moser, Robert D.; Rogers, Michael M.
1992-01-01
The evolution of three-dimensional temporally evolving plane mixing layers through as many as three pairings was simulated numerically. Initial conditions for all simulations consisted of a few low-wavenumber disturbances, usually derived from linear stability theory, in addition to the mean velocity. Three-dimensional perturbations were used with amplitudes ranging from infinitesimal to large enough to trigger a rapid transition to turbulence. Pairing is found both to inhibit the growth of infinitesimal three-dimensional disturbances and to trigger the transition to turbulence in highly three-dimensional flows. The mechanisms responsible for the growth of three-dimensionality as well as the initial phases of the transition to turbulence are described. The transition to turbulence is accompanied by the formation of thin sheets of spanwise vorticity, which undergo a secondary roll-up. Transition also produces an increase in the degree of scalar mixing, in agreement with experimental observations of the mixing transition. Simulations were also conducted to investigate changes in spanwise length scale that may occur in response to the change in streamwise length scale during a pairing. The linear mechanism for this process was found to be very slow, requiring roughly three pairings to complete a doubling of the spanwise scale. Stronger three-dimensionality can produce more rapid scale changes but is also likely to trigger transition to turbulence. No evidence was found for a change from an organized array of rib vortices at one spanwise scale to a similar array at a larger spanwise scale.
Craig, Marlies H; Sharp, Brian L; Mabaso, Musawenkosi LH; Kleinschmidt, Immo
2007-01-01
Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Variables correlated with higher-ranking members of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country.
Conclusion We have produced a highly plausible and parsimonious model of historical malaria risk for Botswana from point-referenced data from a 1961/2 prevalence survey of malaria infection in 1–14 year old children. After starting with a list of 50 potential variables we ended with three highly plausible predictors, by applying a systematic and repeatable staged variable selection procedure that included a spatial analysis, which has application for other environmentally determined infectious diseases. All this was accomplished using general-purpose statistical software. PMID:17892584
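The bootstrap selection-frequency ranking at the heart of the staged procedure can be sketched as follows. The per-sample "selection" is simplified here to a correlation threshold rather than a full step-wise logistic fit, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 300, 6
X = rng.normal(size=(n, p))                         # candidate predictors
y = 1.0 * X[:, 0] + 0.6 * X[:, 3] + rng.normal(size=n)   # 2 real effects

counts = np.zeros(p)
for _ in range(1000):
    idx = rng.integers(0, n, n)                     # bootstrap resample
    # simplified "selection": predictors correlating with the outcome
    r = np.array([abs(np.corrcoef(X[idx, j], y[idx])[0, 1]) for j in range(p)])
    counts[r > 0.3] += 1

rank = np.argsort(-counts)                          # rank by selection frequency
print(rank[:2])
```

Predictors that survive selection across most resamples rise to the top of the ranking, which is how the 14 candidates were ordered for step-wise inclusion above.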
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structure adopted while estimating the linear fits to the NTAL displacements. 
Finally, we evaluate the effect of restoring the atmospheric velocities, determined through the linear fits of the NTAL displacements, to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained by restoring the NTAL displacements and those of the standard stacked linear reference frames are discussed.
Billard, Hélène; Simon, Laure; Desnots, Emmanuelle; Sochard, Agnès; Boscher, Cécile; Riaublanc, Alain; Alexandre-Gouabau, Marie-Cécile; Boquien, Clair-Yves
2016-08-01
Human milk composition analysis seems essential to adapt human milk fortification for preterm neonates. The Miris human milk analyzer (HMA), based on mid-infrared methodology, is convenient for a single determination of macronutrients. However, HMA measurements are not totally comparable with reference methods (RMs). The primary aim of this study was to compare HMA results with results from biochemical RMs for a large range of protein, fat, and carbohydrate contents and to establish a calibration adjustment. Human milk was fractionated into protein, fat, and skim milk fractions, covering large ranges of protein (0-3 g/100 mL), fat (0-8 g/100 mL), and carbohydrate (5-8 g/100 mL). For each macronutrient, a calibration curve was plotted by linear regression using measurements obtained using HMA and RMs. For fat, 53 measurements were performed, and the linear regression equation was HMA = 0.79RM + 0.28 (R² = 0.92). For true protein (29 measurements), the linear regression equation was HMA = 0.9RM + 0.23 (R² = 0.98). For carbohydrate (15 measurements), the linear regression equation was HMA = 0.59RM + 1.86 (R² = 0.95). A homogenization step with a disruptor coupled to a sonication step was necessary to obtain better accuracy of the measurements. Good repeatability (coefficient of variation < 7%) and reproducibility (coefficient of variation < 17%) were obtained after calibration adjustment. New calibration curves were developed for the Miris HMA, allowing accurate measurements over large ranges of macronutrient content. This is necessary for reliable use of this device in individualizing nutrition for preterm newborns. © The Author(s) 2015.
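To illustrate, each reported calibration curve has the form HMA = a·RM + b, so a calibrated estimate of the reference-method value is RM = (HMA − b) / a. The sketch below uses the slopes and intercepts quoted above; the function name and the example reading are illustrative, not part of the study:

```python
# Hypothetical illustration: invert the reported linear fits (HMA = a*RM + b)
# to turn a raw analyzer reading into a reference-method estimate.

CALIBRATION = {            # slope a, intercept b from the reported fits
    "fat":          (0.79, 0.28),   # HMA = 0.79*RM + 0.28, R^2 = 0.92
    "true_protein": (0.90, 0.23),   # HMA = 0.90*RM + 0.23, R^2 = 0.98
    "carbohydrate": (0.59, 1.86),   # HMA = 0.59*RM + 1.86, R^2 = 0.95
}

def calibrate(macronutrient: str, hma_reading: float) -> float:
    """Estimate the reference-method value (g/100 mL) from an HMA reading."""
    a, b = CALIBRATION[macronutrient]
    return (hma_reading - b) / a

# Example: an HMA fat reading of 3.44 g/100 mL maps back to
# (3.44 - 0.28) / 0.79 = 4.0 g/100 mL on the reference method.
print(round(calibrate("fat", 3.44), 2))
```

This inversion is only as good as the linearity of the fit over the measured range, which is why the study emphasizes covering large macronutrient ranges.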
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and is therefore a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
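The regression-adjusted ABC idea can be sketched in a few lines: simulate parameter/summary pairs, keep the simulations whose summaries fall closest to the observed one, then regress parameters on summaries and shift the accepted draws to the observed summary. The toy model, prior, tolerance, and all names below are assumptions for illustration, not ABCreg's actual interface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the summary statistic s is a noisy observation of parameter theta.
s_obs = 2.0  # observed summary statistic

# 1. Simulate (theta, s) pairs from the prior and the model.
theta = rng.uniform(0.0, 5.0, size=100_000)
s = theta + rng.normal(0.0, 0.5, size=theta.size)

# 2. Rejection step: keep the simulations whose summaries are closest to s_obs.
dist = np.abs(s - s_obs)
keep = dist <= np.quantile(dist, 0.01)   # 1% tolerance
theta_acc, s_acc = theta[keep], s[keep]

# 3. Regression adjustment: fit theta ~ s on the accepted draws, then shift
#    each accepted parameter to the observed summary.
slope, intercept = np.polyfit(s_acc, theta_acc, 1)
theta_adj = theta_acc + slope * (s_obs - s_acc)

print(theta_adj.mean())  # approximate posterior mean, close to s_obs here
```

In this toy setting the adjusted sample concentrates near the true posterior mean; ABCreg applies the same logic with user-supplied simulations and transformation options.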
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
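The two-step model-selection logic described above (fit a simple turbidity-only model first, then adopt the turbidity-streamflow multiple regression only if it improves the fit) can be sketched as follows. The synthetic data are illustrative, and in-sample RMSE stands in for the MSPE criterion used in practice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration samples: suspended-sediment concentration (SSC)
# driven by turbidity plus a streamflow effect (illustration only).
n = 60
turbidity = rng.uniform(10, 500, n)
streamflow = rng.uniform(1, 100, n)
ssc = 2.0 * turbidity + 0.8 * streamflow + rng.normal(0, 15, n)

def fit_rmse(X, y):
    """Ordinary least squares fit with intercept; return coefficients and RMSE."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return coef, float(np.sqrt(np.mean(resid ** 2)))

# Step 1: simple linear regression on turbidity alone.
_, rmse_simple = fit_rmse(turbidity[:, None], ssc)

# Step 2: multiple regression adding streamflow; adopt it only if it
# clearly improves on the simple model.
_, rmse_multi = fit_rmse(np.column_stack([turbidity, streamflow]), ssc)

print(rmse_simple, rmse_multi)  # the streamflow term reduces the error here
```

A real application would also test the statistical significance of the streamflow coefficient before adopting the multiple-regression model, as the abstract notes.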
Satellite remote sensing of fine particulate air pollutants over Indian mega cities
NASA Astrophysics Data System (ADS)
Sreekanth, V.; Mahesh, B.; Niranjan, K.
2017-11-01
Against the backdrop of the need for high spatio-temporal resolution data on PM2.5 mass concentrations for health and epidemiological studies over India, empirical relations between Aerosol Optical Depth (AOD) and PM2.5 mass concentrations are established over five Indian mega cities. These relations can then be used to predict surface PM2.5 mass concentrations from high-resolution columnar AOD datasets. The current study utilizes multi-city public-domain PM2.5 data (from the US Consulate and Embassy's air monitoring program) and MODIS AOD, spanning almost four years. PM2.5 is found to be positively correlated with AOD. Station-wise linear regression analysis has shown spatially varying regression coefficients. A similar analysis repeated after eliminating data from the elevated-aerosol-prone seasons improved the correlation coefficient. The impact of day-to-day variability in local meteorological conditions on the AOD-PM2.5 relationship has been explored by performing a multiple regression analysis. A cross-validation approach for the multiple regression analysis, with three years of data as the training dataset and one year of data as the validation dataset, yielded an R value of ∼0.63. The study concludes by discussing factors which could further improve the relationship.
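A minimal sketch of the cross-validated multiple regression described above, with synthetic records standing in for the MODIS AOD, meteorology, and PM2.5 data (the coefficients, predictors, and split are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily records: PM2.5 driven by AOD plus meteorology
# (relative humidity RH and temperature T); illustration only.
n = 4 * 365
aod = rng.uniform(0.1, 1.5, n)
rh = rng.uniform(20, 90, n)
temp = rng.uniform(10, 40, n)
pm25 = 60 * aod - 0.3 * rh + 0.5 * temp + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), aod, rh, temp])

# Cross-validation in the spirit of the study: fit on the first three
# "years", validate on the fourth.
split = 3 * 365
coef, *_ = np.linalg.lstsq(X[:split], pm25[:split], rcond=None)
pred = X[split:] @ coef

# Correlation between predicted and observed PM2.5 on the held-out year.
r = np.corrcoef(pred, pm25[split:])[0, 1]
print(round(r, 2))
```

Adding meteorological covariates improves the fit here for the same reason it does in the study: AOD is a columnar quantity, and surface PM2.5 depends on conditions that modulate the column-to-surface relationship.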
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
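Under the simplifying assumption of independent endpoints with a common marginal power, the r-power and a corresponding sample size can be computed directly from the binomial distribution. This toy normal-means sketch is an assumption-laden stand-in, not the paper's general formulas or the rPowerSampleSize implementation:

```python
from math import comb, erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def per_test_power(n: int, delta: float) -> float:
    """Power of a one-sided z-test at alpha = 0.05 with effect size delta
    and n observations (toy normal-means setting)."""
    z_alpha = 1.6449  # upper 5% quantile of the standard normal
    return norm_cdf(sqrt(n) * delta - z_alpha)

def r_power(m: int, r: int, p: float) -> float:
    """Probability of rejecting at least r of m false null hypotheses,
    assuming independent endpoints with common marginal power p."""
    return sum(comb(m, k) * p ** k * (1 - p) ** (m - k) for k in range(r, m + 1))

def sample_size(m: int, r: int, delta: float, target: float = 0.80) -> int:
    """Smallest n at which the r-power reaches the target."""
    n = 1
    while r_power(m, r, per_test_power(n, delta)) < target:
        n += 1
    return n

# Trial with m = 3 endpoints, success declared if at least r = 2 reject.
print(sample_size(m=3, r=2, delta=0.3))
```

The actual power formulas in the paper handle correlated test statistics and step-wise procedures, which is where the computational complexity discussed in the abstract arises.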
Johnson, Henry C.; Rosevear, G. Craig
1977-01-01
This study explored the relationship between traditional admissions criteria, performance in the first semester of medical school, and performance on the National Board of Medical Examiners' (NBME) Examination, Part 1 for minority medical students, non-minority medical students, and the two groups combined. Correlational analysis and step-wise multiple regression procedures were used as the analysis techniques. A different pattern of admissions variables related to National Board Part 1 performance for the two groups. The General Information section of the Medical College Admission Test (MCAT) contributed the most variance for the minority student group. MCAT-Science contributed the most variance for the non-minority student group. MCATs accounted for a substantial portion of the variance on the National Board examination. PMID:904005
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented.
The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection. Copyright © 2011 Elsevier B.V. All rights reserved.
Using Our National Forests Wisely.
ERIC Educational Resources Information Center
Feuchter, Roy
1987-01-01
Lists nine steps camps can follow to ensure successful use of national forests. Steps are identifying national forest resources; matching expectations with the right setting; using recreation opportunity guides; planning for safety; practicing forest etiquette; practicing fire prevention; knowing the forest environment; participating in volunteer…
NASA Astrophysics Data System (ADS)
Deng, Lu; Garrett, W. R.; Payne, M. G.; Moore, M. A.
1997-05-01
We show that multiphoton destructive interference leading to gain suppression can be produced even when two different step-wise stimulated emissions, such as stimulated Raman and hyper-Raman emissions, are included in the interference loop.
Impact of HealthWise South Africa on polydrug use and high-risk sexual behavior.
Tibbits, Melissa K; Smith, Edward A; Caldwell, Linda L; Flisher, Alan J
2011-08-01
This study was designed to evaluate the efficacy of the HealthWise South Africa HIV and substance abuse prevention program at impacting adolescents' polydrug use and sexual risk behaviors. HealthWise is a school-based intervention designed to promote social-emotional skills, increase knowledge and refusal skills relevant to substance use and sexual behaviors, and encourage healthy free time activities. Four intervention schools in one township near Cape Town, South Africa were matched to five comparison schools (N = 4040). The sample included equal numbers of male and female participants (Mean age = 14.0). Multiple regression was used to assess the impact of HealthWise on the outcomes of interest. Findings suggest that among virgins at baseline (beginning of eighth grade) who had sex by Wave 5 (beginning of 10th grade), HealthWise youth were less likely than comparison youth to engage in two or more risk behaviors at last sex. Additionally, HealthWise was effective at slowing the onset of frequent polydrug use among non-users at baseline and slowing the increase in this outcome among all participants. Program effects were not found for lifetime sexual activity, condomless sex refusal and past-month polydrug use. These findings suggest that HealthWise is a promising approach to HIV and substance abuse prevention.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
PMID:25598560
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatial-spectral correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance-to-Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain G(i,j) and Dark Signal Non-Uniformity (DSNU) Z(i,j) are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component, and an effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to seven times that of the raw image; the larger the image DC value within its dynamic range, the better the enhancement.
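The pixel-wise linear radiance-to-DC model and its inversion can be sketched with a two-point flat-field calibration: estimate each pixel's gain and dark offset from two known exposures, then invert the model on a new frame. The sensor parameters below are synthetic, and the procedure is a simplified stand-in for the FPN component decomposition described above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pixel-wise linear sensor model: DC[i,j] = G[i,j] * L + Z[i,j],
# where G is the per-pixel conversion gain and Z the dark-signal offset (DSNU).
shape = (64, 64)
G = 1.0 + 0.05 * rng.standard_normal(shape)   # gain non-uniformity (FPN)
Z = 10.0 + 2.0 * rng.standard_normal(shape)   # dark-signal non-uniformity

def capture(radiance: float) -> np.ndarray:
    """Simulate a frame at a spatially uniform incident radiance."""
    return G * radiance + Z

# Calibrate per-pixel gain and offset from two flat-field exposures.
L1, L2 = 100.0, 200.0
dc1, dc2 = capture(L1), capture(L2)
G_est = (dc2 - dc1) / (L2 - L1)
Z_est = dc1 - G_est * L1

# Correct a new frame by inverting the linear model pixel-wise.
raw = capture(150.0)
corrected = (raw - Z_est) / G_est

print(raw.std(), corrected.std())  # spatial non-uniformity is removed
```

With noiseless flat fields the inversion is exact; real calibration averages many frames so that temporal noise does not contaminate the FPN estimates.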
Korczowski, L; Congedo, M; Jutten, C
2015-08-01
The classification of electroencephalographic (EEG) data recorded from multiple users simultaneously is an important challenge in the field of Brain-Computer Interfaces (BCI). In this paper we compare different approaches for the classification of single-trial Event-Related Potentials (ERP) on two subjects playing a collaborative BCI game. The minimum distance to mean (MDM) classifier in a Riemannian framework is extended to use the diversity of the inter-subject spatio-temporal statistics (MDM-hyper) or to merge multiple classifiers (MDM-multi). We show that both these classifiers significantly outperform the mean performance of the two users and analogous classifiers based on step-wise linear discriminant analysis. More importantly, the MDM-multi outperforms the best player within the pair.
Faizullah, Faiz
2016-01-01
The aim of the current paper is to present path-wise and moment estimates for solutions to stochastic functional differential equations (SFDEs) with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations, Hölder's inequality, Bihari's inequality, Gronwall's inequality and the Burkholder-Davis-Gundy inequalities are used to develop the above-mentioned theory. In addition, the path-wise asymptotic estimates and the continuity of the pth moment for solutions to SFDEs in the G-framework under the non-linear growth condition are shown.
Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.
Zana, F; Klein, J C
2001-01-01
This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow, and the tree-like geometry of vessels makes them a usable feature for registration between images that can be of a different nature. In order to detect vessel-like patterns, segmentation is performed with respect to a precise model: we define a vessel as a bright, piece-wise connected, locally linear pattern. Mathematical morphology is very well adapted to this description; however, other patterns also fit such a morphological description. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed: vessels are separated out because they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) enhancement of linear patterns with a Gaussian-like profile; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and accuracy with respect to noise.
Song, Kai; Xue, Yiqun; Wang, Xiaohua; Wan, Yinglang; Deng, Xin; Lin, Jinxing
2017-06-01
Membrane proteins exert functions by forming oligomers or molecular complexes. Currently, step-wise photobleaching has been applied to count the fluorescently labelled subunits in plant cells, for which an accurate and reliable control is required to distinguish individual subunits and define the basal fluorescence. However, the common procedure using immobilized GFP molecules is obviously not applicable for analysis in living plant cells. Using the spatial intensity distribution analysis (SpIDA), we found that the A206K mutation reduced the dimerization of GFP molecules. Further ectopic expression of Myristoyl-GFP A206K driven by the endogenous AtCLC2 promoter allowed imaging of individual molecules at a low expression level. As a result, the percentage of dimers in the transgenic pCLC2::Myristoyl-mGFP A206K line was significantly reduced in comparison to that of the pCLC2::Myristoyl-GFP line, confirming its application in defining the basal fluorescence intensity of GFP. Taken together, our results demonstrated that pCLC2::Myristoyl-mGFP A206K can be used as a standard control for monomer GFP, facilitating the analysis of the step-wise photobleaching of membrane proteins in Arabidopsis thaliana. Copyright © 2017 Elsevier GmbH. All rights reserved.
Step-wise extinctions at the Cretaceous-Tertiary boundary and their climatic implications
NASA Technical Reports Server (NTRS)
Maurrasse, Florentin J-M. R.
1988-01-01
A comparative study of planktonic foraminifera and radiolarian assemblages from the Cretaceous-Tertiary (K-T) boundary section of the Beloc Formation in the southern peninsula of Haiti, and the lowermost Danian sequence of the Micara Formation in southern Cuba, reveals a remarkable pattern of step-wise extinctions. This pattern is consistent in both places despite the widely different lithologies of the two formations. Because of the step-wise extinction and the delayed disappearance of taxa known to be more representative of cooler-water realms, it is inferred that a cooling trend which characterized the close of the Maastrichtian and the onset of the Tertiary had the major adverse effect on the existing biota. Although repetitive lithologic and faunal fluctuations throughout the Maastrichtian sediments found at Deep Sea Drilling Project (DSDP) site 146/149 in the Caribbean Sea indicate variations reminiscent of known climatically induced cycles in the Cenozoic, rapid biotic succession appears to have taken place during a crisis period with a duration greater than 2 million years. Widespread and abundant volcanic activity recorded in the Caribbean area during the crisis period gives further credence to the earlier contention that intense volcanism may have played a major role in exacerbating pre-existing climatic conditions during that time.
Predicting residue-wise contact orders in proteins by support vector regression.
Song, Jiangning; Burrage, Kevin
2006-10-03
The residue-wise contact order (RWCO) describes the sequence separations between the residues of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure that represents the extent of long-range contacts and is considered as a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable important information to reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and give deep insights into protein sequence-structure relationships. We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance, including local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. 
Moreover, by incorporating global features such as molecular weight and amino acid composition, we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, combining the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best prediction accuracy, with a CC of 0.60 and an RMSE of 0.78, at least comparable with the other existing methods. The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the fact that support vector regression is a powerful tool for extracting the protein sequence-structure relationship and for estimating protein structural profiles from amino acid sequences.
Dihedral Angles As A Diagnostic Tool For Interpreting The Cooling History Of Mafic Rocks
NASA Astrophysics Data System (ADS)
Holness, M. B.
2016-12-01
The geometry of three-grain junctions in mafic rocks, particularly those involving two grains of plagioclase, overwhelmingly results from processes occurring during solidification. Sub-solidus textural modification is only significant for fine-grained rocks that have remained hot for a considerable time (e.g. chill zones). The underlying control on the geometry of junctions involving plagioclase is the response of the different plagioclase growth faces to changes in cooling rate. This is demonstrated by the systematic co-variation of plagioclase grain shape and the median value of the pyroxene-plag-plag dihedral angle across (unfractionated) mafic sills. In mafic layered intrusions the median dihedral angle is constant across large stretches of stratigraphy, changing in a step-wise manner as the number of liquidus phases changes in the bulk magma. In the Skaergaard layered intrusion, the shape of cumulus plagioclase grains changes smoothly through the stratigraphy, consistent with continuously decreasing cooling rates in a well-mixed chamber: there is no correlation between overall plagioclase grain shape and dihedral angle. However, three-grain junctions are formed during the last stages of crystallization and therefore record events at the base of the crystal mushy layer. While the overall shape of plagioclase grains is dominated by growth at the magma-mush interface or in the bulk magma, it is the post-accumulation overgrowth that creates the dihedral angle: the shape of this overgrowth changes in a step-wise fashion, matching the step-wise variation in dihedral angle. Dihedral angles in layered intrusions can be used to place constraints on the thickness of the mushy layer, using the stratigraphic offset between the step-wise change in dihedral angle and the first appearance/disappearance of the associated liquidus phase. Dihedral angles also have the potential to constrain intrusion size for fragments of cumulate rocks entrained in volcanic ejecta.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches stem from applying the principle of minimizing mean squared distance, which builds on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0
Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data.
Gauer, Jacqueline L; Wolff, Josephine M; Jackson, J Brooks
2016-01-01
The purpose of this study was to determine the associations and predictive values of Medical College Admission Test (MCAT) component and composite scores prior to 2015 with U.S. Medical Licensure Exam (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores, with a focus on whether students scoring low on the MCAT were particularly likely to continue to score low on the USMLE exams. Multiple linear regression, correlation, and chi-square analyses were performed to determine the relationship between MCAT component and composite scores and USMLE Step 1 and Step 2 CK scores from five graduating classes (2011-2015) at the University of Minnesota Medical School (N=1,065). The multiple linear regression analyses were both significant (p<0.001). The three MCAT component scores together explained 17.7% of the variance in Step 1 scores (p<0.001) and 12.0% of the variance in Step 2 CK scores (p<0.001). In the chi-square analyses, significant, albeit weak, associations were observed between almost all MCAT component scores and USMLE scores (Cramer's V ranged from 0.05 to 0.24). Each of the MCAT component scores was significantly associated with USMLE Step 1 and Step 2 CK scores, although the effect sizes were small. Being in the top or bottom scoring range of the MCAT exam was predictive of being in the top or bottom scoring range of the USMLE exams, although the strengths of the associations were weak to moderate. These results indicate that MCAT scores are predictive of student performance on the USMLE exams but, given the small effect sizes, should be considered as part of a holistic view of the student.
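The variance-explained figures reported above (17.7% and 12.0%) come from fitting a multiple linear regression and computing R-squared. A minimal numpy sketch of that calculation, using invented synthetic scores rather than the study's MCAT/USMLE data:

```python
import numpy as np

# Fit a multiple linear regression of an outcome on three component scores
# and compute R^2 = 1 - RSS/TSS, the "variance explained" the study reports.
# All numbers here are synthetic placeholders, not the actual exam data.
rng = np.random.default_rng(7)
n = 300
mcat = rng.normal(size=(n, 3))                       # three component scores
step1 = 220 + mcat @ np.array([4.0, 3.0, 2.0]) + rng.normal(0, 12, n)

X = np.column_stack([np.ones(n), mcat])              # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, step1, rcond=None)
resid = step1 - X @ beta
r2 = 1 - resid @ resid / np.sum((step1 - step1.mean()) ** 2)
print(0 < r2 < 1)  # -> True
```

With an intercept in the model, R-squared always falls between 0 and 1; its size reflects how much of the outcome's spread the predictors jointly account for.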
Pointwise influence matrices for functional-response regression.
Reiss, Philip T; Huang, Lei; Wu, Pei-Shien; Chen, Huaihou; Colcombe, Stan
2017-12-01
We extend the notion of an influence or hat matrix to regression with functional responses and scalar predictors. For responses depending linearly on a set of predictors, our definition is shown to reduce to the conventional influence matrix for linear models. The pointwise degrees of freedom, the trace of the pointwise influence matrix, are shown to have an adaptivity property that motivates a two-step bivariate smoother for modeling nonlinear dependence on a single predictor. This procedure adapts to varying complexity of the nonlinear model at different locations along the function, and thereby achieves better performance than competing tensor product smoothers in an analysis of the development of white matter microstructure in the brain. © 2017, The International Biometric Society.
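In the conventional linear-model case the abstract reduces to, the influence (hat) matrix is H = X(X'X)^(-1)X', and its trace gives the degrees of freedom. A small numpy sketch with toy data (not the functional-response setting):

```python
import numpy as np

# Hat (influence) matrix for an ordinary linear model: H = X (X'X)^-1 X'.
# The trace of H equals the model's degrees of freedom, i.e. the number of
# columns of X, mirroring the pointwise degrees of freedom described above.
def hat_matrix(X):
    return X @ np.linalg.inv(X.T @ X) @ X.T

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10), rng.normal(size=(10, 2))])  # intercept + 2 predictors
H = hat_matrix(X)

print(round(np.trace(H), 6))   # -> 3.0 (three fitted parameters)
print(np.allclose(H @ H, H))   # -> True: H is idempotent (a projection)
```

Idempotence is what makes H a projection onto the column space of X; the functional-response extension generalizes exactly this object pointwise along the response curve.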
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step-wise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process ensures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
Le, Yuan; Stein, Ashley; Berry, Colin; Kellman, Peter; Bennett, Eric E.; Taylor, Joni; Lucas, Katherine; Kopace, Rael; Chefd’Hotel, Christophe; Lorenz, Christine H.; Croisille, Pierre; Wen, Han
2010-01-01
The purpose of this study is to develop and evaluate a displacement-encoded pulse sequence for simultaneous perfusion and strain imaging. Displacement-encoded images in 2–3 myocardial slices were repeatedly acquired using a single-shot pulse sequence for 3 to 4 minutes, covering a bolus infusion of Gd. The magnitudes of the images were T1-weighted and provided quantitative measures of perfusion, while the phase maps yielded strain measurements. In an acute coronary occlusion swine protocol (n=9), segmental perfusion measurements were validated against the microsphere reference standard with a linear regression (slope 0.986, R2 = 0.765, Bland-Altman standard deviation = 0.15 ml/min/g). In a group of ST-elevation myocardial infarction (STEMI) patients (n=11), the scan success rate was 76%. Short-term contrast washout rate and perfusion were highly correlated (R2=0.72), and the pixel-wise relationship between circumferential strain and perfusion was better described by a sigmoidal Hill curve than by linear functions. This study demonstrates the feasibility of measuring strain and perfusion from a single set of images. PMID:20544714
Prediction of siRNA potency using sparse logistic regression.
Hu, Wei; Hu, John
2014-06-01
RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as a trigger for the RNAi gene inhibition mechanism, and therefore is a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position-specific nucleotide compositions that achieves the same prediction accuracy as the LASSO-based linear model. The weights in our new model share the same general trend as those in the previous model, but it has only 25 nonzero weights out of a total of 84, a 54% reduction compared to the previous model. Contrary to the LASSO-based linear model, our model suggests that only a few positions influence the efficacy of an siRNA: the 5' and 3' ends and the seed region of the siRNA sequence. We also employed sparse logistic regression to build a linear model using dual-position-specific nucleotide compositions, a task LASSO cannot accomplish well due to its high-dimensional nature. Our results demonstrate the superiority of sparse logistic regression over LASSO as a technique for both feature selection and regression in the context of siRNA design.
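The sparsity behavior described above can be sketched with an L1-penalized logistic regression fitted by proximal gradient descent (ISTA). This is a generic illustration on synthetic features standing in for siRNA position compositions, not the authors' model or data:

```python
import numpy as np

# Sparse (L1-penalised) logistic regression via proximal gradient descent.
# The soft-threshold prox step zeroes out weights of uninformative features,
# which is the mechanism behind the small number of nonzero weights above.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)              # gradient of the log-loss
        w = soft_threshold(w - lr * grad, lr * lam)  # prox step enforces sparsity
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -2.0, 1.5]                      # only 3 informative features
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

w = sparse_logistic(X, y)
print(np.count_nonzero(w))  # far fewer than 20 weights survive
```

The L1 penalty strength `lam` is an assumed tuning value; in practice it would be chosen by cross-validation.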
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
We propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole-field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
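The source of the whole-field speedup can be illustrated in miniature: when every voxel shares the same design matrix, all voxel fits reduce to a single least-squares solve instead of a per-voxel loop. The sketch below uses a generic linear model, not the liver kinetic model itself:

```python
import numpy as np

# Whole-field vs. voxel-wise linear least squares: with a shared design
# matrix A, one lstsq call with a matrix right-hand side fits every voxel
# at once, yielding the same parameters as the conventional per-voxel loop.
rng = np.random.default_rng(2)
n_time, n_vox = 50, 1000
A = np.column_stack([np.ones(n_time), np.linspace(0, 1, n_time)])
P_true = rng.normal(size=(2, n_vox))
Y = A @ P_true + 0.01 * rng.normal(size=(n_time, n_vox))   # all voxel time courses

P_whole = np.linalg.lstsq(A, Y, rcond=None)[0]             # one call, all voxels
P_loop = np.column_stack([np.linalg.lstsq(A, Y[:, v], rcond=None)[0]
                          for v in range(n_vox)])          # conventional loop
print(np.allclose(P_whole, P_loop))  # -> True: identical fits, far less overhead
```

The 160-fold figure in the abstract reflects this kind of batching (plus delay compensation) on real data; the toy numbers here only demonstrate the equivalence of the two computations.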
A regression-kriging model for estimation of rainfall in the Laohahe basin
NASA Astrophysics Data System (ADS)
Wang, Hong; Ren, Li L.; Liu, Gao H.
2009-10-01
This paper presents a multivariate geostatistical algorithm called regression-kriging (RK) for predicting the spatial distribution of rainfall by incorporating five topographic/geographic factors: latitude, longitude, altitude, slope, and aspect. The technique is illustrated using rainfall data collected at 52 rain gauges in the Laohahe basin in northeast China during 1986-2005. Rainfall data from 44 stations were selected for modeling and the remaining 8 stations were used for model validation. To eliminate multicollinearity, the five explanatory factors were first transformed using factor analysis, with three principal components (PCs) extracted. The rainfall data were then fitted using step-wise regression and the residuals were interpolated using simple kriging (SK). The regression coefficients were estimated by generalized least squares (GLS), which takes the spatial heteroskedasticity between rainfall and the PCs into account. Finally, rainfall prediction based on RK was compared with that from ordinary kriging (OK) and ordinary least squares (OLS) multiple regression (MR). Because correlated topographic factors are taken into account, RK improves the efficiency of the predictions. RK achieved a lower relative root mean square error (RMSE) (44.67%) than MR (49.23%) and OK (73.60%) and a lower bias than MR and OK (23.82 versus 30.89 and 32.15 mm) for annual rainfall. It is much more effective for the wet season than for the dry season. RK is suitable for estimating rainfall in areas with no nearby stations and where topography has a major influence on rainfall.
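The regression-kriging recipe (trend regression on covariates, then kriging of the residuals, then recombination) can be sketched in one dimension. Everything below is illustrative: a single made-up covariate, ordinary least squares in place of GLS, and an assumed exponential covariance for the residual kriging:

```python
import numpy as np

# Regression-kriging sketch: (1) regress rainfall on a covariate,
# (2) interpolate the regression residuals with simple kriging under an
# assumed exponential covariance, (3) add trend and kriged residual at the
# target location. Toy 1-D data; sill and range are invented values.
def exp_cov(h, sill=1.0, corr_range=2.0):
    return sill * np.exp(-np.abs(h) / corr_range)

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)                  # station coordinates
elev = np.sin(x)                            # one topographic covariate
rain = 5.0 + 2.0 * elev + rng.normal(0, 0.3, x.size)

# Step 1: least-squares trend on the covariate.
X = np.column_stack([np.ones_like(x), elev])
beta = np.linalg.lstsq(X, rain, rcond=None)[0]
resid = rain - X @ beta

# Step 2: simple kriging of the residuals at a target point x0.
x0 = 5.3
C = exp_cov(x[:, None] - x[None, :]) + 1e-9 * np.eye(x.size)
c0 = exp_cov(x - x0)
weights = np.linalg.solve(C, c0)
resid0 = weights @ resid

# Step 3: combine trend and kriged residual.
pred = np.array([1.0, np.sin(x0)]) @ beta + resid0
print(round(float(pred), 2))
```

The study's GLS step additionally accounts for spatial structure when estimating the trend coefficients; the OLS trend here is the simplest stand-in.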
Linking land cover and water quality in New York City's water supply watersheds.
Mehaffey, M H; Nash, M S; Wade, T G; Ebert, D W; Jones, K B; Rager, A
2005-08-01
The Catskill/Delaware reservoirs supply 90% of New York City's drinking water. The City has implemented a series of watershed protection measures, including land acquisition, aimed at preserving water quality in the Catskill/Delaware watersheds. The objective of this study was to examine how relationships between landscape and surface water measurements change between years. Thirty-two drainage areas delineated from surface water sample points (total nitrogen, total phosphorus, and fecal coliform bacteria concentrations) were used in step-wise regression analyses to test landscape and surface-water quality relationships. Two measurements of land use, percent agriculture and percent urban development, were positively related to water quality and consistently present in all regression models. Together these two land uses explained 25 to 75% of the regression model variation. However, the contribution of agriculture to water quality condition showed a decreasing trend with time as overall agricultural land cover decreased. Results from this study demonstrate that relationships between land cover and surface water concentrations of total nitrogen, total phosphorus, and fecal coliform bacteria counts over a large area can be evaluated using a relatively simple geographic information system method. Land managers may find this method useful for targeting resources in relation to a particular water quality concern, focusing best management efforts, and maximizing benefits to water quality with minimal costs.
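The step-wise regression used above can be sketched as forward selection: greedily add the covariate that most reduces the residual sum of squares, and stop when the improvement is marginal. The data, the third spurious covariate, and the 5% stopping rule are all assumptions for illustration:

```python
import numpy as np

# Forward step-wise selection sketch. 'agri' and 'urban' mimic the two
# land-use predictors the study retained; 'noise' is a spurious covariate
# that the stopping rule should exclude.
def rss(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(6)
n = 60
cov = {"agri": rng.normal(size=n), "urban": rng.normal(size=n),
       "noise": rng.normal(size=n)}
y = 1.0 + 2.0 * cov["agri"] + 1.5 * cov["urban"] + 0.3 * rng.normal(size=n)

selected, X = [], np.ones((n, 1))
while len(selected) < len(cov):
    score, name = min((rss(np.column_stack([X, cov[k]]), y), k)
                      for k in cov if k not in selected)
    if rss(X, y) - score < 0.05 * rss(X, y):   # improvement too small: stop
        break
    selected.append(name)
    X = np.column_stack([X, cov[name]])

print(sorted(selected))  # the informative land-use covariates enter first
```

Real step-wise procedures typically use F-tests or information criteria as the entry/exit rule rather than a fixed percentage threshold.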
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, we propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
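The two-step estimator described above (logistic regression for occurrence, a separate amount model fit on wet days only) can be sketched on synthetic data. The single covariate and the log-linear amount model are assumptions standing in for the station predictors of the study:

```python
import numpy as np

# Two-step daily precipitation sketch: step 1 models wet/dry occurrence with
# logistic regression (plain gradient ascent); step 2 regresses log-amount on
# wet days only; the combined estimate multiplies P(wet) by expected amount.
rng = np.random.default_rng(4)
z = rng.normal(size=500)                     # covariate (e.g. nearby-gauge signal)
wet = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-2.0 * z))).astype(float)
amount = np.where(wet == 1,
                  np.exp(0.8 * z + rng.normal(0, 0.2, 500)), 0.0)

# Step 1: logistic regression for occurrence.
Xd = np.column_stack([np.ones_like(z), z])
w = np.zeros(2)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (wet - p) / len(z)     # average log-likelihood gradient

# Step 2: linear regression of log-amount, wet days only.
m = wet == 1
b = np.linalg.lstsq(Xd[m], np.log(amount[m]), rcond=None)[0]

# Combined estimate at a new covariate value.
z0 = 1.0
p0 = 1.0 / (1.0 + np.exp(-np.array([1.0, z0]) @ w))
est = p0 * np.exp(np.array([1.0, z0]) @ b)
print(p0 > 0.5)  # -> True: z0 = 1 indicates a likely wet day
```

Separating occurrence from amount is what lets the scheme reproduce both the intermittency and the depth distribution of daily precipitation.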
Student goal orientation in learning inquiry skills with modifiable software advisors
NASA Astrophysics Data System (ADS)
Shimoda, Todd A.; White, Barbara Y.; Frederiksen, John R.
2002-03-01
A computer support environment (SCI-WISE) for learning and doing science inquiry projects was designed. SCI-WISE incorporates software advisors that give general advice about a skill such as hypothesizing. By giving general advice (rather than step-by-step procedures), the system is intended to help students conduct experiments that are more epistemologically authentic. Also, students using SCI-WISE can select the type of advice the advisors give and when they give advice, as well as modify the advisors' knowledge bases. The system is based partly on a theoretical framework of levels of agency and goal orientation. This framework assumes that giving students higher levels of agency facilitates higher-level goal orientations (such as mastery or knowledge building as opposed to task completion) that in turn produce higher levels of competence. A study of sixth grade science students was conducted. Students took a pretest questionnaire that measured their goal orientations for science projects and their inquiry skills. The students worked in pairs on an open-ended inquiry project that requires complex reasoning about human memory. The students used one of two versions of SCI-WISE - one that was modifiable and one that was not. After finishing the project, the students took a posttest questionnaire similar to the pretest, and evaluated the version of the system they used. The main results showed that (a) there was no correlation of goal orientation with grade point average, (b) knowledge-oriented students using the modifiable version tended to rate SCI-WISE more helpful than task-oriented students, and (c) knowledge-oriented pairs using the nonmodifiable version tended to have higher posttest inquiry skills scores than other pair types.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2013-01-01
This report presents a new method for estimating operational loads (bending moments, shear loads, and torques) acting on slender aerospace structures using distributed surface strains (unidirectional strains). The surface strain-sensing stations are to be evenly distributed along each span-wise strain-sensing line. A depth-wise cross section of the structure along each strain-sensing line can then be considered as an imaginary embedded beam. The embedded beam was first evenly divided into multiple small domains with domain junctures matching the strain-sensing stations. The new method comprises two steps. The first step is to determine the structure stiffness (bending or torsion) using surface strains obtained from a simple bending (or torsion) loading case, for which the applied bending moment (or torque) is known. The second step is to use the strain-determined structural stiffness (bending or torsion), and a new set of surface strains induced by any other loading case to calculate the associated operational loads (bending moments, shear loads, or torques). Performance of the new method for estimating operational loads was studied in light of finite-element analyses of several example structures subjected to different loading conditions. The new method for estimating operational loads was found to be fairly accurate, and is very promising for applications to the flight load monitoring of flying vehicles with slender wings.
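The two-step idea rests on the beam bending relation eps = M·c/(EI): a calibration case with a known moment yields the stiffness EI, which then converts operational strains back into moments. A minimal numerical sketch with invented values, not the report's structures:

```python
import numpy as np

# Two-step load estimation from surface strain, per the bending relation
# eps = M * c / (EI). All numbers are illustrative assumptions.
c = 0.05                       # half-depth of the section, m (assumed)
EI_true = 2.0e5                # true bending stiffness, N*m^2 (assumed)

# Step 1: calibration case -- known applied moment, measured strain.
M_cal = 1000.0                                  # known bending moment, N*m
eps_cal = M_cal * c / EI_true                   # "measured" surface strain
EI_est = M_cal * c / eps_cal                    # strain-determined stiffness

# Step 2: operational case -- strain measured, moment unknown.
eps_op = 4.0e-4
M_op = EI_est * eps_op / c
print(round(M_op, 1))  # -> 1600.0 N*m
```

In the report this is done station by station along each strain-sensing line, so the recovered loads vary along the span; the single-station arithmetic above shows only the core inversion.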
Decomposition of P(CH3)3 on Ru(0001): comparison with PH3 and PCl3
NASA Astrophysics Data System (ADS)
Tao, H.-S.; Diebold, U.; Shinn, N. D.; Madey, T. E.
1997-04-01
The decomposition of P(CH3)3 adsorbed on Ru(0001) at 80 K is studied by soft X-ray photoelectron spectroscopy using synchrotron radiation. Using the chemical shifts in the P 2p core levels, we are able to identify various phosphorus-containing surface reaction products and follow their reactions on Ru(0001). It is found that P(CH3)3 undergoes a step-wise demethylation on Ru(0001), P(CH3)3 → P(CH3)2 → P(CH3) → P, which is complete around ~450 K. These results are compared with the decomposition of isostructural PH3 and PCl3 on Ru(0001). The decomposition of PH3 involves a stable intermediate, labeled PHx, and follows the reaction PH3 → PHx → P, which is complete around ~190 K. The conversion of chemisorbed phosphorus to ruthenium phosphide is observed and is complete around ~700 K on Ru(0001). PCl3 also follows a step-wise decomposition reaction, PCl3 → PCl2 → PCl → P, which is complete around ~300 K. The energetics of the adsorption and the step-wise decomposition reactions of PH3, PCl3 and P(CH3)3 are estimated using the bond order conservation Morse potential (BOCMP) method. The energetics calculated using the BOCMP method agree qualitatively with the experimental data.
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
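The three-step FLRM procedure above (transform the curves, fit a linear model per retained coefficient, then examine the fitted coefficients) can be sketched with a Fourier transform on toy two-group data. The crude magnitude screen in step 3 stands in for the adaptive Neyman or thresholding tests the paper actually uses:

```python
import numpy as np

# FLRM sketch: (1) Fourier-transform each subject's repeated-measure curve,
# (2) fit a linear model per retained coefficient with group as predictor,
# (3) inspect where the group effect concentrates. Synthetic curves only.
rng = np.random.default_rng(5)
n, T = 40, 64
t = np.linspace(0, 1, T)
group = np.repeat([0.0, 1.0], n // 2)
curves = (np.sin(2 * np.pi * t)
          + group[:, None] * 0.8 * np.cos(2 * np.pi * t)   # group effect at freq 1
          + 0.1 * rng.normal(size=(n, T)))

# Step 1: frequency domain, keep a few low-frequency coefficients.
coef = np.fft.rfft(curves, axis=1)[:, :5].real

# Step 2: one linear model per Fourier coefficient, predictor = group.
X = np.column_stack([np.ones(n), group])
beta = np.linalg.lstsq(X, coef, rcond=None)[0]   # shape (2, 5); row 1 = group effect

# Step 3: the group effect concentrates in the cos(2*pi*t) coefficient (k = 1).
print(int(np.argmax(np.abs(beta[1]))))  # -> 1
```

Because the Fourier basis concentrates a smooth group difference into a few coefficients, testing in the transformed domain retains power that pointwise testing along the raw curve would dilute.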
Influencing Adolescent Leisure Motivation: Intervention Effects of HealthWise South Africa
Caldwell, Linda L.; Patrick, Megan E.; Smith, Edward A.; Palen, Lori-Ann; Wegner, Lisa
2014-01-01
This study investigates changes in self-reported motivation for leisure due to participation in HealthWise, a high school curriculum aimed at decreasing risk behavior and promoting health behavior. Participants were 2,193 mixed race adolescents (M = 14 years old) from 9 schools (4 intervention, 5 control) near Cape Town, South Africa. Students in the HealthWise school with the greatest involvement in teacher training and implementation fidelity reported increased intrinsic and identified motivation and decreased introjected motivation and amotivation compared to students in control schools. These results point to the potential for intervention programming to influence leisure motivation among adolescents in South Africa and represent a first step toward identifying leisure motivation as a mediator of program effects. PMID:25429164
A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)
2013-01-22
However, updating u^{k+1} via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration which...may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU time consumed. The efficiency of component-wise Gauss-Seidel ...Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p
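The component-wise Gauss-Seidel structure referred to in the excerpt updates each coordinate in place using the freshest values of the others. A generic sketch on a small linear system (not the paper's u-update itself):

```python
import numpy as np

# Component-wise Gauss-Seidel sweep for A x = b: coordinate i is updated
# using already-updated coordinates 0..i-1 and old coordinates i+1.., which
# typically converges faster than the Jacobi (all-at-once) update.
def gauss_seidel(A, b, iters=200):
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # diagonally dominant, so the sweep converges
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # -> True
```

In the l1-minimization setting, the same in-place coordinate sweep is applied inside the proximity-algorithm update rather than to a standalone linear system.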
Pretest variables that improve the predictive value of exercise testing in women.
Lamont, L S; Bobb, J; Blissmer, B; Desai, V
2015-12-01
Graded exercise testing (GXT) is used in coronary artery disease (CAD) prevention and rehabilitation programs. In women, this test has decreased accuracy and predictive value, but few studies have examined the predictors of a verified positive test. The aim of this study was to determine the pretest variables that might enhance the predictive value of the GXT in women. The medical records of 1761 patients referred for GXTs over a 5-year period were screened. Demographic, medical, and exercise test variables were analyzed. The GXTs of 403 women were available for inclusion, and they were stratified into 3 groups: positive responders subsequently shown to have CAD (N=28, verified positive [VP]), positive responders not shown to have CAD (N=84, non-verified positive [NVP]), and negative GXT responders (N=291). Both univariate and multivariate step-wise regression statistics were performed on these data. Pretest variables that differentiated between the VP and NVP groups were: an older age (65.8 vs. 60.2 yr, P<0.05), a greater BMI (30.8 vs. 28.8 kg/m2), diabetes status or an elevated fasting glucose (107.4 vs. 95.2 mg/dL, P<0.05), and the use of some cardiovascular medications. Our subsequent linear regression analysis emphasized that HDL cholesterol and beta-blocker usage were the most predictive of a positive exercise test in this cohort. The American Heart Association recommends GXTs in women with an intermediate pretest probability of CAD, but only two clinical variables are available prior to testing to make this probability decision: age and quality of chest pain. This study showed that other pretest variables, such as BMI, blood chemistry (glucose and lipoprotein levels), and the use of cardiovascular medications, are useful in clinical decision making. These pretest variables improved the predictive value of the GXTs in our sample.
Yoneoka, Daisuke; Henmi, Masayuki
2017-11-30
Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.
A constitutive model for the warp-weft coupled non-linear behavior of knitted biomedical textiles.
Yeoman, Mark S; Reddy, Daya; Bowles, Hellmut C; Bezuidenhout, Deon; Zilla, Peter; Franz, Thomas
2010-11-01
Knitted textiles have been used in medical applications due to their high flexibility and low tendency to fray. Their mechanics have, however, received limited attention. A constitutive model for soft tissue using a strain energy function was extended, by including shear and increasing the number and order of coefficients, to represent the non-linear warp-weft coupled mechanics of coarse textile knits under uniaxial tension. The constitutive relationship was implemented in a commercial finite element package. The model and its implementation were verified and validated for uniaxial tension and simple shear using patch tests and physical test data from uniaxial tensile tests of four very different knitted fabric structures. A genetic algorithm with step-wise increase in resolution and linear reduction in range of the search space was developed for the optimization of the fabric model coefficients. The numerically predicted stress-strain curves exhibited the non-linear stiffening characteristic of fabrics. For three fabrics, the predicted mechanics correlated well with physical data, at least in one principal direction (warp or weft), and moderately in the other direction. The model exhibited limitations in approximating the linear elastic behavior of the fourth fabric. With proposals to address this limitation and to incorporate time-dependent changes in the fabric mechanics associated with tissue ingrowth, the constitutive model offers a tool for the design of tissue-regenerative knit textile implants. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Which factor contributes most to empowering farmers through e-Agriculture in Bangladesh?
Rashid, Sheikh Mohammed Mamur; Islam, Md Rezwan; Quamruzzaman, Md
2016-01-01
This research was designed to investigate the impact of e-Agriculture on farmers in Bangladesh. Empowerment is stratified as economic, family and social, political, knowledge, and psychological empowerment. Data were collected in Bhatbour Block of Dhighi union under Sadar Upazila of Minikganj District. Data were collected in two phases from the same group of respondents (in August 2013 and September 2015). Two-sample t tests and step-wise multiple regression were used for analysis. The results showed that e-Agriculture had a significant impact on the empowerment of farmers in Bangladesh. Additionally, the study concluded that the most significant factor behind the empowerment of farmers was the use of e-Agriculture, which explained almost 84% of the total variation in empowerment. Based on the findings, it is recommended that the government implement e-Agriculture-based projects on a massive scale for the empowerment of farmers.
Maintenance cost study of rotary wing aircraft, phase 2
NASA Technical Reports Server (NTRS)
1979-01-01
The Navy's maintenance and materials management database was used in a study to determine the feasibility of predicting unscheduled maintenance costs for the dynamic systems of military rotary wing aircraft. The major operational and design variables were identified and the direct maintenance man-hours per flight hour were obtained by step-wise multiple regression analysis. Five nonmilitary helicopter users were contacted to supply data on which variables were important factors in civil applications. These users included offshore oil exploration and support, police and fire department rescue and enforcement, logging and heavy equipment movement, and U.S. Army military operations. The equations developed were highly effective in predicting unscheduled direct maintenance man-hours per flying hour for military aircraft, but less effective for commercial or public service helicopters, probably because of the longer mission durations and the much higher utilization of civil users.
Alcohol-related problems: emergency physicians' current practice and attitudes.
O'Rourke, Maria; Richardson, Lynne D; Wilets, Ilene; D'Onofrio, Gail
2006-04-01
To determine whether emergency physicians' (EPs) attitudes affect their support and practice of brief intervention in the Emergency Department (ED), EPs completed an anonymous survey. EPs were asked about their attitudes toward patients with alcohol problems, current ED screening, use of brief intervention, and barriers to use of brief intervention. Chi-square analysis was used and a step-wise regression model was constructed. Respondents reported a high prevalence of patients with alcohol-related problems: 18% in a typical shift. Eighty-one percent said it is important to advise patients to change behavior; half said using a brief intervention is important. Attending physicians had significantly less alcohol education than residents, but were significantly more likely to support the use of brief intervention. Support was not associated with gender, race, census, hours of education, or personal experience. EPs who felt that brief intervention was an integral part of their job were more likely to use it in their daily practice.
Guillaume, Bryan; Wang, Changqing; Poh, Joann; Shen, Mo Jun; Ong, Mei Lyn; Tan, Pei Fang; Karnani, Neerja; Meaney, Michael; Qiu, Anqi
2018-06-01
Statistical inference on neuroimaging data is often conducted using a mass-univariate model, equivalent to fitting a linear model at every voxel with a known set of covariates. Due to the large number of linear models, it is challenging to check if the selection of covariates is appropriate and to modify this selection adequately. The use of standard diagnostics, such as residual plotting, is clearly not practical for neuroimaging data. However, the selection of covariates is crucial for linear regression to ensure valid statistical inference. In particular, the mean model of regression needs to be reasonably well specified. Unfortunately, this issue is often overlooked in the field of neuroimaging. This study aims to adopt the existing Confounder Adjusted Testing and Estimation (CATE) approach and to extend it for use with neuroimaging data. We propose a modification of CATE that can yield valid statistical inferences using Principal Component Analysis (PCA) estimators instead of Maximum Likelihood (ML) estimators. We then propose a non-parametric hypothesis testing procedure that can improve upon parametric testing. Monte Carlo simulations show that the modification of CATE allows for more accurate modelling of neuroimaging data and can in turn yield a better control of False Positive Rate (FPR) and Family-Wise Error Rate (FWER). We demonstrate its application to an Epigenome-Wide Association Study (EWAS) on neonatal brain imaging and umbilical cord DNA methylation data obtained as part of a longitudinal cohort study. Software for this CATE study is freely available at http://www.bioeng.nus.edu.sg/cfa/Imaging_Genetics2.html. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Living environment and mobility of older adults.
Cress, M Elaine; Orini, Stefania; Kinsler, Laura
2011-01-01
Older adults often elect to move into smaller living environments. Smaller living space and the addition of services provided by a retirement community (RC) may make living easier for the individual, but it may also reduce the amount of daily physical activity and ultimately reduce functional ability. With home size as an independent variable, the primary purpose of this study was to evaluate daily physical activity and physical function of community dwellers (CD; n = 31) as compared to residents of an RC (n = 30). In this cross-sectional study design, assessments included: the Continuous Scale Physical Functional Performance-10 test, with a possible range of 0-100, higher scores reflecting better function; the Step Activity Monitor (StepWatch 3.1); a physical activity questionnaire; and the area of the home (in square meters). Groups were compared by one-way ANOVA. A general linear regression model was used to predict the number of steps per day at home. The level of significance was p < 0.05. Of the 61 volunteers (mean age: 79 ± 6.3 years; range: 65-94 years), the RC living space (68 ± 37.7 m²) was 62% smaller than the CD living space (182.8 ± 77.9 m²; p = 0.001). After correcting for age, RC residents took fewer total steps per day excluding exercise (p = 0.03) and had lower function (p = 0.005) than the CD group. On average, RC residents take about 3,000 fewer steps per day and have approximately 60% of the living space of community dwellers. Home size and physical function were the primary predictors of the number of steps taken at home in a general linear regression analysis. Copyright © 2010 S. Karger AG, Basel.
A Multiomics Approach to Identify Genes Associated with Childhood Asthma Risk and Morbidity.
Forno, Erick; Wang, Ting; Yan, Qi; Brehm, John; Acosta-Perez, Edna; Colon-Semidey, Angel; Alvarez, Maria; Boutaoui, Nadia; Cloutier, Michelle M; Alcorn, John F; Canino, Glorisa; Chen, Wei; Celedón, Juan C
2017-10-01
Childhood asthma is a complex disease. In this study, we aim to identify genes associated with childhood asthma through a multiomics "vertical" approach that integrates multiple analytical steps using linear and logistic regression models. In a case-control study of childhood asthma in Puerto Ricans (n = 1,127), we used adjusted linear or logistic regression models to evaluate associations between several analytical steps of omics data, including genome-wide (GW) genotype data, GW methylation, GW expression profiling, cytokine levels, asthma-intermediate phenotypes, and asthma status. At each point, only the top genes/single-nucleotide polymorphisms/probes/cytokines were carried forward for subsequent analysis. In step 1, asthma modified the gene expression-protein level association for 1,645 genes; pathway analysis showed an enrichment of these genes in the cytokine signaling system (n = 269 genes). In steps 2-3, expression levels of 40 genes were associated with intermediate phenotypes (asthma onset age, forced expiratory volume in 1 second, exacerbations, eosinophil counts, and skin test reactivity); of those, methylation of seven genes was also associated with asthma. Of these seven candidate genes, IL5RA was also significant in analytical steps 4-8. We then measured plasma IL-5 receptor α levels, which were associated with asthma age of onset and moderate-severe exacerbations. In addition, in silico database analysis showed that several of our identified IL5RA single-nucleotide polymorphisms are associated with transcription factors related to asthma and atopy. This approach integrates several analytical steps and is able to identify biologically relevant asthma-related genes, such as IL5RA. It differs from other methods that rely on complex statistical models with various assumptions.
Bayesian Group Bridge for Bi-level Variable Selection.
Mallick, Himel; Yi, Nengjun
2017-06-01
A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.
Quantile rank maps: a new tool for understanding individual brain development.
Chen, Huaihou; Kelly, Clare; Castellanos, F Xavier; He, Ye; Zuo, Xi-Nian; Reiss, Philip T
2015-05-01
We propose a novel method for neurodevelopmental brain mapping that displays how an individual's values for a quantity of interest compare with age-specific norms. By estimating smoothly age-varying distributions at a set of brain regions of interest, we derive age-dependent region-wise quantile ranks for a given individual, which can be presented in the form of a brain map. Such quantile rank maps could potentially be used for clinical screening. Bootstrap-based confidence intervals are proposed for the quantile rank estimates. We also propose a recalibrated Kolmogorov-Smirnov test for detecting group differences in the age-varying distribution. This test is shown to be more robust to model misspecification than a linear regression-based test. The proposed methods are applied to brain imaging data from the Nathan Kline Institute Rockland Sample and from the Autism Brain Imaging Data Exchange (ABIDE) sample. Copyright © 2015 Elsevier Inc. All rights reserved.
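The core idea of a quantile rank map, ranking an individual's value against age-specific norms, reduces in its simplest empirical form to a mid-rank against a normative sample (a sketch that ignores the smooth age-varying estimation and bootstrap intervals the paper develops):

```python
import bisect

def quantile_rank(value, norms):
    """Empirical quantile rank of `value` within a normative sample
    (0 = below all norms, 1 = above all norms); ties get the mid-rank."""
    norms = sorted(norms)
    below = bisect.bisect_left(norms, value)          # norms strictly below
    ties = bisect.bisect_right(norms, value) - below  # norms equal to value
    return (below + 0.5 * ties) / len(norms)

# e.g. a region-wise value of 5 against ten age-matched normative values
rank = quantile_rank(5, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```

In the paper this rank would be computed per brain region and displayed as a map; here it is just the scalar building block.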
Jones, Molly; Barclay, Alan W; Brand-Miller, Jennie C; Louie, Jimmy Chun Yu
2016-07-01
This study aimed to examine the dietary glycaemic index (GI) and glycaemic load (GL) of Australian children and adolescents, as well as the major food groups contributing to GL, in the recent 2011-2012 Australian Health Survey. Plausible food intake data from 1876 children and adolescents (51% boys), collected using a multiple-pass 24-h recall, were analysed. The GI of foods was assigned based on a step-wise published method using values from common GI databases. Descriptive statistics were calculated for dietary GI, GL and contribution to GL by food groups, stratified by age group and sex. Linear regression was used to test for trends across age groups for BMI, dietary GI and GL, and intakes of energy, nutrients and food groups. Pearson's χ² test was used to test for differences between age groups for categorical subject characteristic variables. Mean dietary GI and GL of participants were 55·5 (sd 5·3) and 137·4 (sd 50·8), respectively. The main contributors to dietary GL were starchy foods: breads, cereal-based dishes, breakfast cereals, flours, grains and potatoes accounted for 41% of total GL. Sweetened beverages, fruit and vegetable juices/drinks, cake-type desserts and sweet biscuits contributed 15%. No significant difference (at P<0·001) was observed between sexes. In conclusion, Australian children and adolescents appear to consume diets with a lower GI than European children. Exchanging high-GI foods for low-GI alternatives within core and non-core foods may improve diet quality of Australian children and adolescents.
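The GI/GL bookkeeping follows the standard definitions: the glycaemic load of a serving is GI × available carbohydrate (g) / 100, and dietary GI is the carbohydrate-weighted mean GI. A minimal sketch:

```python
def glycaemic_load(gi, carbs_g):
    """Glycaemic load of one serving: GI x available carbohydrate (g) / 100."""
    return gi * carbs_g / 100.0

def dietary_gi(items):
    """Carbohydrate-weighted mean GI over a day's intake;
    items is a list of (gi, available_carbs_g) pairs."""
    total_carbs = sum(carbs for _, carbs in items)
    return sum(gi * carbs for gi, carbs in items) / total_carbs

# e.g. 50 g of carbohydrate at GI 70 plus 50 g at GI 40
gl = glycaemic_load(70, 50)
day_gi = dietary_gi([(70, 50), (40, 50)])
```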
Genetic variation in alpha2-adrenoreceptors and heart rate recovery after exercise
Kohli, Utkarsh; Diedrich, André; Kannankeril, Prince J.; Muszkat, Mordechai; Sofowora, Gbenga G.; Hahn, Maureen K.; English, Brett A.; Blakely, Randy D.; Stein, C. Michael
2015-01-01
Heart rate recovery (HRR) after exercise is an independent predictor of adverse cardiovascular outcomes. HRR is mediated by both parasympathetic reactivation and sympathetic withdrawal and is highly heritable. We examined whether common genetic variants in adrenergic and cholinergic receptors and transporters affect HRR. In our study 126 healthy subjects (66 Caucasians, 56 African Americans) performed an 8 min step-wise bicycle exercise test with continuous computerized ECG recordings. We fitted an exponential curve to the postexercise R-R intervals for each subject to calculate the recovery constant (kr) as primary outcome. Secondary outcome was the root mean square residuals averaged over 1 min (RMS1min), a marker of parasympathetic tone. We used multiple linear regressions to determine the effect of functional candidate genetic variants in autonomic pathways (6 ADRA2A, 1 ADRA2B, 4 ADRA2C, 2 ADRB1, 3 ADRB2, 2 NET, 2 CHT, and 1 GRK5) on the outcomes before and after adjustment for potential confounders. Recovery constant was lower (indicating slower HRR) in ADRA2B 301–303 deletion carriers (n = 54, P = 0.01), explaining 3.6% of the interindividual variability in HRR. ADRA2A Asn251Lys, ADRA2C rs13118771, and ADRB1 Ser49Gly genotypes were associated with RMS1min. Genetic variability in adrenergic receptors may be associated with HRR after exercise. However, most of the interindividual variability in HRR remained unexplained by the variants examined. Noncandidate gene-driven approaches to study genetic contributions to HRR in larger cohorts will be of interest. PMID:26058836
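Fitting an exponential to post-exercise R-R intervals can be sketched by linearization when the recovered plateau is known (a simplification; the authors' exact parameterization of the recovery constant kr may differ):

```python
import math

def fit_recovery_constant(times, rr, rr_plateau):
    """Fit RR(t) = rr_plateau - c * exp(-k * t) by taking logs,
    log(rr_plateau - RR) = log(c) - k * t, and applying ordinary
    least squares; returns the recovery constant k."""
    y = [math.log(rr_plateau - v) for v in rr]
    n = len(times)
    mx = sum(times) / n
    my = sum(y) / n
    slope = (sum((x - mx) * (yi - my) for x, yi in zip(times, y))
             / sum((x - mx) ** 2 for x in times))
    return -slope

# Synthetic recovery: plateau 1.0 s, amplitude 0.4 s, k = 0.5 per minute
t = [float(i) for i in range(10)]
rr = [1.0 - 0.4 * math.exp(-0.5 * ti) for ti in t]
k = fit_recovery_constant(t, rr, 1.0)
```

A larger fitted k corresponds to faster heart rate recovery.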
Deep Full-sky Coadds from Three Years of WISE and NEOWISE Observations
Meisner, A. M.; Lang, D.; Schlegel, D. J.
2017-09-26
Here, we have reprocessed over 100 terabytes of single-exposure Wide-field Infrared Survey Explorer (WISE)/NEOWISE images to create the deepest ever full-sky maps at 3-5 microns. We include all publicly available W1 and W2 imaging - a total of ~8 million exposures in each band - from ~37 months of observations spanning 2010 January to 2015 December. Our coadds preserve the native WISE resolution and typically incorporate ~3× more input frames than those of the AllWISE Atlas stacks. Our coadds are designed to enable deep forced photometry, in particular for the Dark Energy Camera Legacy Survey (DECaLS) and Mayall z-Band Legacy Survey (MzLS), both of which are being used to select targets for the Dark Energy Spectroscopic Instrument. We describe newly introduced processing steps aimed at leveraging added redundancy to remove artifacts, with the intent of facilitating uniform target selection and searches for rare/exotic objects (e.g., high-redshift quasars and distant galaxy clusters). Forced photometry depths achieved with these coadds extend 0.56 (0.46) magnitudes deeper in W1 (W2) than is possible with only pre-hibernation WISE imaging.
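The depth gain from coaddition can be illustrated with the textbook inverse-variance weighted stack (a toy sketch of the principle only, not the unWISE processing itself):

```python
def coadd(frames, variances):
    """Inverse-variance weighted mean of registered single-exposure
    frames; `frames` is a list of equal-length pixel lists, `variances`
    the per-frame noise variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    npix = len(frames[0])
    stacked = [sum(w * f[i] for w, f in zip(weights, frames)) / wsum
               for i in range(npix)]
    coadd_variance = 1.0 / wsum  # noise variance of each stacked pixel
    return stacked, coadd_variance

# Two 2-pixel frames of equal depth
pixels, var = coadd([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])
```

Stacking N equal-depth frames cuts the pixel variance by N, roughly 1.25 log10(N) magnitudes of added depth; for ~3× more frames that is about 0.6 mag, of the same order as the 0.56/0.46 mag quoted above.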
Preliminary evidence for school-based physical activity policy needs in Washington, DC.
Goodman, Emily; Evans, W Douglas; DiPietro, Loretta
2012-01-01
The school setting could be a primary venue for promoting physical activity among inner-city children due to the structured nature of the school day. We examined differences in step counts between structured school days (SSD) and weekend days (WED) among a sample of public school children in Washington, DC. Subjects (N = 29) were third- to sixth-grade students enrolled in government-funded, extended-day enrichment programs. Step counts were measured using a pedometer (Bodytronics) over 2 SSD and 2 WED. Differences in mean step counts between SSD and WED were determined using multivariable linear regression, with adjustments for age, sex, and reported distance between house and school (miles). Recorded step counts were low on both SSD and WED (7735 ± 3540 and 8339 ± 5314 steps/day). Boys tended to record more steps on SSD compared with girls (8080 ± 3141 vs. 7491 ± 3872 steps/day, respectively), whereas girls recorded more steps on WED compared with boys (9292 ± 6381 vs. 7194 ± 3669 steps/day). Parameter estimates from the regression modeling suggest distance from school (P < .01) to be the strongest predictor of daily step counts, independent of day (SSD/WED), sex, and age. Among inner-city school children, a safe walking route to and from school may provide an important opportunity for daily physical activity.
On spectral synthesis on element-wise compact Abelian groups
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2015-08-01
Let G be an arbitrary locally compact Abelian group and let C(G) be the space of all continuous complex-valued functions on G. A closed linear subspace \mathscr{H} \subseteq C(G) is referred to as an invariant subspace if it is invariant with respect to the shifts \tau_y\colon f(x) \mapsto f(xy), y \in G. By definition, an invariant subspace \mathscr{H} \subseteq C(G) admits strict spectral synthesis if \mathscr{H} coincides with the closure in C(G) of the linear span of all characters of G belonging to \mathscr{H}. We say that strict spectral synthesis holds in the space C(G) on G if every invariant subspace \mathscr{H} \subseteq C(G) admits strict spectral synthesis. An element x of a topological group G is said to be compact if x is contained in some compact subgroup of G. A group G is said to be element-wise compact if all elements of G are compact. The main result of the paper is the proof of the fact that strict spectral synthesis holds in C(G) for a locally compact Abelian group G if and only if G is element-wise compact. Bibliography: 14 titles.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
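The preconditioned conjugate gradient iteration at the heart of such a program can be sketched with a Jacobi (diagonal) preconditioner (a dense toy version; the paper's implementation applies the mixed model equations implicitly by iterating on data):

```python
def pcg(A, b, tol=1e-12, max_iter=100):
    """Jacobi-preconditioned conjugate gradient for A x = b,
    with A symmetric positive definite (dense list-of-lists here)."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    minv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner M^-1
    x = [0.0] * n
    r = list(b)                               # residual for x0 = 0
    z = [mi * ri for mi, ri in zip(minv, r)]
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD system: 4x + y = 1, x + 3y = 2  =>  x = 1/11, y = 7/11
sol = pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The "iteration on data" idea replaces the dense `matvec` with a pass over the records, so the coefficient matrix never needs to be stored.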
K/Ar dating of lunar soils. IV - Orange glass from 74220 and agglutinates from 14259 and 14163
NASA Technical Reports Server (NTRS)
Alexander, E. C., Jr.; Coscio, M. R., Jr.; Dragon, J. C.; Saito, K.
1980-01-01
Total fusion Ar-40/Ar-39 analyses of orange glass from lunar soil 74220 combined with the sums of earlier stepwise heating data by other workers have yielded a precise K/Ar isochron with a slope corresponding to an age of 3.66 ± 0.03 Gyr for the orange glass. The result is in marginal agreement with Huneke's (1978) age of 3.60 ± 0.04 Gyr for 74220 glass. The Ar systematics in the agglutinates from 14259 and 14163 are dominated by volume-correlated argon. Step-wise heating analyses yield data which define experimentally reproducible linear arrays in Ar-40/Ar-36 vs. K-40/Ar-36 diagrams. The slopes of these arrays correspond formally to very old ages, but it is not clear that such ages have any physical significance.
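Converting an isochron slope (the radiogenic ⁴⁰Ar*/⁴⁰K ratio) into an age uses the standard branching-decay relation; a sketch with commonly used ⁴⁰K decay constants (the constants are assumptions of this illustration, not values from the paper):

```python
import math

LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant, 1/yr (assumed value)
LAMBDA_EC = 0.581e-10      # electron-capture branch 40K -> 40Ar, 1/yr (assumed)

def age_from_slope(slope):
    """Age from the isochron slope 40Ar*/40K:
    slope = (lam_EC / lam) * (exp(lam * t) - 1)
    =>  t = ln(1 + slope * lam / lam_EC) / lam."""
    return math.log1p(slope * LAMBDA_TOTAL / LAMBDA_EC) / LAMBDA_TOTAL

def slope_from_age(t_yr):
    """Inverse relation, useful as a consistency check."""
    return (LAMBDA_EC / LAMBDA_TOTAL) * math.expm1(LAMBDA_TOTAL * t_yr)

# Round trip at the orange-glass age of 3.66 Gyr
t = age_from_slope(slope_from_age(3.66e9))
```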
Soil stabilization linked to plant diversity and environmental context in coastal wetlands.
Ford, Hilary; Garbutt, Angus; Ladd, Cai; Malarkey, Jonathan; Skov, Martin W
2016-03-01
Plants play a pivotal role in soil stabilization, with above-ground vegetation and roots combining to physically protect soil against erosion. It is possible that diverse plant communities boost root biomass, with knock-on positive effects for soil stability, but these relationships are yet to be disentangled. We hypothesize that soil erosion rates fall with increased plant species richness, and test explicitly how closely root biomass is associated with plant diversity. We tested this hypothesis in salt marsh grasslands, dynamic ecosystems with a key role in flood protection. Using step-wise regression, the influences of biotic (e.g. plant diversity) and abiotic variables on root biomass and soil stability were determined for salt marshes with two contrasting soil types: erosion-resistant clay (Essex, southeast UK) and erosion-prone sand (Morecambe Bay, northwest UK). A total of 132 (30-cm depth) cores of natural marsh were extracted and exposed to lateral erosion by water in a re-circulating flume. Soil erosion rates fell with increased plant species richness (R² = 0.55) when richness was modelled as a single explanatory variable, but richness was more important in erosion-prone (R² = 0.44) than erosion-resistant (R² = 0.18) regions. As plant species richness increased from two to nine species·m⁻², the coefficient of variation in soil erosion rate decreased significantly (R² = 0.92). Plant species richness was a significant predictor of root biomass (R² = 0.22). Step-wise regression showed that five key variables accounted for 80% of variation in soil erosion rate across regions. Clay-silt fraction and soil carbon stock were linked to lower rates, contributing 24% and 31%, respectively, to variation in erosion rate. In regional analysis, abiotic factors declined in importance, with root biomass explaining 25% of variation. Plant diversity explained 12% of variation in the erosion-prone sandy region.
Our study indicates that soil stabilization and root biomass are positively associated with plant diversity. Diversity effects are more pronounced in biogeographical contexts where soils are erosion-prone (sandy, low organic content), suggesting that the pervasive influence of biodiversity on environmental processes also applies to the ecosystem service of erosion protection.
Yang, Chuntao; Gao, Peng; Hou, Fujiang; Yan, Tianhai; Chang, Shenghua; Chen, Xianjiang; Wang, Zhaofeng
2018-04-02
To better utilize native pasture at the high altitude region, three-consecutive-year feeding experiments and a total of seven metabolism trials were conducted to evaluate the impact of three forage stages of maturity on the chemical composition, nutrient digestibility, and energy metabolism of native forage in Tibetan sheep on the Qinghai-Tibetan Plateau (QTP). Forages were harvested from June to July, August to October, and November to December of 2011 to 2013, corresponding to the vegetative, bloom, and senescent stages of the annual forages. Twenty male Tibetan sheep were selected for each study and fed native forage ad libitum. The digestibility of DM, OM, CP, NDF, ADF, DE, DE/GE, and ME/GE were greatest (P < 0.01) from the vegetative stage, intermediate (P < 0.01) from the bloom stage, and least (P < 0.01) from the senescent stage. Nutrient digestibility and energy parameters correlated positively (linear, 0.422 to 0.778; quadratic, 0.568 to 0.815; P < 0.01) with the CP content of forage but correlated negatively with the content of NDF (linear, 0.343 to 0.689; quadratic, 0.444 to 0.777; P ≤ 0.02), ADF (linear, 0.563 to 0.766; quadratic, 0.582 to 0.770; P < 0.01), and ether extract (EE, linear, 0.283 to 0.574; quadratic, 0.366 to 0.718; P ≤ 0.04) of forage. For each predicted variable, the prediction of DMI expressed as grams per kilogram of BW (g/kg BW·d) yielded a greater R² value (0.677 to 0.761 vs. 0.616 to 0.711) compared with the equations of DMI expressed as g/kg metabolic BW by step-wise regression. The results suggest that parameters of forage CP, NDF, and ADF content were most closely related to nutrient digestibility. Contrary to previous studies, in this study, ADF content had a greater linear relationship (0.766 vs. 0.563 to 0.732) with OM digestibility than the other parameters of nutrient digestibility.
The quadratic relationship between forage CP content and CP digestibility indicates that when forage CP content exceeds the peak point (9.7% DM in the present study), increasing forage CP content could decrease CP digestibility when Tibetan sheep were offered native forage alone on the QTP. Additionally, using the forage CP, EE, NDF, and ADF content to predict DMI (g/kg BW·d) yielded the best fit equation for Tibetan sheep living in the northeast portion of the QTP.
Effects of cold and hot temperature on dehydration: a mechanism of cardiovascular burden.
Lim, Youn-Hee; Park, Min-Seon; Kim, Yoonhee; Kim, Ho; Hong, Yun-Chul
2015-08-01
The association between temperature (cold or heat) and cardiovascular mortality has been well documented. However, few studies have investigated the underlying mechanism of the cold or heat effect. The main goal of this study was to examine the effect of temperature on dehydration markers and to explain the pathophysiological disturbances caused by changes of temperature. We investigated the relationship between outdoor temperature and dehydration markers (blood urea nitrogen (BUN)/creatinine ratio, urine specific gravity, plasma tonicity and haematocrit) in 43,549 adults from Seoul, South Korea, during 1995-2008. We used piece-wise linear regression to find the flexion point of apparent temperature and estimate the effects below or above the apparent temperature. Levels of dehydration markers decreased linearly with an increase in the apparent temperature until a point between 22 and 27 °C, which was regarded as the flexion point of apparent temperature, and then increased with apparent temperature. Because the associations between temperature and cardiovascular mortality are known to be U-shaped, our findings suggest that temperature-related changes in hydration status underlie the increased cardiovascular mortality and morbidity during high- or low-temperature conditions.
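Piece-wise linear regression with an estimated flexion point can be sketched as a grid search over candidate breakpoints using a hinge term (a generic approach on synthetic data; the study's implementation may differ):

```python
import numpy as np

def piecewise_fit(x, y, candidates):
    """For each candidate flexion point c, fit
    y = b0 + b1*x + b2*max(x - c, 0) by least squares and keep the c
    with the smallest residual sum of squares."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    best = None
    for c in candidates:
        A = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    _, c, beta = best
    return c, beta  # flexion point and [intercept, slope below, slope change]

# Synthetic dehydration marker falling until 25 deg C, then rising
temp = np.linspace(0.0, 40.0, 81)
marker = 10.0 - 0.2 * temp + 0.5 * np.maximum(temp - 25.0, 0.0)
flexion, coeffs = piecewise_fit(temp, marker, range(10, 36))
```

The slope below the flexion point is `coeffs[1]`; the slope above it is `coeffs[1] + coeffs[2]`, reproducing the U-shape described in the abstract.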
Texas traffic thermostat software tool.
DOT National Transportation Integrated Search
2013-04-01
The traffic thermostat decision tool is built to help guide the user through a logical, step-wise process of examining potential changes to their managed lane/toll facility. : **NOTE: Project Title: Application of the Traffic Thermostat Framework. Ap...
Texas traffic thermostat marketing package.
DOT National Transportation Integrated Search
2013-04-01
The traffic thermostat decision tool is built to help guide the user through a logical, step-wise process of examining potential changes to their managed lane/toll facility. : **NOTE: Project Title: Application of the Traffic Thermostat Framework. Ap...
NASA Astrophysics Data System (ADS)
Borders, Kareen; Mendez, Bryan; Thaller, Michelle; Gorjian, Varoujan; Borders, Kyla; Pitman, Peter; Pereira, Vincent; Sepulveda, Babs; Stark, Ron; Knisely, Cindy; Dandrea, Amy; Winglee, Robert; Plecki, Marge; Goebel, Jeri; Condit, Matt; Kelly, Susan
The Spitzer Space Telescope and the recently launched WISE (Wide-field Infrared Survey Explorer) observe the sky in infrared light. Among the objects WISE will study are asteroids, the coolest and dimmest stars, and the most luminous galaxies. Secondary students can do authentic research using infrared data. For example, students will use WISE data to measure physical properties of asteroids. In order to prepare students and teachers at this level with a high level of rigor and scientific understanding, the WISE and Spitzer Space Telescope education programs provided an immersive teacher professional development workshop in infrared astronomy. The lessons learned from the Spitzer and WISE teacher and student programs can be applied to other programs engaging them in authentic research experiences using data from space-borne observatories such as Herschel and Planck. Recently, WISE Educator Ambassadors and NASA Explorer School teachers developed and led an infrared astronomy workshop at Arecibo Observatory in Puerto Rico. As many common misconceptions involve scale and distance, teachers worked with Moon/Earth scale, solar system scale, and distance and age of objects in the Universe. Teachers built and used basic telescopes, learned about the history of telescopes, explored ground- and satellite-based telescopes, and explored and worked on models of the WISE telescope. An in-depth explanation of the WISE and Spitzer telescopes gave participants background knowledge for infrared astronomy observations. We taught the electromagnetic spectrum through interactive stations. We will outline specific steps for secondary astronomy professional development, detail student involvement in infrared telescope data analysis, provide data demonstrating the impact of the above professional development on educator understanding and classroom use, and detail future plans for additional secondary professional development and student involvement in infrared astronomy.
Funding was provided by NASA, WISE Telescope, the Spitzer Space Telescope, the American Institute of Aeronautics and Astronautics, the National Optical Astronomy Observatory, Starbucks, and Washington Space Grant Consortium.
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach relates to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
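The conventional baseline the paper relates neural networks to — one-step-ahead prediction with a linear auto-regressive (ARX) model — can be sketched in a few lines. The plant parameters below are invented for illustration; no neural network is involved, only the linear least squares identification the networks are compared against.

```python
import numpy as np

# Identify y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] from input/output data
# by ordinary least squares (the plant coefficients here are hypothetical).
rng = np.random.default_rng(0)
a1, a2, b1 = 1.5, -0.7, 0.5           # assumed true plant parameters (stable)
u = rng.normal(size=200)               # excitation input
y = np.zeros(200)
for k in range(2, 200):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1]

# Regressor matrix built from past outputs and inputs, one row per time step.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # recovers [a1, a2, b1] in this noise-free case
```

The fitted `theta` then serves directly as a one-step-ahead predictor: given the latest two outputs and the latest input, it predicts the next output.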
NASA Astrophysics Data System (ADS)
Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.
2017-06-01
In this work, the digital photoelasticity technique is used to estimate crack tip fracture parameters for different crack configurations. Conventionally, only the isochromatic data surrounding the crack tip are used for SIF estimation, but with the advent of digital photoelasticity, the pixel-wise availability of both isoclinic and isochromatic data can be exploited for SIF estimation in a novel way. A linear least squares approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation. The stress intensity factor (SIF) is extracted from the estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are estimated using the ten-step phase shifting technique. To obtain the unwrapped data, the adaptive quality guided phase unwrapping algorithm (AQGPU) is used. The mixed-mode fracture parameters, especially the SIF, are estimated for specimen configurations such as single edge notch (SEN), center crack, and straight crack ahead of an inclusion using the proposed algorithm. The experimental SIF values estimated using the proposed method are compared with analytical/finite element analysis (FEA) results and are found to be in good agreement.
MRI-based intelligence quotient (IQ) estimation with sparse learning.
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model that jointly considers both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned at different sites with different scanning parameters and protocols, which introduces large variability across datasets. To address this issue, we design a two-step procedure: 1) first identifying the likely scanning site for each testing subject and 2) then estimating the testing subject's IQ with an estimator designed specifically for that scanning site. We perform two experiments to test the performance of our method using MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true and estimated IQs. In the second experiment, we use a single-kernel SVR for IQ estimation and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. These results show the effectiveness of using imaging data for IQ prediction, which, to our knowledge, is rarely done in the field.
Sang-aroon, Wichien; Amornkitbamrung, Vittaya; Ruangpornvisuti, Vithaya
2013-12-01
In this work, peptide bond cleavages at carboxy- and amino-sides of the aspartic residue in a peptide model via direct (concerted and step-wise) and cyclic intermediate hydrolysis reaction pathways were explored computationally. The energetics, thermodynamic properties, rate constants, and equilibrium constants of all hydrolysis reactions, as well as their energy profiles were computed at the B3LYP/6-311++G(d,p) level of theory. The result indicated that peptide bond cleavage of the Asp residue occurred most preferentially via the cyclic intermediate hydrolysis pathway. In all reaction pathways, cleavage of the peptide bond at the amino-side occurred less preferentially than at the carboxy-side. The overall reaction rate constants of peptide bond cleavage of the Asp residue at the carboxy-side for the assisted system were, in increasing order: concerted < step-wise < cyclic intermediate.
Testing electroexplosive devices by programmed pulsing techniques
NASA Technical Reports Server (NTRS)
Rosenthal, L. A.; Menichelli, V. J.
1976-01-01
A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.
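The programmed pulsing cycle described above lends itself to a short sketch: deliver pulses whose energy grows by a programmable step until the device fires, then terminate. The firing threshold, energy units, and step sizes below are hypothetical, and this toy model ignores the sensitization/desensitization effects the abstract mentions.

```python
# Hypothetical sketch of the programmed-pulse test cycle for an
# electroexplosive device: capacitor discharges with step-wise increasing
# energy are delivered until firing. Threshold and units are illustrative.
def programmed_pulse_test(step_mj, threshold_mj=5.0, max_steps=1000):
    """Return (number of pulses delivered, energy of the firing pulse)."""
    energy = 0.0
    for n in range(1, max_steps + 1):
        energy += step_mj            # step-wise energy increment
        if energy >= threshold_mj:   # device fires; cycle terminates
            return n, energy
    return max_steps, energy

# A small step fires after many pulses; a large step fires after only a few.
print(programmed_pulse_test(0.5))    # (10, 5.0)
print(programmed_pulse_test(2.0))    # (3, 6.0)
```

Sweeping `step_mj` and recording the firing energy for each step size traces out the energy-firing contour the abstract describes.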
Doan, Nhat Trung; van Rooden, Sanneke; Versluis, Maarten J; Buijs, Mathijs; Webb, Andrew G; van der Grond, Jeroen; van Buchem, Mark A; Reiber, Johan H C; Milles, Julien
2015-07-01
High-field T2*-weighted MR images of the cerebral cortex are increasingly used to study tissue susceptibility changes related to aging or pathologies. This paper presents a novel automated method for the computation of quantitative cortical measures and group-wise comparison using 7 Tesla T2*-weighted magnitude and phase images. The cerebral cortex was segmented using a combination of T2*-weighted magnitude and phase information and subsequently was parcellated based on an anatomical atlas. Local gray matter (GM)/white matter (WM) contrast and cortical profiles, which depict the magnitude or phase variation across the cortex, were computed from the magnitude and phase images in each parcellated region and further used for group-wise comparison. Differences in local GM/WM contrast were assessed using linear regression analysis. Regional cortical profiles were compared both globally and locally using permutation testing. The method was applied to compare a group of 10 young volunteers with a group of 15 older subjects. Using local GM/WM contrast, significant differences were revealed in at least 13 of 17 studied regions. Highly significant differences between cortical profiles were shown in all regions. The proposed method can be a useful tool for studying cortical changes in normal aging and potentially in neurodegenerative diseases. Magn Reson Med 74:240-248, 2015. © 2014 Wiley Periodicals, Inc.
Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars
2015-01-01
In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8–2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75–150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362
NASA Astrophysics Data System (ADS)
Bhowmik, R. N.; Vijayasri, G.
2015-06-01
We have studied the current-voltage (I-V) characteristics of α-Fe1.64Ga0.36O3, a typical canted ferromagnetic semiconductor. The sample showed a transformation of the I-V curves from linear to non-linear character with increasing bias voltage. The I-V curves showed irreversible features, with a hysteresis loop and bi-stable electronic states for the up and down modes of the voltage sweep. We report positive magnetoresistance and magnetic field induced negative differential resistance as phenomena observed for the first time in a metal-doped hematite system. The magnitudes of the critical voltage at which the I-V curve shows a peak and the corresponding peak current are affected by magnetic field cycling. The shift of the peak voltage with magnetic field showed a step-wise jump between two discrete voltage levels with a minimum gap (ΔVP) of 0.345 (±0.001) V. The magnetic spin dependent electronic charge transport in this new class of magnetic semiconductor opens a wide scope for tuning large electroresistance (˜500-700%), magnetoresistance (70-135%) and charge-spin dependent conductivity under suitable control of electric and magnetic fields. The electric and magnetic field controlled charge-spin transport is interesting for applications of magnetic materials in spintronics, e.g., magnetic sensors, memory devices and digital switching.
Hu, Yanzhu; Ai, Xinbo
2016-01-01
Complex network methodology is very useful for exploring complex systems. However, the relationships among variables in a complex system are usually not clear. Therefore, inferring association networks among variables from their observed data has been a popular research topic. We propose a synthetic method, named small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), for inferring association networks from multivariate time series. The method synthesizes surrogate data, partial symbolic transfer entropy (PSTE) and Granger causality. Proper threshold selection is crucial for common correlation identification methods and is not easy for users. The proposed method not only identifies strong correlations without selecting a threshold but also provides correlation quantification, direction identification and temporal relation identification. The method can be divided into three layers, i.e. the data layer, model layer and network layer. In the model layer, the method identifies all possible pair-wise correlations. In the network layer, we introduce a filter algorithm to remove indirect weak correlations and retain strong correlations. Finally, we build a weighted adjacency matrix, the value of each entry representing the correlation level between pair-wise variables, and obtain the weighted directed association network. Two numerical simulated datasets from a linear system and a nonlinear system illustrate the steps and performance of the proposed approach. The ability of the proposed method is finally confirmed in an application. PMID:27832153
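The network layer's pruning of indirect weak correlations can be sketched with a simplified, data-processing-inequality style rule (this is an illustration in the spirit of the filter, not the SSPSTES algorithm itself): a direct edge is dropped when it is weaker than both edges of some two-step path. The score matrix below stands in for pair-wise PSTE values.

```python
import numpy as np

# Turn a matrix of pair-wise association scores into a weighted adjacency
# matrix, pruning edge i->j when some indirect path i->k->j dominates it.
def build_association_network(scores):
    w = np.array(scores, dtype=float)
    np.fill_diagonal(w, 0.0)
    adj = w.copy()
    n = w.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for k in range(n):
                if k in (i, j):
                    continue
                if w[i, j] < min(w[i, k], w[k, j]):
                    adj[i, j] = 0.0   # indirect weak correlation removed
                    break
    return adj

scores = [[0.0, 0.9, 0.4],
          [0.9, 0.0, 0.8],
          [0.4, 0.8, 0.0]]
# The weak 0<->2 link is explained by the path through variable 1.
print(build_association_network(scores))
```

For directed measures such as transfer entropy the input matrix is asymmetric, and the same loop yields a weighted directed network.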
NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred
2014-05-01
High-resolution gridded daily datasets are essential for natural resource management and the analysis of climate changes and their effects. This study aimed to create gridded datasets of daily precipitation and daily minimum and maximum temperature for the future (2020-2050). The horizontal resolution of the developed datasets is 1 x 1 km2, covering the area under control of the Republic of Cyprus (5760 km2). The study is divided into two parts. The first consists of the evaluation of the performance of different interpolation techniques for daily rainfall and temperature data (1980-2010) for the creation of the gridded datasets. Rainfall data recorded at 145 stations and temperature data from 34 stations were used. For precipitation, inverse distance weighting (IDW) performs best for local events, while a combination of step-wise geographically weighted regression and IDW proves to be the best method for large-scale events. For minimum and maximum temperature, a combination of step-wise linear multiple regression and thin plate splines is recognized as the best method. Six Regional Climate Models (RCMs) for the A1B SRES emission scenario from the EU ENSEMBLE project database were selected as sources for future climate projections. The RCMs were evaluated for their capacity to simulate Cyprus climatology for the period 1980-2010. Data for the period 2020-2050 from the three best performing RCMs were downscaled, using the change factors approach, at the location of observational stations. Daily time series were created with a stochastic rainfall and temperature generator. The RainSim V3 software (Burton et al., 2008) was used to generate spatial-temporal coherent rainfall fields. The temperature generator was developed in R and modeled temperature as a weakly stationary process with the daily mean and standard deviation conditioned on the wet and dry state of the day (Richardson, 1981).
Finally, gridded datasets depicting projected future climate conditions were created with the identified best interpolation methods. The difference between the input and simulated mean daily rainfall, averaged over all the stations, was 0.03 mm (2.2%), while the error related to the number of dry days was 2 (0.6%). For mean daily minimum temperature the error was 0.005 °C (0.04%), while for maximum temperature it was 0.01 °C (0.04%). Overall, the weather generators were found to be reliable instruments for the downscaling of precipitation and temperature. The resulting datasets indicate a decrease of the mean annual rainfall over the study area between 5 and 70 mm (1-15%) for 2020-2050, relative to 1980-2010. Average annual minimum and maximum temperature over the Republic of Cyprus are projected to increase between 1.2 and 1.5 °C. The dataset is currently used to compute agricultural production and water use indicators, as part of the AGWATER project (AEIFORIA/GEORGO/0311(BIE)/06), co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation. Burton, A., Kilsby, C.G., Fowler, H.J., Cowpertwait, P.S.P., and O'Connell, P.E.: RainSim: A spatial-temporal stochastic rainfall modelling system. Environ. Model. Software 23, 1356-1369, 2008. Richardson, C.W.: Stochastic simulation of daily precipitation, temperature, and solar radiation. Water Resour. Res. 17, 182-190, 1981.
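The IDW interpolator identified as the best method for local precipitation events is simple to sketch. The station coordinates, rainfall values, and power parameter p = 2 below are illustrative choices, not the study's actual configuration.

```python
import numpy as np

# Minimal inverse distance weighting (IDW) sketch: estimate rainfall at a
# grid point as a distance-weighted average of surrounding station values.
def idw(station_xy, station_values, target_xy, p=2.0):
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):                 # target coincides with a station
        return station_values[np.argmin(d)]
    w = 1.0 / d**p                     # closer stations get larger weights
    return np.sum(w * station_values) / np.sum(w)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # km, invented
rain_mm = np.array([12.0, 4.0, 8.0])
print(idw(stations, rain_mm, np.array([1.0, 1.0])))  # dominated by nearest station
```

In the combined methods evaluated by the study, a regression on geographic covariables supplies the large-scale pattern and an interpolator like this one spreads the station residuals across the grid.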
Maximum step length: relationships to age and knee and hip extensor capacities.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2007-07-01
Maximum Step Length may be used to identify older adults at increased risk for falls. Since leg muscle weakness is a risk factor for falls, we tested the hypotheses that maximum knee and hip extension speed, strength, and power capacities would significantly correlate with Maximum Step Length and also that the "step out and back" Maximum Step Length [Medell, J.L., Alexander, N.B., 2000. A clinical measure of maximal and rapid stepping in older women. J. Gerontol. A Biol. Sci. Med. Sci. 55, M429-M433.] would also correlate with the Maximum Step Length of its two sub-tasks: stepping "out only" and stepping "back only". These sub-tasks will be referred to as versions of Maximum Step Length. Unimpaired younger (N=11, age=24[3]years) and older (N=10, age=73[5]years) women performed the above three versions of Maximum Step Length. Knee and hip extension speed, strength, and power capacities were determined on a separate day and regressed on Maximum Step Length and age group. Version and practice effects were quantified and subjective impressions of test difficulty recorded. Hypotheses were tested using linear regressions, analysis of variance, and Fisher's exact test. Maximum Step Length explained 6-22% additional variance in knee and hip extension speed, strength, and power capacities after controlling for age group. Within- and between-block and test-retest correlation values were high (>0.9) for all test versions. Shorter Maximum Step Lengths are associated with reduced knee and hip extension speed, strength, and power capacities after controlling for age. A single out-and-back step of maximal length is a feasible, rapid screening measure that may provide insight into underlying functional impairment, regardless of age.
Quantitative measures detect sensory and motor impairments in multiple sclerosis.
Newsome, Scott D; Wang, Joseph I; Kang, Jonathan Y; Calabresi, Peter A; Zackowski, Kathleen M
2011-06-15
Sensory and motor dysfunction in multiple sclerosis (MS) is often assessed with rating scales which rely heavily on clinical judgment. Quantitative devices may be more precise than rating scales. To quantify lower extremity sensorimotor measures in individuals with MS, evaluate the extent to which they can detect functional systems impairments, and determine their relationship to global disability measures. We tested 145 MS subjects and 58 controls. Vibration thresholds were quantified using a Vibratron-II device. Strength was quantified by a hand-held dynamometer. We also recorded Expanded Disability Status Scale (EDSS) and Timed 25-Foot Walk (T25FW). t-tests and Wilcoxon rank-sum tests were used to compare group data. Spearman correlations were used to assess relationships between each measure. We also used a step-wise linear regression model to determine how much the quantitative measures explain the variance in the respective functional systems scores (FSS). EDSS scores ranged from 0-7.5, mean disease duration was 10.4 ± 9.6 years, and 66% were female. In relapsing-remitting MS, but not progressive MS, poorer vibration sensation correlated with a worse EDSS score, whereas progressive groups' ankle/hip strength changed significantly with EDSS progression. Interestingly, not only did sensorimotor measures significantly correlate with global disability measures (i.e., EDSS), but they had improved sensitivity, as they detected impairments in up to 32% of MS subjects with normal sensory and pyramidal FSS. Sensory and motor deficits in MS can be quantified using clinically accessible tools and distinguish differences among MS subtypes. We show that quantitative sensorimotor measures are more sensitive than FSS from the EDSS. These tools have the potential to be used as clinical outcome measures in practice and for future MS clinical trials of neurorehabilitative and neuroreparative interventions. Copyright © 2011 Elsevier B.V. All rights reserved.
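A step-wise regression of the kind used here can be sketched as greedy forward selection: repeatedly add the predictor that most improves R², stopping when the gain falls below a tolerance. The ΔR² entry criterion and tolerance below are one common variant, not necessarily the authors' exact procedure, and the data are synthetic.

```python
import numpy as np

def r2(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def forward_select(X, y, tol=0.01):
    """Greedy step-wise selection by R^2 improvement (illustrative)."""
    chosen, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        score, j = max((r2(X[:, chosen + [j]], y), j) for j in remaining)
        if score - best < tol:         # gain too small: stop
            break
        chosen.append(j)
        remaining.remove(j)
        best = score
    return chosen, best

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))          # 4 candidate quantitative measures
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(forward_select(X, y))            # columns 0 and 2 enter first
```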
Crawford, John R; Garthwaite, Paul H; Denham, Annie K; Chelune, Gordon J
2012-12-01
Regression equations have many useful roles in psychological assessment. Moreover, there is a large reservoir of published data that could be used to build regression equations; these equations could then be employed to test a wide variety of hypotheses concerning the functioning of individual cases. This resource is currently underused because (a) not all psychologists are aware that regression equations can be built not only from raw data but also using only basic summary data for a sample, and (b) the computations involved are tedious and prone to error. In an attempt to overcome these barriers, Crawford and Garthwaite (2007) provided methods to build and apply simple linear regression models using summary statistics as data. In the present study, we extend this work to set out the steps required to build multiple regression models from sample summary statistics and the further steps required to compute the associated statistics for drawing inferences concerning an individual case. We also develop, describe, and make available a computer program that implements these methods. Although there are caveats associated with the use of the methods, these need to be balanced against pragmatic considerations and against the alternative of either entirely ignoring a pertinent data set or using it informally to provide a clinical "guesstimate." Upgraded versions of earlier programs for regression in the single case are also provided; these add the point and interval estimates of effect size developed in the present article.
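The core computation the paper builds on — a multiple regression equation from summary statistics alone — follows from the standardized coefficients β = R_xx⁻¹ r_xy, rescaled to raw-score slopes via the SDs. The means, SDs, and correlations below are invented for illustration.

```python
import numpy as np

# Build a two-predictor regression equation from summary data only:
# predictor means/SDs, criterion mean/SD, predictor intercorrelations (Rxx),
# and predictor-criterion correlations (rxy). No raw data required.
def regression_from_summary(means_x, sds_x, mean_y, sd_y, Rxx, rxy):
    Rxx, rxy = np.asarray(Rxx, float), np.asarray(rxy, float)
    beta_std = np.linalg.solve(Rxx, rxy)          # standardized coefficients
    b = beta_std * sd_y / np.asarray(sds_x)       # raw-score slopes
    intercept = mean_y - b @ np.asarray(means_x)
    return intercept, b

intercept, b = regression_from_summary(
    means_x=[50.0, 10.0], sds_x=[10.0, 3.0], mean_y=100.0, sd_y=15.0,
    Rxx=[[1.0, 0.3], [0.3, 1.0]], rxy=[0.5, 0.4])
print(intercept, b)
```

A sanity check on any such equation: evaluated at the predictor means, it must return the criterion mean.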
Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang
2016-11-01
Longitudinal neuroimaging analysis of the dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart the brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with the average error less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods, in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance through sacrificing part of the model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes. Direct matrix inversion is thereby avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach in which the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
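A minimal least angle regression (LAR) sketch, following Efron et al.'s geometric description rather than this paper's recursive formulation: at each step the fit moves along the equiangular direction of the active predictors until another predictor becomes equally correlated with the residual. It illustrates the defining property that after p steps the LAR fit coincides with the ordinary least squares fit.

```python
import numpy as np

def lar_path(X, y, n_steps):
    """Basic LAR: returns the fitted values and the variable entry order."""
    n, p = X.shape
    mu = np.zeros(n)
    active = []
    for _ in range(n_steps):
        c = X.T @ (y - mu)                      # current correlations
        C = np.max(np.abs(c))
        if not active:
            active.append(int(np.argmax(np.abs(c))))
        s = np.sign(c[active])
        Xa = X[:, active] * s                   # sign-adjusted active columns
        Ginv1 = np.linalg.solve(Xa.T @ Xa, np.ones(len(active)))
        A = 1.0 / np.sqrt(Ginv1.sum())
        u = Xa @ (A * Ginv1)                    # unit equiangular direction
        a = X.T @ u
        candidates = []
        for j in set(range(p)) - set(active):   # step length to next joiner
            for g in ((C - c[j]) / (A - a[j]), (C + c[j]) / (A + a[j])):
                if g > 1e-10:
                    candidates.append((g, j))
        if candidates:
            gamma, j_new = min(candidates)
            active.append(j_new)
        else:
            gamma = C / A                       # final step: go to the OLS fit
        mu = mu + gamma * u
    return mu, active

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
y = X @ np.array([3.0, 0.0, -1.5]) + rng.normal(scale=0.1, size=50)
mu, order = lar_path(X, y, n_steps=3)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(order, np.allclose(mu, X @ beta_ols))     # full-path LAR reaches OLS
```

The paper's contribution is to update the correlations and directions above recursively across steps so that no matrix system is solved from scratch at each iteration; this sketch recomputes them for clarity.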
Serotonin transporter gene and childhood trauma--a G × E effect on anxiety sensitivity.
Klauke, Benedikt; Deckert, Jürgen; Reif, Andreas; Pauli, Paul; Zwanzger, Peter; Baumann, Christian; Arolt, Volker; Glöckner-Rist, Angelika; Domschke, Katharina
2011-12-21
Genetic factors and environmental factors are assumed to interactively influence the pathogenesis of anxiety disorders. Thus, a gene-environment interaction (G × E) study was conducted with respect to anxiety sensitivity (AS) as a promising intermediate phenotype of anxiety disorders. Healthy subjects (N = 363) were assessed for AS, childhood maltreatment (Childhood Trauma Questionnaire), and genotyped for functional serotonin transporter gene variants (5-HTTLPR/5-HTT rs25531). The influence of genetic and environmental variables on AS and its subdimensions was determined by a step-wise hierarchical regression and a multiple indicator multiple cause (MIMIC) model. A significant G × E effect of the more active 5-HTT genotypes and childhood maltreatment on AS was observed. Furthermore, genotype (LL)-childhood trauma interaction particularly influenced somatic AS subdimensions, whereas cognitive subdimensions were affected by childhood maltreatment only. Results indicate a G × E effect of the more active 5-HTT genotypes and childhood maltreatment on AS, with particular impact on its somatic subcomponent. © 2011 Wiley Periodicals, Inc.
Physical neglect in childhood as a predictor of violent behavior in adolescent males.
McGuigan, William M; Luchette, Jack A; Atterholt, Roxanne
2018-05-01
Research has established that childhood maltreatment experiences are associated with negative outcomes in adolescence, including violent and antisocial behavior (Chapple et al., 2005). Neglect is the most prevalent form of childhood maltreatment (U.S. DHHS, 2012), the consequences of which require further investigation. This study used archival data to explore whether childhood physical neglect increased the likelihood of violent behavior in a random sample of 85 males between the ages of 12 and 19 held at a long-term detention facility in the Northeastern United States. An anonymous survey gathered background information and data regarding childhood physical neglect and violent behavior in adolescence. A step-wise hierarchical regression model controlled for the effects of age, self-esteem, personal competency, depression, chemical drug use, family violence and a childhood history of physical abuse. Results showed that a history of childhood physical neglect was the strongest predictor of violent adolescent behavior in this sample when the data was tested for all moderator and mediator effects. Copyright © 2018 Elsevier Ltd. All rights reserved.
Le, Thao N; Kato, Tomoko
2006-03-01
The purpose of this study was to investigate the role of age, gender, peer, family, and culture in adolescent risky sexual behavior for Cambodian and Laotian (Lao)/Mien youth. We obtained cross-sectional, in-home interview data including measures of individualism, collectivism, acculturation, risky sexual behavior, peer delinquency, parent engagement, and parent discipline from a sample of mostly second-generation Cambodian (n = 112) and Lao/Mien (n = 67) adolescents. Data were analyzed using step-wise, hierarchical multiple regressions. Peer delinquency and age (older) were significant predictors of risky sexual behavior in both groups. Parent discipline also significantly predicted risky sexual behavior, but only for Lao/Mien adolescents. Vertical and horizontal individualism were associated positively with risky sexual behavior for Cambodian youth whereas collectivism (horizontal) was associated negatively with risky sexual behavior for Lao/Mien youth. Acculturation was nonsignificant in both groups. In addition to age, parents, and peer groups, the findings suggest that culture also matters in risky sexual behavior, particularly for Cambodian and Laotian youth.
Yang, Jinhua; Liu, Yanhui; Chen, Yan; Pan, Xiaoyan
2014-08-01
The purposes of this study were (1) to examine the level of structural empowerment, organizational commitment and job satisfaction in Chinese nurses; and (2) to investigate the relationships among the three variables. A high turnover rate was identified in Chinese staff nurses, and it was highly correlated with lower job satisfaction. Structural empowerment and organizational commitment have been positively related to job satisfaction in western countries. A cross-sectional survey design was employed. Data analysis included descriptive statistics and multiple step-wise regression to test the hypothesized model. Moderate levels of the three variables were found in this study. Both empowerment and commitment were found to be significantly associated with job satisfaction (r=0.722, r=0.693, p<0.01, respectively). The variables of work objectives, resources, support and informal power, normative and ideal commitment were significant predictors of job satisfaction. Support for an expanded model of Kanter's structural empowerment was achieved in this study. Copyright © 2014 Elsevier Inc. All rights reserved.
Infrared Astronomy Professional Development for K-12 Educators: WISE Telescope
NASA Astrophysics Data System (ADS)
Borders, Kareen; Mendez, B. M.
2010-01-01
K-12 educators need effective and relevant astronomy professional development. WISE Telescope (Wide-Field Infrared Survey Explorer) and Spitzer Space Telescope Education programs provided an immersive teacher professional development workshop at Arecibo Observatory in Puerto Rico during the summer of 2009. As many common misconceptions involve scale and distance, teachers worked with Moon/Earth scale, solar system scale, and distance of objects in the universe. Teachers built and used basic telescopes, learned about the history of telescopes, explored ground- and satellite-based telescopes, and explored and worked on models of the WISE telescope. An in-depth explanation of the WISE and Spitzer telescopes gave participants background knowledge for infrared astronomy observations. We taught the electromagnetic spectrum through interactive stations. The stations included an overview via lecture and PowerPoint, the use of ultraviolet beads to determine ultraviolet exposure, the study of WISE lenticulars and diagramming of infrared data, listening to light by using speakers hooked up to photoreceptor cells, looking at visible light through diffraction glasses and diagramming the data, protocols for using astronomy-based research in the classroom, and infrared thermometers to compare environmental conditions around the observatory. An overview of LIDAR physics was followed by a simulated LIDAR mapping of the topography of Mars. We will outline specific steps for K-12 infrared astronomy professional development, provide data demonstrating the impact of the above professional development on educator understanding and classroom use, and detail future plans for additional K-12 professional development. Funding was provided by WISE Telescope, Spitzer Space Telescope, Starbucks, Arecibo Observatory, the American Institute of Aeronautics and Astronautics, and the Washington Space Grant Consortium.
Hansen, Dominique; Jacobs, Nele; Thijs, Herbert; Dendale, Paul; Claes, Neree
2016-09-01
Healthcare professionals with limited access to ergospirometry remain in need of valid and simple submaximal exercise tests to predict maximal oxygen uptake (VO2max). Despite previous validation studies concerning fixed-rate step tests, accurate equations for the estimation of VO2max remain to be formulated from a large sample of healthy adults aged 18-75 years (n > 100). The aim of this study was to develop a valid equation to estimate VO2max from a fixed-rate step test in a larger sample of healthy adults. A maximal ergospirometry test, with assessment of cardiopulmonary parameters and VO2max, and a 5-min fixed-rate single-stage step test were executed in 112 healthy adults (age 18-75 years). During the step test and subsequent recovery, heart rate was monitored continuously. By linear regression analysis, an equation to predict VO2max from the step test was formulated. This equation was assessed for level of agreement by displaying Bland-Altman plots and calculation of intraclass correlations with measured VO2max. Validity was further assessed by employing a Jackknife procedure. The linear regression analysis generated the following equation to predict VO2max (L min⁻¹) from the step test: 0.054(BMI) + 0.612(gender) + 3.359(body height in m) + 0.019(fitness index) - 0.012(HRmax) - 0.011(age) - 3.475. This equation explained 78% of the variance in measured VO2max (F = 66.15, P < 0.001). The level of agreement and intraclass correlation between measured and predicted VO2max was high (ICC = 0.94, P < 0.001). From this study, a valid fixed-rate single-stage step test equation has been developed to estimate VO2max in healthy adults. This tool could be employed by healthcare professionals with limited access to ergospirometry. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
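The published regression equation above can be evaluated directly. A minimal Python sketch follows; the gender coding (assumed here: 1 = male, 0 = female), the fitness-index value, and the example inputs are illustrative assumptions, not details taken from the study.

```python
def predict_vo2max(bmi, gender, height_m, fitness_index, hr_max, age):
    """Evaluate the step-test regression equation quoted in the abstract.

    Gender coding (assumed: 1 = male, 0 = female) and the exact definition
    of the fitness index are assumptions. Returns estimated VO2max in L/min.
    """
    return (0.054 * bmi + 0.612 * gender + 3.359 * height_m
            + 0.019 * fitness_index - 0.012 * hr_max - 0.011 * age
            - 3.475)

# hypothetical subject: BMI 24, male, 1.80 m, fitness index 60, HRmax 185, age 35
print(round(predict_vo2max(24.0, 1, 1.80, 60.0, 185, 35), 3))
```

Plugging in values this way makes the relative weight of each predictor easy to inspect (e.g., each year of age lowers the estimate by 0.011 L min⁻¹).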
ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
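The least-squares starting point of the extension described above, the LAR modification of Efron et al. that traces the full piecewise-linear LASSO path, can be illustrated with scikit-learn's `lars_path`; the toy data here are hypothetical, and this sketches only the least-squares case, not the Cox/elastic-net extension.

```python
import numpy as np
from sklearn.linear_model import lars_path

# toy regression data with a sparse true coefficient vector
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
beta = np.array([3.0, -2.0, 0.0, 0.0, 1.5])
y = X @ beta + 0.1 * rng.standard_normal(50)

# method="lasso" applies the LAR modification that yields the full
# LASSO solution path; coefficients are piecewise linear in alpha
alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs[:, 0])   # the path starts from the all-zero solution
print(coefs[:, -1])  # and ends near the unpenalized least-squares fit
```

Each column of `coefs` is the solution at one knot of the path; the columns in between can be recovered exactly by linear interpolation, which is the piecewise-linearity property the abstract refers to.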
Raman exfoliative cytology for oral precancer diagnosis
NASA Astrophysics Data System (ADS)
Sahu, Aditi; Gera, Poonam; Pai, Venkatesh; Dubey, Abhishek; Tyagi, Gunjan; Waghmare, Mandavi; Pagare, Sandeep; Mahimkar, Manoj; Murali Krishna, C.
2017-11-01
Oral premalignant lesions (OPLs) such as leukoplakia, erythroplakia, and oral submucous fibrosis often precede oral cancer. Screening and management of these premalignant conditions can improve prognosis. Raman spectroscopy has previously demonstrated potential in the diagnosis of oral premalignant conditions (in vivo), detected viral infection, and identified cancer in both oral and cervical exfoliated cells (ex vivo). The potential of Raman exfoliative cytology (REC) in identifying premalignant conditions was investigated. Oral exfoliated samples were collected from healthy volunteers (n = 20), healthy volunteers with tobacco habits (n = 20), and subjects with oral premalignant conditions (n = 27, OPL) using a Cytobrush. Spectra were acquired using a Raman microprobe. Spectral acquisition parameters were: λex: 785 nm, laser power: 40 mW, acquisition time: 15 s, and average: 3. After spectral acquisition, the cell pellet was subjected to Pap staining. Multivariate analysis was carried out using principal component analysis and principal component-linear discriminant analysis, using both spectra-wise and patient-wise approaches in three- and two-group models. OPLs could be identified with ~77% (spectra-wise) and ~70% (patient-wise) sensitivity in the three-group model, and with 86% (spectra-wise) and 83% (patient-wise) sensitivity in the two-group model. Use of histopathologically confirmed premalignant cases and better sampling devices may help in the development of improved standard models and also enhance the sensitivity of the method. Future longitudinal studies can help validate the potential of REC in screening and monitoring high-risk populations and in prognosis prediction of premalignant lesions.
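The principal component analysis followed by linear discriminant analysis used above is a standard pipeline for spectral classification and can be sketched with scikit-learn; the "spectra" below are synthetic random data standing in for Raman measurements, so the cross-validated accuracy has no relation to the sensitivities reported in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# synthetic "spectra": 60 samples x 300 wavenumber bins, two classes
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, (30, 300))
opl = rng.normal(0.3, 1.0, (30, 300))   # small per-bin class shift
X = np.vstack([healthy, opl])
y = np.array([0] * 30 + [1] * 30)

# PCA compresses the high-dimensional spectra; LDA then discriminates
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```

Fitting PCA and LDA together inside a pipeline, as here, keeps the dimensionality reduction inside each cross-validation fold, which avoids optimistic bias.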
Layer moduli of Nebraska pavements for the new Mechanistic-Empirical Pavement Design Guide (MEPDG).
DOT National Transportation Integrated Search
2010-12-01
As a step-wise implementation effort of the Mechanistic-Empirical Pavement Design Guide (MEPDG) for the design and analysis of Nebraska flexible pavement systems, this research developed a database of layer moduli: dynamic modulus, creep compl...
EPA MODELING TOOLS FOR CAPTURE ZONE DELINEATION
The EPA Office of Research and Development supports a step-wise modeling approach for design of wellhead protection areas for water supply wells. A web-based WellHEDSS (wellhead decision support system) is under development for determining when simple capture zones (e.g., centri...
Hoffman, Haydn; Lee, Sunghoon I; Garst, Jordan H; Lu, Derek S; Li, Charles H; Nagasawa, Daniel T; Ghalehsari, Nima; Jahanforouz, Nima; Razaghy, Mehrdad; Espinal, Marie; Ghavamrezaii, Amir; Paak, Brian H; Wu, Irene; Sarrafzadeh, Majid; Lu, Daniel C
2015-09-01
This study introduces the use of multivariate linear regression (MLR) and support vector regression (SVR) models to predict postoperative outcomes in a cohort of patients who underwent surgery for cervical spondylotic myelopathy (CSM). Currently, predicting outcomes after surgery for CSM remains a challenge. We recruited patients who had a diagnosis of CSM and required decompressive surgery with or without fusion. Fine motor function was tested preoperatively and postoperatively with a handgrip-based tracking device that has been previously validated, yielding mean absolute accuracy (MAA) results for two tracking tasks (sinusoidal and step). All patients completed Oswestry disability index (ODI) and modified Japanese Orthopaedic Association questionnaires preoperatively and postoperatively. Preoperative data were utilized in MLR and SVR models to predict postoperative ODI. Predictions were compared to the actual ODI scores with the coefficient of determination (R²) and mean absolute difference (MAD). Twenty patients met the inclusion criteria and completed follow-up at least 3 months after surgery. With the MLR model, a combination of the preoperative ODI score, preoperative MAA (step function), and symptom duration yielded the best prediction of postoperative ODI (R² = 0.452; MAD = 0.0887; p = 1.17 × 10⁻³). With the SVR model, a combination of preoperative ODI score, preoperative MAA (sinusoidal function), and symptom duration yielded the best prediction of postoperative ODI (R² = 0.932; MAD = 0.0283; p = 5.73 × 10⁻¹²). The SVR model was more accurate than the MLR model. The SVR can be used preoperatively in risk/benefit analysis and the decision to operate. Copyright © 2015 Elsevier Ltd. All rights reserved.
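The MLR-versus-SVR comparison described above can be sketched as follows. The 20 × 3 feature matrix is a synthetic stand-in for the study's preoperative predictors, and the in-sample R² and MAD printed here have no relation to the reported values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_absolute_error

# synthetic stand-in: (preop ODI, preop MAA, symptom duration) -> postop ODI
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (20, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.02 * rng.standard_normal(20)

# fit each model and score its predictions against the actual outcomes
results = {}
for model in (LinearRegression(), SVR(kernel="rbf", C=10.0, epsilon=0.01)):
    pred = model.fit(X, y).predict(X)
    results[type(model).__name__] = (r2_score(y, pred),
                                     mean_absolute_error(y, pred))
print(results)
```

With real data one would score on held-out patients rather than in-sample, but the scoring pattern (R² plus mean absolute error per model) is the same.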
Wolfson, Daniel; Santa, John; Slass, Lorie
2014-07-01
Wise management of health care resources is a core tenet of medical professionalism. To support physicians in fulfilling this responsibility and to engage patients in discussions about unnecessary care, tests, and procedures, in April 2012 the American Board of Internal Medicine Foundation, Consumer Reports, and nine medical specialty societies launched the Choosing Wisely campaign. The authors describe the rationale for and history of the campaign, its structure and approach in terms of engaging both physicians and patients, lessons learned, and future steps. In developing the Choosing Wisely campaign, the specialty societies each developed lists of five tests and procedures that physicians and patients should question. Over 50 specialty societies have developed more than 250 evidence-based recommendations, some of which Consumer Reports has "translated" into consumer-friendly language and helped disseminate to tens of millions of consumers. A number of delivery systems, specialty societies, state medical societies, and regional health collaboratives are also advancing the campaign's recommendations. The campaign's success lies in its unique focus on professional values and patient-physician conversations to reduce unnecessary care. Measurement and evaluation of the campaign's impact on attitudinal and behavioral change is needed.
Estimating the global incidence of traumatic spinal cord injury.
Fitzharris, M; Cripps, R A; Lee, B B
2014-02-01
Population modelling--forecasting. To estimate the global incidence of traumatic spinal cord injury (TSCI). An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted using a known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum). World Health Organization's regional results are provided. Understanding the incidence of TSCI is important for health service planning and for the determination of injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, the estimation of TSCI obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national level TSCI data. A step-wise approach from the collation of conventional epidemiological data through to population modelling is suggested.
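The single-predictor regression described above (known TSCI cases regressed on population at risk) can be sketched as follows. The five study-level observations are hypothetical numbers chosen to be roughly consistent with the reported global rate of about 23 cases per million, not data from the 31 published studies.

```python
import numpy as np

# hypothetical study-level data: population at risk vs. observed TSCI cases
pop = np.array([1.2e6, 3.5e6, 0.8e6, 5.0e6, 2.2e6])
cases = np.array([28, 79, 20, 113, 52])

# least-squares fit of cases on population (the single independent variable)
slope, intercept = np.polyfit(pop, cases, 1)

# the slope is an incidence estimate: cases per person, rescaled per million
rate_per_million = slope * 1e6
print(round(rate_per_million, 1))
```

The slope of such a fit is directly interpretable as an incidence rate, which is what makes this simple specification useful for extrapolating to regions without their own epidemiological studies.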
Measurements of the talus in the assessment of population affinity.
Bidmos, Mubarak A; Dayal, Manisha R; Adegboye, Oyelola A
2018-06-01
As part of their routine work, forensic anthropologists are expected to report population affinity as part of the biological profile of an individual. The skull is the most widely used bone for the estimation of population affinity but it is not always present in a forensic case. Thus, other bones that preserve well have been shown to give a good indication of either the sex or population affinity of an individual. In this study, the potential of measurements of the talus was investigated for the purpose of estimating population affinity in South Africans. Nine measurements from two hundred and twenty tali of South African Africans (SAA) and South African Whites (SAW) from the Raymond A. Dart Collection of Human Skeletons were used. Direct and step-wise discriminant function and logistic regression analyses were carried out using SPSS and SAS. Talar length was the best single variable for discriminating between these two groups for males while in females the head height was the best single predictor. Average accuracies for correct population affinity classification using logistic regression analysis were higher than those obtained from discriminant function analysis. This study was the first of its type to employ discriminant function analyses and logistic regression analyses to estimate the population affinity of an individual from the talus. Thus these equations can now be used by South African anthropologists when estimating the population affinity of dismembered or damaged or incomplete skeletal remains of SAA and SAW. Copyright © 2018 Elsevier B.V. All rights reserved.
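The two classification approaches compared above, discriminant function analysis and logistic regression, can be sketched with scikit-learn. The nine "measurements" below are synthetic stand-ins for the talar dimensions, and the cross-validated accuracies have no relation to the study's classification rates.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in: nine talar measurements (mm) for two population groups
rng = np.random.default_rng(3)
g1 = rng.normal(50.0, 3.0, (110, 9))
g2 = rng.normal(52.0, 3.0, (110, 9))
X = np.vstack([g1, g2])
y = np.array([0] * 110 + [1] * 110)

# compare cross-validated classification accuracy of the two methods
accs = {}
for clf in (LinearDiscriminantAnalysis(), LogisticRegression(max_iter=1000)):
    accs[type(clf).__name__] = cross_val_score(clf, X, y, cv=5).mean()
print(accs)
```

On data like these, with roughly equal group covariances, the two methods typically give similar accuracy; logistic regression's edge in the study presumably reflects departures from the discriminant analysis assumptions.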
A statistical framework for applying RNA profiling to chemical hazard detection.
Kostich, Mitchell S
2017-12-01
Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.
Harbinson, Jeremy
2015-01-01
Plants are known to be able to acclimate their photosynthesis to the level of irradiance. Here, we present the analysis of natural genetic variation for photosynthetic light use efficiency (ΦPSII) in response to five light environments among 12 genetically diverse Arabidopsis (Arabidopsis thaliana) accessions. We measured the acclimation of ΦPSII to constant growth irradiances of four different levels (100, 200, 400, and 600 µmol m⁻² s⁻¹) by imaging chlorophyll fluorescence after 24 d of growth and compared these results with acclimation of ΦPSII to a step-wise change in irradiance where the growth irradiance was increased from 100 to 600 µmol m⁻² s⁻¹ after 24 d of growth. Genotypic variation for ΦPSII is shown by calculating heritability for the short-term ΦPSII response to different irradiance levels as well as for the relation of ΦPSII measured at light saturation (a measure of photosynthetic capacity) to growth irradiance level and for the kinetics of the response to a step-wise increase in irradiance from 100 to 600 µmol m⁻² s⁻¹. A genome-wide association study for ΦPSII measured 1 h after a step-wise increase in irradiance identified several new candidate genes controlling this trait. In conclusion, the different photosynthetic responses to a changing light environment displayed by different Arabidopsis accessions are due to genetic differences, and we have identified candidate genes for the photosynthetic response to an irradiance change. The genetic variation for photosynthetic acclimation to irradiance found in this study will allow future identification and analysis of the causal genes for the regulation of ΦPSII in plants. PMID:25670817
Intrinsic increase in lymphangion muscle contractility in response to elevated afterload
Scallan, Joshua P.; Wolpers, John H.; Muthuchamy, Mariappan; Gashev, Anatoliy A.; Zawieja, David C.
2012-01-01
Collecting lymphatic vessels share functional and biochemical characteristics with cardiac muscle; thus, we hypothesized that the lymphatic vessel pump would exhibit behavior analogous to homeometric regulation of the cardiac pump in its adaptation to elevated afterload, i.e., an increase in contractility. Single lymphangions containing two valves were isolated from the rat mesenteric microcirculation, cannulated, and pressurized for in vitro study. Pressures at either end of the lymphangion [input pressure (Pin), preload; output pressure (Pout), afterload] were set by a servo controller. Intralymphangion pressure (PL) was measured using a servo-null micropipette while internal diameter and valve positions were monitored using video methods. The responses to step- and ramp-wise increases in Pout (at low, constant Pin) were determined. PL and diameter data recorded during single contraction cycles were used to generate pressure-volume (P-V) relationships for the subsequent analysis of lymphangion pump behavior. Ramp-wise Pout elevation led to progressive vessel constriction, a rise in end-systolic diameter, and an increase in contraction frequency. Step-wise Pout elevation produced initial vessel distention followed by time-dependent declines in end-systolic and end-diastolic diameters. Significantly, a 30% leftward shift in the end-systolic P-V relationship accompanied an 84% increase in dP/dt after a step increase in Pout, consistent with an increase in contractility. Calculations of stroke work from the P-V loop area revealed that robust pumps produced net positive work to expel fluid throughout the entire afterload range, whereas weaker pumps exhibited progressively more negative work as gradual afterload elevation led to pump failure. 
We conclude that lymphatic muscle adapts to output pressure elevation with an intrinsic increase in contractility and that this compensatory mechanism facilitates the maintenance of lymph pump output in the face of edemagenic and/or gravitational loads. PMID:22886407
A step-wise approach for analysis of the mouse embryonic heart using 17.6 Tesla MRI
Gabbay-Benziv, Rinat; Reece, E. Albert; Wang, Fang; Bar-Shir, Amnon; Harman, Chris; Turan, Ozhan M.; Yang, Peixin; Turan, Sifa
2018-01-01
Background: The mouse embryo is ideal for studying human cardiac development. However, laboratory discoveries do not easily translate into clinical findings, partially because of histological diagnostic techniques that induce artifacts and lack standardization. Aim: To present a step-wise approach using 17.6 T MRI for evaluation of the mouse embryonic heart and accurate identification of congenital heart defects. Subjects: Embryonic day 17.5 embryos from low-risk (non-diabetic) and high-risk (diabetic) model dams. Study design: Embryos were imaged using 17.6 Tesla MRI. Three-dimensional volumes were analyzed using ImageJ software. Outcome measures: Embryonic hearts were evaluated utilizing anatomic landmarks to locate the four-chamber view, the left and right outflow tracts, and the arrangement of the great arteries. Inter- and intra-observer agreement were calculated using kappa scores by comparing two researchers' evaluations independently analyzing all hearts, blinded to the model, on three different, timed occasions. Each evaluated 16 imaging volumes of 16 embryos: 4 embryos from normal dams and 12 embryos from diabetic dams. Results: Inter-observer agreement and reproducibility were 0.779 (95% CI 0.653-0.905) and 0.763 (95% CI 0.605-0.921), respectively. Embryonic hearts were structurally normal in 4/4 and 7/12 embryos from normal and diabetic dams, respectively. Five embryos from diabetic dams had defects: ventricular septal defects (n = 2), transposition of the great arteries (n = 2), and Tetralogy of Fallot (n = 1). Both researchers identified all cardiac lesions. Conclusion: A step-wise approach for analysis of MRI-derived 3D imaging provides reproducible, detailed cardiac evaluation of normal and abnormal mouse embryonic hearts. This approach can accurately reveal cardiac structure and, thus, increases the yield of animal models in congenital heart defect research. PMID:27569369
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Delbo, Marco; Ali-Lagoa, Victor; Carry, Benoit
2014-11-01
Convex shape models and spin vectors of asteroids are now routinely derived from their disk-integrated lightcurves by the lightcurve inversion method of Kaasalainen et al. (2001, Icarus 153, 37). These shape models can be then used in combination with thermal infrared data and a thermophysical model to derive other physical parameters - size, albedo, macroscopic roughness and thermal inertia of the surface. In this classical two-step approach, the shape and spin parameters are kept fixed during the thermophysical modeling when the emitted thermal flux is computed from the surface temperature, which is computed by solving a 1-D heat diffusion equation in sub-surface layers. A novel method of simultaneous inversion of optical and infrared data was presented by Durech et al. (2012, LPI Contribution No. 1667, id.6118). The new algorithm uses the same convex shape representation as the lightcurve inversion but optimizes all relevant physical parameters simultaneously (including the shape, size, rotation vector, thermal inertia, albedo, surface roughness, etc.), which leads to a better fit to the thermal data and a reliable estimation of model uncertainties. We applied this method to selected asteroids using their optical lightcurves from archives and thermal infrared data observed by the Wide-field Infrared Survey Explorer (WISE) satellite. We will (i) show several examples of how well our model fits both optical and infrared data, (ii) discuss the uncertainty of derived parameters (namely the thermal inertia), (iii) compare results obtained with the two-step approach with those obtained by our method, (iv) discuss the advantages of this simultaneous approach with respect to the classical two-step approach, and (v) advertise the possibility to use this approach to tens of thousands asteroids for which enough WISE and optical data exist.
Placebo and Nocebo Effects: The Advantage of Measuring Expectations and Psychological Factors
Corsi, Nicole; Colloca, Luana
2017-01-01
Several studies have explored the predictability of placebo and nocebo individual responses by investigating personality factors and expectations of pain decreases and increases. Psychological factors such as optimism, suggestibility, empathy and neuroticism have been linked to placebo effects, while pessimism, anxiety and catastrophizing have been associated with nocebo effects. We aimed to investigate the interplay between psychological factors, expectations of low and high pain, and placebo hypoalgesia and nocebo hyperalgesia. We studied 46 healthy participants using a well-validated conditioning paradigm with contact heat thermal stimulations. Visual cues were presented to alert participants about the level of intensity of an upcoming thermal pain. We delivered high, medium and low levels of pain associated with red, yellow and green cues, respectively, during the conditioning phase. During the testing phase, the level of painful stimulations was surreptitiously set at the medium control level with all three cues to measure placebo and nocebo effects. We found both robust placebo hypoalgesic and nocebo hyperalgesic responses that were highly correlated with expectancy of low and high pain. Simple linear regression analyses showed that placebo responses were negatively correlated with anxiety severity and different aspects of fear of pain (e.g., medical pain, severe pain). Nocebo responses were positively correlated with anxiety sensitivity and physiological suggestibility, with a trend toward catastrophizing. Step-wise regression analyses indicated that an aggregate score of motivation (value/utility and pressure/tense subscales) and suggestibility (physiological reactivity and persuadability subscales) accounted for 51% of the variance in placebo responsiveness. When considered together, anxiety severity, NEO openness-extraversion and depression accounted for 49.1% of the variance of the nocebo responses.
Psychological factors per se did not influence expectations. In fact, mediation analyses including expectations, personality factors and placebo and nocebo responses, revealed that expectations were not influenced by personality factors. These findings highlight the potential advantage of considering batteries of personality factors and measurements of expectation in predicting placebo and nocebo effects related to experimental acute pain. PMID:28321201
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magome, T; Haga, A; Igaki, H
Purpose: Although many outcome prediction models based on dose-volume information have been proposed, it is well known that the prognosis may also be affected by multiple clinical factors. The purpose of this study is to predict the survival time after radiotherapy for high-grade glioma patients based on features including clinical and dose-volume histogram (DVH) information. Methods: A total of 35 patients with high-grade glioma (oligodendroglioma: 2, anaplastic astrocytoma: 3, glioblastoma: 30) were selected in this study. All patients were treated with a prescribed dose of 30-80 Gy after surgical resection or biopsy from 2006 to 2013 at The University of Tokyo Hospital. All cases were randomly separated into a training dataset (30 cases) and a test dataset (5 cases). The survival time after radiotherapy was predicted based on a multiple linear regression analysis and an artificial neural network (ANN) by using 204 candidate features. The candidate features included 12 clinical features (tumor location, extent of surgical resection, treatment duration of radiotherapy, etc.) and 192 DVH features (maximum dose, minimum dose, D95, V60, etc.). The effective features for the prediction were selected according to a step-wise method by using the 30 training cases. The prediction accuracy was evaluated by a coefficient of determination (R²) between the predicted and actual survival time for the training and test datasets. Results: In the multiple regression analysis, the value of R² between the predicted and actual survival time was 0.460 for the training dataset and 0.375 for the test dataset. On the other hand, in the ANN analysis, the value of R² was 0.806 for the training dataset and 0.811 for the test dataset. Conclusion: Although a large number of patients would be needed for more accurate and robust prediction, our preliminary results showed the potential to predict the outcome in patients with high-grade glioma.
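The two modelling approaches compared above can be sketched with scikit-learn. The feature matrix below is a synthetic stand-in (six features rather than the 204 candidates, with a mildly nonlinear target), the 30/5 split mirrors the study design, and the printed R² values have no relation to those reported.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# 35 synthetic cases, split into 30 training / 5 test as in the study
rng = np.random.default_rng(4)
X = rng.standard_normal((35, 6))
y = 2.0 * X[:, 0] - X[:, 1] + np.sin(X[:, 2]) + 0.3 * rng.standard_normal(35)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=5, random_state=0)

# compare multiple linear regression with a small neural network,
# scoring R² on both the training and test datasets
r2 = {}
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                           random_state=0)):
    model.fit(X_tr, y_tr)
    r2[type(model).__name__] = (r2_score(y_tr, model.predict(X_tr)),
                                r2_score(y_te, model.predict(X_te)))
print(r2)
```

Reporting train and test R² side by side, as the abstract does, is what reveals whether a flexible model such as an ANN is genuinely generalising or merely overfitting the 30 training cases.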
This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and a Grant-in-Aid for JSPS Fellows.
Kluger, Benzi M.; Brown, R. Preston; Aerts, Shanae; Schenkman, Margaret
2014-01-01
Background: Parkinson disease (PD) may lead to functional limitations through both motor and non-motor symptoms. While patients with advanced disease have well-documented and profound functional limitations, less is known about the determinants of function in early to mid-stage disease, where interventions may be more likely to benefit and preserve function. Objective: The objective of the current study was to identify motor, cognitive and gait determinants of physical functional performance in patients with early to mid-stage PD. Design: Secondary analysis of cross-sectional baseline data from a randomized clinical trial of exercise. Setting: Tertiary academic medical center. Participants: 121 patients with early to mid-stage PD. Methods: Our functional performance outcomes included: 1) the Continuous Scale Functional Performance Test (CS-PFP; primary outcome); 2) the timed up and go (TUG) test; and 3) Section 2 (Activities of Daily Living) of the Unified Parkinson's Disease Rating Scale (UPDRS). Explanatory variables included measures of disease severity, motor function, cognitive function, balance and gait. Step-wise linear regression models were used to determine correlations between explanatory variables and outcome measures. Results: In our regression models the CS-PFP significantly correlated with walking endurance (six minute walk; r² = 0.12, p < .0001), turning ability (360 degree turn; r² = .03, p = .002), attention (brief test of attention; r² = .01, p = .03), overall cognitive status (Mini-Mental State Examination; r² = .01, p = .04) and bradykinesia (timed tapping; r² = .02, p = .02). The TUG significantly correlated with walking speed (5 meter walk; r² = 0.33, p < .0001), stride length (r² = 0.25, p < .0001), turning ability (360 degree turn; r² = .05, p = .0003) and attention (r² = .016, p = .03). Section 2 of the UPDRS was significantly correlated with endurance (r² = .09, p < .0001), turning ability (r² = .03, p = .001) and attention (r² = .01, p = .03).
Conclusions: Gait, motor and cognitive function all contribute to objectively measured global functional ability in mild to moderate PD. Subjectively measured functional activity outcomes may underestimate the impact of both motor and non-motor symptoms. PMID:24880056
Ahmadnezhad, Mahsa; Arefhosseini, Seyed Rafie; Parizadeh, Mohammad Reza; Tavallaie, Shima; Tayefi, Maryam; Darroudi, Susan; Ghazizadeh, Hamideh; Moohebati, Mohsen; Ebrahimi, Mahmoud; Heidari-Bakavoli, Alireza; Azarpajouh, Mahmoud Reza; Ferns, Gordon A; Mogharebzadeh, Vahid; Ghayour-Mobarhan, Majid
2018-05-01
There is persuasive evidence that oxidative stress and inflammation are features of the metabolic syndrome (MetS). We have investigated the relationship between serum pro-oxidant-antioxidant balance (PAB), serum uric acid, and high sensitive C-reactive protein (hs-CRP) in 7,208 participants from the MASHAD study cohort, who were categorized as having MetS, or not, using International Diabetes Foundation (IDF) criteria. Serum hs-CRP was measured by Polyethylene glycol (PEG)-enhanced immunoturbidimetry method using an Alycon analyzer (ABBOTT, Chicago, IL, USA). A colorimetric method was used to determine serum PAB. Serum PAB values were significantly higher in the individuals with MetS compared to those without (P < 0.001). Furthermore, there was a step-wise increase in mean serum PAB concentrations as the number of components of the MetS increased. The combination of features of MetS had different association with serum PAB and hs-CRP. Multiple linear regression analysis showed that body mass index (BMI, B = 2.04, P < 0.001), physical activity level (PAL, B = 18.728, P = 0.001), serum uric acid (B = -1.545, P = 0.003), and serum C-reactive protein (B = 0.663, P < 0.001) were associated with serum PAB in individuals with MetS. Multiple logistic regression analysis showed that serum PAB (B = 0.002, P < 0.001, CI = 1.001-1.003), serum C-reactive protein (B = 0.007, P < 0.015, CI = 1.001-1.013), and serum uric acid (B = 0.207, P < 0.001, CI = 1.186-1.277) were all significantly associated with MetS. Serum PAB was strongly associated with serum uric acid and serum hs-CRP. Moreover, serum PAB as well as serum uric acid and serum hs-CRP were independently associated with MetS. Individual features of MetS were also associated with serum hs-CRP and PAB. © 2018 BioFactors, 44(3):263-271, 2018. © 2018 International Union of Biochemistry and Molecular Biology.
Daily physical activity in stable heart failure patients.
Dontje, Manon L; van der Wal, Martje H L; Stolk, Ronald P; Brügemann, Johan; Jaarsma, Tiny; Wijtvliet, Petra E P J; van der Schans, Cees P; de Greef, Mathieu H G
2014-01-01
Physical activity is the only nonpharmacological therapy that is proven to be effective in heart failure (HF) patients in reducing morbidity. To date, little is known about the levels of daily physical activity in HF patients and about related factors. The objectives of this study were to (a) describe performance-based daily physical activity in HF patients, (b) compare it with physical activity guidelines, and (c) identify related factors of daily physical activity. The daily physical activity of 68 HF patients was measured using an accelerometer (SenseWear) for 48 hours. Psychological characteristics (self-efficacy, motivation, and depression) were measured using questionnaires. To provide an indication of how to interpret the daily physical activity levels of the study sample, time spent on moderate- to vigorous-intensity physical activities was compared with the 30-minute activity guideline. The number of steps per day was compared with criteria for healthy adults, in the absence of HF-specific criteria. Linear regression analyses were used to identify related factors of daily physical activity. Forty-four percent of patients were active for less than 30 min/d, whereas 56% were active for more than 30 min/d. Fifty percent took fewer than 5000 steps per day, 35% took 5000 to 10 000 steps per day, and 15% took more than 10 000 steps per day. Linear regression models showed that New York Heart Association classification and self-efficacy were the most important factors explaining variance in daily physical activity. The variance in daily physical activity in HF patients is considerable. Approximately half of the patients had a sedentary lifestyle. Higher New York Heart Association classification and lower self-efficacy are associated with less daily physical activity. These findings contribute to the understanding of daily physical activity behavior of HF patients and can help healthcare providers to promote daily physical activity in sedentary HF patients.
Iqbal, Asif; Kim, You-Sam; Kang, Jun-Mo; Lee, Yun-Mi; Rai, Rajani; Jung, Jong-Hyun; Oh, Dong-Yup; Nam, Ki-Chang; Lee, Hak-Kyo; Kim, Jong-Joo
2015-01-01
Meat and carcass quality attributes are of crucial importance influencing consumer preference and profitability in the pork industry. A set of 400 Berkshire pigs was collected from Dasan breeding farm, Namwon, Chonbuk province, Korea that were born between 2012 and 2013. To perform genome wide association studies (GWAS), eleven meat and carcass quality traits were considered, including carcass weight, backfat thickness, pH value after 24 hours (pH24), Commission Internationale de l’Eclairage lightness in meat color (CIE L), redness in meat color (CIE a), yellowness in meat color (CIE b), filtering, drip loss, heat loss, shear force and marbling score. All of the 400 animals were genotyped with the Porcine 62K SNP BeadChips (Illumina Inc., USA). A SAS general linear model procedure (SAS version 9.2) was used to pre-adjust the animal phenotypes before GWAS, with sire and sex effects as fixed effects and slaughter age as a covariate. After fitting the fixed and covariate factors in the model, the residuals of the phenotype were regressed on the additive effects of each single nucleotide polymorphism (SNP) under a linear regression model (PLINK version 1.07). The significant SNPs after permutation testing at a chromosome-wise level were subjected to stepwise regression analysis to determine the best set of SNP markers. A total of 55 significant (p<0.05) SNPs or quantitative trait loci (QTL) were detected on various chromosomes. The QTLs explained from 5.06% to 8.28% of the total phenotypic variation of the traits. Some QTLs with pleiotropic effect were also identified. A pair of significant QTL for pH24 was also found to affect both CIE L and drip loss percentage. The significant QTL, after characterization of the functional candidate genes on or around the QTL region, may be effectively and efficiently used in marker-assisted selection to achieve enhanced genetic improvement of the trait considered. PMID:26580276
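The two-step GWAS procedure described above, pre-adjusting the phenotype for fixed effects and a covariate, then regressing the residuals on each SNP's additive dosage, can be sketched as follows. The data are simulated; the causal SNP index and effect size are illustrative assumptions, not results from the Berkshire data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snps = 400, 50

# Illustrative stand-ins: sex as a fixed effect, slaughter age as a
# covariate, SNP genotypes coded additively (0/1/2 minor-allele copies).
sex = rng.integers(0, 2, n).astype(float)
age = rng.normal(180.0, 10.0, n)
genotypes = rng.integers(0, 3, (n, n_snps)).astype(float)

# One simulated causal SNP (index 7) with a true additive effect of 1.5.
phenotype = 0.8 * sex + 0.05 * age + 1.5 * genotypes[:, 7] + rng.normal(0, 1, n)

# Step 1: pre-adjust the phenotype (mirroring the SAS GLM step),
# keeping the residuals after removing fixed effects and the covariate.
C = np.column_stack([np.ones(n), sex, age])
residuals = phenotype - C @ np.linalg.lstsq(C, phenotype, rcond=None)[0]

# Step 2: regress the residuals on each SNP's additive dosage,
# one linear model per SNP (as in PLINK's association testing).
effects = np.empty(n_snps)
for j in range(n_snps):
    G = np.column_stack([np.ones(n), genotypes[:, j]])
    effects[j] = np.linalg.lstsq(G, residuals, rcond=None)[0][1]

top_snp = int(np.argmax(np.abs(effects)))  # recovers the simulated causal SNP
```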
Does the NBME Surgery Shelf exam constitute a "double jeopardy" of USMLE Step 1 performance?
Ryan, Michael S; Colbert-Getz, Jorie M; Glenn, Salem N; Browning, Joel D; Anand, Rahul J
2017-02-01
Scores from the NBME Subject Examination in Surgery (Surgery Shelf) positively correlate with United States Medical Licensing Examination Step 1 (Step 1). Based on this relationship, the authors evaluated the predictive value of Step 1 on the Surgery Shelf. Step 1 standard scores were substituted for Surgery Shelf standard scores for 395 students in 2012-2014 at one medical school. Linear regression was used to determine how well Step 1 scores predicted Surgery Shelf scores. The percent match between original (with Shelf) and modified (with Step 1) clerkship grades was computed. Step 1 scores significantly predicted Surgery Shelf scores, R² = 0.42, P < 0.001. For every point increase in Step 1, a Surgery Shelf score increased by 0.30 points. Seventy-seven percent of original grades matched the modified grades. Replacing Surgery Shelf scores with Step 1 scores did not have an effect on the majority of final clerkship grades. This observation raises concern over use of Surgery Shelf scores as a measure of knowledge obtained during the Surgery clerkship. Copyright © 2016 Elsevier Inc. All rights reserved.
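The reported relationship (a 0.30-point Shelf gain per Step 1 point, R² = 0.42) can be illustrated with a one-variable linear regression on synthetic scores. The score means and spreads below are assumptions, not the study data; only the slope is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 395

# Synthetic standard scores: the 0.30 slope mirrors the reported
# relationship; the means and spreads are illustrative assumptions.
step1 = rng.normal(230.0, 18.0, n)
shelf = 0.30 * step1 + rng.normal(0.0, 6.0, n)

slope, intercept = np.polyfit(step1, shelf, 1)
r_squared = np.corrcoef(step1, shelf)[0, 1] ** 2
```

With this noise level the recovered slope comes back near 0.30 and R² lands roughly in the 0.4-0.5 range, comparable to the reported value.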
Seerapu, Sunitha; Srinivasan, B. P.
2010-01-01
A simple, sensitive, precise and robust reverse-phase high-performance liquid chromatographic method for the analysis of ivabradine hydrochloride in pharmaceutical formulations was developed and validated as per ICH guidelines. The separation was performed on an SS Wakosil C18AR, 250×4.6 mm, 5 μm column with methanol:25 mM phosphate buffer (60:40 v/v), adjusted to pH 6.5 with orthophosphoric acid added dropwise, as the mobile phase. Ivabradine hydrochloride exhibited a well-defined chromatographic peak with a retention time of 6.55±0.05 min and a tailing factor of 1.14 at a flow rate of 0.8 ml/min and ambient temperature, when monitored at 285 nm. The linear regression analysis data for the calibration plots showed a good linear relationship (R=0.9998) in the concentration range of 30-210 μg/ml. The method was validated for precision, recovery and robustness. Intra- and inter-day precision (% relative standard deviation) were always less than 2%. The method showed mean recoveries of 99.00% and 98.55% for Ivabrad and Inapure tablets, respectively. The proposed method has been successfully applied to the commercial tablets without any interference from excipients. PMID:21695008
Context-Sensitive Ethics in School Psychology
ERIC Educational Resources Information Center
Lasser, Jon; Klose, Laurie McGarry; Robillard, Rachel
2013-01-01
Ethical codes and licensing rules provide foundational guidance for practicing school psychologists, but these sources fall short in their capacity to facilitate effective decision-making. When faced with ethical dilemmas, school psychologists can turn to decision-making models, but step-wise decision trees frequently lack the situation…
Use Over-the-Counter Medicines Wisely
... correctly. Simply put, this means that when you buy or use an OTC medicine, remember to: ● Respect that OTCs are serious medicines ... simple steps: ● Read the label—every time you buy or use a nonprescription medicine pay special attention to the ingredients, and directions ...
Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy
2017-06-26
We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of diffracted beams and a coherence matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has a smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have a sufficient speckle suppression range to suppress low-order speckles across the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linearly shifted 2D DOE in a laser projector with a large numerical aperture has higher effective speckle suppression efficiency than methods using switched or step-wise shifted DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.
NASA Astrophysics Data System (ADS)
Gao, Jing; You, Jiang; Huang, Zhihong; Cochran, Sandy; Corner, George
2012-03-01
Tissue-mimicking phantoms, including bovine serum albumin phantoms and egg white phantoms, have been developed for, and are in laboratory use for, real-time visualization of high intensity focused ultrasound-induced thermal coagulative necrosis since 2001. However, until now, very few data have been available concerning their thermophysical properties. In this article, a step-wise transient plane source method has been used to determine the thermal conductivity, thermal diffusivity, and specific heat capacity of egg white phantoms with egg white concentrations from 0 v/v% to 40 v/v%, in 10 v/v% intervals, at room temperature (~20 °C). The measured thermophysical properties were close to previously reported values; the thermal conductivity and thermal diffusivity were linearly proportional to the egg white concentration within the investigated range, while the specific heat capacity decreased as the egg white concentration increased. Taking into account the large differences between the real experiment and the ideal model, data variations within 20% were accepted.
NASA Technical Reports Server (NTRS)
Stefanescu, Doru M.; Juretzko, Frank R.; Dhindaw, Brij K.; Catalina, Adrian; Sen, Subhayu; Curreri, Peter A.
1998-01-01
Results of the directional solidification experiments on Particle Engulfment and Pushing by Solidifying Interfaces (PEP) conducted on the space shuttle Columbia during the Life and Microgravity Science Mission are reported. Two pure aluminum (99.999%) 9 mm cylindrical rods, loaded with about 2 vol.% of 500-micrometer-diameter zirconia particles, were melted and resolidified in the microgravity (microg) environment of the shuttle. One sample was processed at a step-wise increased solidification velocity, the other at a step-wise decreased velocity. It was found that a pushing-to-engulfment transition (PET) occurred in the velocity range of 0.5 to 1 micrometers/s. This is smaller than the ground-based PET velocity of 1.9 to 2.4 micrometers/s, demonstrating that natural convection increases the critical velocity. A previously proposed analytical model for PEP was further developed. A major effort was undertaken to identify and produce data for the surface energy of the various interfaces required for the calculation. The predicted critical velocity for PET was 0.775 micrometers/s.
Temporal dynamics and developmental memory of 3D chromatin architecture at Hox gene loci
Noordermeer, Daan; Leleu, Marion; Schorderet, Patrick; Joye, Elisabeth; Chabaud, Fabienne; Duboule, Denis
2014-01-01
Hox genes are essential regulators of embryonic development. Their step-wise transcriptional activation follows their genomic topology and the various states of activation are subsequently memorized into domains of progressively overlapping gene products. We have analyzed the 3D chromatin organization of Hox clusters during their early activation in vivo, using high-resolution circular chromosome conformation capture. Initially, Hox clusters are organized as single chromatin compartments containing all genes and bivalent chromatin marks. Transcriptional activation is associated with a dynamic bi-modal 3D organization, whereby the genes switch autonomously from an inactive to an active compartment. These local 3D dynamics occur within a framework of constitutive interactions within the surrounding Topologically Associating Domains, indicating that this regulation process is mostly cluster intrinsic. The step-wise progression in time is fixed at various body levels and thus can account for the chromatin architectures previously described at a later stage for different anterior to posterior levels. DOI: http://dx.doi.org/10.7554/eLife.02557.001 PMID:24843030
Integrated, Step-Wise, Mass-Isotopomeric Flux Analysis of the TCA Cycle.
Alves, Tiago C; Pongratz, Rebecca L; Zhao, Xiaojian; Yarborough, Orlando; Sereda, Sam; Shirihai, Orian; Cline, Gary W; Mason, Graeme; Kibbey, Richard G
2015-11-03
Mass isotopomer multi-ordinate spectral analysis (MIMOSA) is a step-wise flux analysis platform to measure discrete glycolytic and mitochondrial metabolic rates. Importantly, direct citrate synthesis rates were obtained by deconvolving the mass spectra generated from [U-(13)C6]-D-glucose labeling for position-specific enrichments of mitochondrial acetyl-CoA, oxaloacetate, and citrate. Comprehensive steady-state and dynamic analyses of key metabolic rates (pyruvate dehydrogenase, β-oxidation, pyruvate carboxylase, isocitrate dehydrogenase, and PEP/pyruvate cycling) were calculated from the position-specific transfer of (13)C from sequential precursors to their products. Important limitations of previous techniques were identified. In INS-1 cells, citrate synthase rates correlated with both insulin secretion and oxygen consumption. Pyruvate carboxylase rates were substantially lower than previously reported but showed the highest fold change in response to glucose stimulation. In conclusion, MIMOSA measures key metabolic rates from the precursor/product position-specific transfer of (13)C-label between metabolites and has broad applicability to any glucose-oxidizing cell. Copyright © 2015 Elsevier Inc. All rights reserved.
MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. As a result, there is large variability across these datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which, to our knowledge, is rarely done in the field. PMID:25822851
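The two-step procedure, identify the likely scanning site, then apply that site's estimator, can be sketched as below. To keep the sketch dependency-light, ridge regression stands in for the paper's kernel SVR and nearest-centroid matching stands in for the site identifier; all data are synthetic and the site offsets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic "sites" whose features carry different offsets,
# mimicking scanner/protocol effects.
def make_site(offset, n=80):
    x = rng.normal(0.0, 1.0, (n, 5)) + offset
    iq = 100.0 + 5.0 * x[:, 0] + rng.normal(0.0, 3.0, n)  # illustrative model
    return x, iq

(x1, y1), (x2, y2) = make_site(0.0), make_site(4.0)
centroids = np.stack([x1.mean(0), x2.mean(0)])

def ridge_fit(x, y, lam=1.0):
    """Ridge regression with intercept (stand-in for the per-site SVR)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

models = [ridge_fit(x1, y1), ridge_fit(x2, y2)]

def predict(x):
    # Step 1: identify the likely site by nearest centroid.
    site = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    # Step 2: apply that site's estimator.
    w = models[site]
    return w[0] + x @ w[1:]
```

A test point near a site's centroid is routed to that site's model, so each estimator only handles data resembling what it was trained on.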
Factors Associated With Surgery Clerkship Performance and Subsequent USMLE Step Scores.
Dong, Ting; Copeland, Annesley; Gangidine, Matthew; Schreiber-Gregory, Deanna; Ritter, E Matthew; Durning, Steven J
2018-03-12
We conducted an in-depth empirical investigation to achieve a better understanding of the surgery clerkship from multiple perspectives, including the influence of clerkship sequence on performance, the relationship between self-logged work hours and performance, as well as the association of surgery clerkship performance with subsequent USMLE Step exam scores. The study cohort consisted of medical students graduating between 2015 and 2018 (n = 687). The primary measures of interest were clerkship sequence (internal medicine clerkship before or after surgery clerkship), self-logged work hours during surgery clerkship, surgery NBME subject exam score, surgery clerkship overall grade, and Step 1, Step 2 CK, and Step 3 exam scores. We reported the descriptive statistics and conducted correlation analysis, stepwise linear regression analysis, and variable selection analysis of logistic regression to answer the research questions. Students who completed the internal medicine clerkship prior to the surgery clerkship had better performance on the surgery subject exam. The subject exam score explained an additional 28% of the variance of the Step 2 CK score, and the clerkship overall score accounted for an additional 24% of the variance after the MCAT scores and undergraduate GPA were controlled. Our finding suggests that the clerkship sequence does matter when it comes to performance on the surgery NBME subject exam. Performance on the surgery subject exam is predictive of performance on subsequent USMLE Step exams. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Bendix, J.; Hupp, C.R.
2000-01-01
Changes in the macroinvertebrate community in response to flow variations in the Little Stour River, Kent, UK, were examined over a 6 year period (1992-1997). This period included the final year of the 1988-1992 drought, followed by some of the wettest conditions recorded this century and a second period of drought between 1996 and 1997. Each year, samples were collected from 15 sites during late-summer base-flow conditions. Correspondence analysis identified clear differences between samples from upstream and downstream sites, and between drought and non-drought years. Step-wise multiple regression was used to identify hydrological indicators of community variation. Several different indices were used to describe the macroinvertebrate community, including macroinvertebrate community abundance, number of families and species, and individual species. Site characteristics were fundamental in accounting for variation in the unstandardized macroinvertebrate community. However, when differences between sites were controlled, hydrological conditions were found to play a dominant role in explaining ecological variation. Indices of high discharge (or their absence), 4-7 months prior to sampling (i.e. winter-spring), were found to be the most important variables for describing the late-summer community. The results are discussed in relation to the role of flow variability in shaping instream communities and management implications. Copyright © 2000 John Wiley & Sons, Ltd.
Catalog of Air Force Weather Technical Documents, 1941-2006
2006-05-19
radiosondes in current use in USA. Elementary discussion of statistical terms and concepts used for expressing accuracy or error is discussed. AWS TR 105...Techniques, Appendix B: Vorticity—An Elementary Discussion of the Concept, August 1956, 27pp. Formerly AWSM 105– 50/1A. Provides the necessary back...steps involved in ordinary multiple linear regression. Conditional probability is calculated using transnormalized variables in the multivariate normal
Design and operation of a continuous integrated monoclonal antibody production process.
Steinebach, Fabian; Ulmer, Nicole; Wolf, Moritz; Decker, Lara; Schneider, Veronika; Wälchli, Ruben; Karst, Daniel; Souquet, Jonathan; Morbidelli, Massimo
2017-09-01
The realization of an end-to-end integrated continuous lab-scale process for monoclonal antibody manufacturing is described. For this, a continuous cultivation with filter-based cell-retention, a continuous two column capture process, a virus inactivation step, a semi-continuous polishing step (twin-column MCSGP), and a batch-wise flow-through polishing step were integrated and operated together. In each unit, the implementation of internal recycle loops improves performance: (a) in the bioreactor, by simultaneously increasing the cell density and volumetric productivity, (b) in the capture process, by achieving improved capacity utilization at high productivity and yield, and (c) in the MCSGP process, by overcoming the purity-yield trade-off of classical batch-wise bind-elute polishing steps. Furthermore, the design principles, which allow the direct connection of these steps, some at steady state and some at cyclic steady state, as well as straight-through processing, are discussed. The setup was operated for the continuous production of a commercial monoclonal antibody, resulting in stable operation and uniform product quality over the 17 cycles of the end-to-end integration. The steady-state operation was fully characterized by analyzing, at the outlet of each unit, the product titer as well as the process-related (HCP, DNA, leached Protein A) and product-related (aggregates, fragments) impurities. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1303-1313, 2017. © 2017 American Institute of Chemical Engineers.
Reulen, Holger; Kneib, Thomas
2016-04-01
One important goal in multi-state modelling is to explore information about conditional transition-type-specific hazard rate functions by estimating the effects of explanatory variables. This may be performed using single transition-type-specific models if these covariate effects are assumed to be different across transition-types. To investigate whether this assumption holds or whether one of the effects is equal across several transition-types (a cross-transition-type effect), a combined model has to be applied, for instance with the use of a stratified partial likelihood formulation. Here, prior knowledge about the underlying covariate effect mechanisms is often sparse, especially about the ineffectiveness of transition-type-specific or cross-transition-type effects. As a consequence, data-driven variable selection is an important task: a large number of estimable effects have to be taken into account if joint modelling of all transition-types is performed. A related but subsequent task is model choice: is an effect satisfactorily estimated assuming linearity, or does the true underlying relationship deviate strongly from linearity? This article introduces component-wise Functional Gradient Descent Boosting (boosting for short) for multi-state models, an approach performing unsupervised variable selection and model choice simultaneously within a single estimation run. We demonstrate that the features and advantages of boosting, introduced and illustrated in classical regression scenarios, remain present in the transfer to multi-state models. As a consequence, boosting provides an effective means to answer questions about ineffectiveness and non-linearity of single transition-type-specific or cross-transition-type effects.
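The core of component-wise boosting can be sketched numerically with the simplest case, component-wise L2 boosting for a linear model: at each iteration every single-covariate base learner is fit to the current residuals and only the best-fitting one is updated, so covariates that are never selected are effectively dropped. The data, step length, and selection threshold below are illustrative assumptions, not the article's multi-state setting.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 10

# Only covariates 0 and 3 are truly effective; the rest are noise.
X = rng.normal(0.0, 1.0, (n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0.0, 0.5, n)

coef = np.zeros(p)
nu = 0.1  # step length (shrinkage)
for _ in range(200):
    resid = y - X @ coef
    # Fit each single-covariate least-squares base learner to the residuals.
    slopes = (X * resid[:, None]).sum(0) / (X ** 2).sum(0)
    losses = ((resid[:, None] - X * slopes) ** 2).sum(0)
    j = int(np.argmin(losses))        # best-fitting component only...
    coef[j] += nu * slopes[j]         # ...receives a small update

# Covariates never (or barely) updated are deselected.
selected = {int(j) for j in np.flatnonzero(np.abs(coef) > 0.1)}
```

Because updates accumulate only on repeatedly selected components, the final coefficient vector is sparse: here the two effective covariates are recovered and the noise covariates stay near zero.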
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu’s multiple comparisons with the best (MCB) – adapted from Dunnett’s multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
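The flavor of multiple comparisons with the best can be sketched as follows: each modality is compared against the best of the others, and any modality whose one-sided upper confidence bound for that difference falls below zero can be ruled out as best. The data are synthetic, and the fixed critical value of 2.0 is a rough stand-in for Hsu's exact simultaneous quantile; this is not the UNOS analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic outcome scores for four treatment modalities
# (illustrative means, higher is better; not the UNOS data).
true_means = [0.55, 0.60, 0.72, 0.58]
groups = [rng.normal(m, 0.1, 200) for m in true_means]

def se_diff(a, b):
    """Standard error of the difference between two group means."""
    return np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# MCB-style screen: upper bound for (mean_i - best other mean); a fixed
# critical value of 2.0 stands in for Hsu's simultaneous quantile.
crit = 2.0
cannot_be_best = []
for i, g in enumerate(groups):
    j = max((k for k in range(len(groups)) if k != i),
            key=lambda k: groups[k].mean())
    upper = g.mean() - groups[j].mean() + crit * se_diff(g, groups[j])
    if upper < 0:
        cannot_be_best.append(i)
```

Here modality 2 is the only one not excluded, so it survives as the candidate best, mirroring how MCB's simultaneous intervals single out the winning treatment.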
Development of a food frequency questionnaire for Sri Lankan adults
2012-01-01
Background Food Frequency Questionnaires (FFQs) are commonly used in epidemiologic studies to assess long-term nutritional exposure. Because of wide variations in dietary habits in different countries, an FFQ must be developed to suit the specific population. Sri Lanka is undergoing nutritional transition and diet-related chronic diseases are emerging as an important health problem. Currently, no FFQ has been developed for Sri Lankan adults. In this study, we developed an FFQ to assess the regular dietary intake of Sri Lankan adults. Methods A nationally representative sample of 600 adults was selected by a multi-stage random cluster sampling technique and dietary intake was assessed by random 24-h dietary recall. Nutrient analysis of the FFQ required the selection of foods, development of recipes and application of these to cooked foods to develop a nutrient database. We constructed a comprehensive food list with the units of measurement. A stepwise regression method was used to identify foods contributing to a cumulative 90% of variance to total energy and macronutrients. In addition, a series of photographs were included. Results We obtained dietary data from 482 participants and 312 different food items were recorded. Nutritionists grouped similar food items which resulted in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected. Conclusion We developed an FFQ and the related nutrient composition database for Sri Lankan adults. Culturally specific dietary tools are central to capturing the role of diet in risk for chronic disease in Sri Lanka. The next step will involve the verification of FFQ reproducibility and validity. PMID:22937734
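The step-wise selection used here, greedily adding the food that most increases explained variance until a cumulative R² of 0.90 for total energy is reached, can be sketched as follows. The intake matrix and energy weights are synthetic illustrations, not the survey's 24-h recall data.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_foods = 480, 20

# Synthetic intake matrix: rows are participants, columns are foods.
intake = rng.gamma(2.0, 50.0, (n, n_foods))
# Total energy driven mostly by the first few foods (illustrative weights).
weights = np.array([4.0, 3.0, 2.5, 2.0, 1.5] + [0.1] * 15)
energy = intake @ weights + rng.normal(0.0, 50.0, n)

def r_squared(cols, y):
    """R^2 of an OLS fit of y on the chosen food columns (plus intercept)."""
    X = np.column_stack([np.ones(len(y)), intake[:, cols]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Forward step-wise selection: keep adding the food that most increases
# R^2 until the selected set explains 90% of the variance in energy.
selected: list[int] = []
while r_squared(selected, energy) < 0.90:
    remaining = [j for j in range(n_foods) if j not in selected]
    selected.append(max(remaining,
                        key=lambda j: r_squared(selected + [j], energy)))
```

With the weights above, a handful of dominant foods crosses the 90% threshold, which is why the published food list can be far shorter than the 312 items originally recorded.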
Synthesis of water-soluble mono- and ditopic imidazoliums for carbene ligands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anstey, Mitchell; Murtagh, Dustin; Cordaro, Joseph Gabriel
2015-09-01
Synthesis of ditopic imidazoliums was achieved using a modular step-wise procedure. The procedure itself is amenable to a wide array of functional groups that can be incorporated into the imidazolium architecture. The resulting compounds range from ditopic zwitterions to highly soluble dicationic aromatics.
The Body Composition of a College Football Team.
ERIC Educational Resources Information Center
Wickkiser, John D.; Kelly, John M.
This study focuses on the body composition and anthropometric measurements of 65 college football players. Body composition was determined by underwater weighing with an accurate assessment of residual volume. The anthropometric measurements included height, weight, seven skinfolds, waist circumference, and wrist diameter. A step-wise multiple…
Cortical thickness differences between bipolar depression and major depressive disorder.
Lan, Martin J; Chhetry, Binod Thapa; Oquendo, Maria A; Sublette, M Elizabeth; Sullivan, Gregory; Mann, J John; Parsey, Ramin V
2014-06-01
Bipolar disorder (BD) is a psychiatric disorder with high morbidity and mortality that cannot be distinguished from major depressive disorder (MDD) until the first manic episode. A biomarker able to differentiate BD and MDD could help clinicians avoid risks of treating BD with antidepressants without mood stabilizers. Cortical thickness differences were assessed using magnetic resonance imaging in BD depressed patients (n = 18), MDD depressed patients (n = 56), and healthy volunteers (HVs) (n = 54). A general linear model identified clusters of cortical thickness difference between diagnostic groups. Compared to the HV group, the BD group had decreased cortical thickness in six regions, after controlling for age and sex, located within the frontal and parietal lobes, and the posterior cingulate cortex. Mean cortical thickness changes in clusters ranged from 7.6 to 9.6% (cluster-wise p-values from 1.0e-4 to 0.037). When compared to MDD, three clusters of lower cortical thickness in BD were identified that overlapped with clusters that differentiated the BD and HV groups. Mean cortical thickness changes in the clusters ranged from 7.5 to 8.2% (cluster-wise p-values from 1.0e-4 to 0.023). The difference in cortical thickness was more pronounced when the subgroup of subjects with bipolar I disorder (BD-I) was compared to the MDD group. Cortical thickness patterns were distinct between BD and MDD. These results are a step toward developing an imaging test to differentiate the two disorders. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T.C.
2010-01-01
Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different timepoints. Different from traditional scalar or multi-channel image registration methods, tensor orientation should be considered in DTI registration. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on tensor features, not on the whole tensor information. Other methods, such as the piece-wise affine transformation and the diffeomorphic non-linear registration algorithms, use analytical gradients of the registration objective functions by simultaneously considering the reorientation and deformation of tensors during the registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation, and such neighborhood tensor information is extracted by a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real human brain DTI data, the experimental results show that the proposed algorithm is more accurate compared with the FA-based registration and is more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233
"What is relevant in a text document?": An interpretable machine learning approach
Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert
2017-01-01
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate text collections far larger than a human could process in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619
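The core LRP idea can be illustrated in its simplest possible setting. For a deep network, LRP redistributes the output score backwards layer by layer; for a plain linear bag-of-words model it reduces to giving each word the share x_i·w_i of the score, stabilized by the epsilon-rule. The sketch below is not the paper's trained SVM or CNN; the word counts and weights are invented for illustration:

```python
def lrp_linear(x, w, b, eps=1e-9):
    """Epsilon-rule LRP for a single linear layer: redistribute the output
    score f(x) = w.x + b onto the inputs in proportion to x_i * w_i."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    denom = z + (eps if z >= 0 else -eps)   # stabilizer against small z
    return [xi * wi * z / denom for xi, wi in zip(x, w)]

# toy bag-of-words counts for ["excellent", "boring", "the"];
# the weights are invented, not taken from any trained classifier
x = [2.0, 1.0, 5.0]
w = [1.5, -2.0, 0.0]
b = 0.1
relevances = lrp_linear(x, w, b)
```

Up to the stabilizer, the relevances sum to the score minus the bias share, so words with large positive (or negative) relevance are the ones that pushed the decision toward (or away from) the predicted category.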
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhowmik, R. N., E-mail: rnbhowmik.phy@pondiuni.edu.in; Vijayasri, G.
2015-06-15
We have studied the current-voltage (I-V) characteristics of α-Fe{sub 1.64}Ga{sub 0.36}O{sub 3}, a typical canted ferromagnetic semiconductor. The sample showed a transformation of the I-V curves from linear to non-linear character with increasing bias voltage. The I-V curves showed irreversible features, with a hysteresis loop and bi-stable electronic states for the up and down modes of the voltage sweep. We report positive magnetoresistance and magnetic field induced negative differential resistance, phenomena observed for the first time in a metal-doped hematite system. The magnitudes of the critical voltage at which the I-V curve showed a peak, and of the corresponding peak current, are affected by magnetic field cycling. The shift of the peak voltage with magnetic field showed a step-wise jump between two discrete voltage levels with a smallest gap (ΔV{sub P}) of 0.345(±0.001) V. The magnetic spin dependent electronic charge transport in this new class of magnetic semiconductor opens a wide scope for tuning large electroresistance (∼500-700%), magnetoresistance (70-135%) and charge-spin dependent conductivity under suitable control of electric and magnetic fields. The electric and magnetic field controlled charge-spin transport is interesting for applications of magnetic materials in spintronics, e.g., magnetic sensors, memory devices and digital switching.
Probabilistic Forecasting of Surface Ozone with a Novel Statistical Approach
NASA Technical Reports Server (NTRS)
Balashov, Nikolay V.; Thompson, Anne M.; Young, George S.
2017-01-01
The recent change in the Environmental Protection Agency's surface ozone regulation, lowering the surface ozone daily maximum 8-h average (MDA8) exceedance threshold from 75 to 70 ppbv, poses significant challenges to U.S. air quality (AQ) forecasters responsible for ozone MDA8 forecasts. The forecasters, supplied by only a few AQ model products, end up relying heavily on self-developed tools. To help U.S. AQ forecasters, this study explores a surface ozone MDA8 forecasting tool that is based solely on statistical methods and standard meteorological variables from the numerical weather prediction (NWP) models. The model combines the self-organizing map (SOM), which is a clustering technique, with a step-wise weighted quadratic regression using meteorological variables as predictors for ozone MDA8. The SOM method identifies different weather regimes, to distinguish between various modes of ozone variability, and groups them according to similarity. In this way, when a regression is developed for a specific regime, data from the other regimes are also used, with weights that are based on their similarity to this specific regime. This approach, regression in SOM (REGiS), yields a distinct model for each regime taking into account both the training cases for that regime and other similar training cases. To produce probabilistic MDA8 ozone forecasts, REGiS weighs and combines all of the developed regression models on the basis of the weather patterns predicted by an NWP model. REGiS is evaluated over the San Joaquin Valley in California and the northeastern plains of Colorado. The results suggest that the model performs best when trained and adjusted separately for an individual AQ station and its corresponding meteorological site.
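The similarity-weighted fitting at the heart of this scheme can be sketched in a few lines. REGiS uses step-wise weighted quadratic regression; the toy below simplifies to a single linear predictor, with weights decaying exponentially in a hypothetical SOM regime distance (all data and the exp(-d) weighting are invented for illustration, not the study's actual configuration):

```python
import math

def weighted_linfit(x, y, w):
    """Weighted least-squares fit of y ~ a + b*x with per-sample weights w."""
    W = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / W
    ym = sum(wi * yi for wi, yi in zip(w, y)) / W
    num = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x))
    b = num / den
    a = ym - b * xm
    return a, b

# hypothetical SOM distance of each training day to the target regime;
# nearer regimes get exponentially larger weight
regime_distance = [0.0, 0.5, 1.0, 1.5, 2.0]
weights = [math.exp(-d) for d in regime_distance]
temperature = [0, 1, 2, 3, 4]      # toy meteorological predictor
ozone_mda8 = [1, 3, 5, 7, 9]       # toy response, exactly 1 + 2*x
a, b = weighted_linfit(temperature, ozone_mda8, weights)
```

Because every regime's data enters with nonzero weight, each regime-specific model borrows strength from similar regimes, which is the stated rationale for the SOM-based weighting.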
THE FIRST HUNDRED BROWN DWARFS DISCOVERED BY THE WIDE-FIELD INFRARED SURVEY EXPLORER (WISE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davy Kirkpatrick, J.; Gelino, Christopher R.; Griffith, Roger L.
2011-12-01
We present ground-based spectroscopic verification of 6 Y dwarfs (see also Cushing et al.), 89 T dwarfs, 8 L dwarfs, and 1 M dwarf identified by the Wide-field Infrared Survey Explorer (WISE). Eighty of these are cold brown dwarfs with spectral types {>=}T6, six of which have been announced earlier by Mainzer et al. and Burgasser et al. We present color-color and color-type diagrams showing the locus of M, L, T, and Y dwarfs in WISE color space. Near-infrared and, in a few cases, optical spectra are presented for these discoveries. Near-infrared classifications as late as early Y are presented and objects with peculiar spectra are discussed. Using these new discoveries, we are also able to extend the optical T dwarf classification scheme from T8 to T9. After deriving an absolute WISE 4.6 {mu}m (W2) magnitude versus spectral type relation, we estimate spectrophotometric distances to our discoveries. We also use available astrometric measurements to provide preliminary trigonometric parallaxes to four of our discoveries, which have types of L9 pec (red), T8, T9, and Y0; all of these lie within 10 pc of the Sun. The Y0 dwarf, WISE 1541-2250, is the closest at 2.8{sup +1.3}{sub -0.6} pc; if this 2.8 pc value persists after continued monitoring, WISE 1541-2250 will become the seventh closest stellar system to the Sun. Another 10 objects, with types between T6 and >Y0, have spectrophotometric distance estimates also placing them within 10 pc. The closest of these, the T6 dwarf WISE 1506+7027, is believed to fall at a distance of {approx}4.9 pc. WISE multi-epoch positions supplemented with positional information primarily from the Spitzer/Infrared Array Camera allow us to calculate proper motions and tangential velocities for roughly one-half of the new discoveries. This work represents the first step by WISE to complete a full-sky, volume-limited census of late-T and Y dwarfs. Using early results from this census, we present preliminary lower limits to the space density of these objects and discuss constraints on both the functional form of the mass function and the low-mass limit of star formation.
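Spectrophotometric distance estimates of this kind follow from the standard distance modulus, m - M = 5 log10(d / 10 pc), with the absolute magnitude M supplied by the derived W2-versus-spectral-type relation. A minimal sketch (the magnitudes below are illustrative placeholders, not actual WISE measurements):

```python
def spectrophotometric_distance_pc(m_apparent, M_absolute):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d/10pc)."""
    return 10.0 ** ((m_apparent - M_absolute + 5.0) / 5.0)

# illustrative only: an object whose apparent W2 magnitude exceeds its
# relation-predicted absolute magnitude by 2 mag lies at ~25 pc
d = spectrophotometric_distance_pc(14.5, 12.5)
```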
Kidger, Judi; Stone, Tracey; Tilling, Kate; Brockman, Rowan; Campbell, Rona; Ford, Tamsin; Hollingworth, William; King, Michael; Araya, Ricardo; Gunnell, David
2016-10-06
Secondary school teachers are at heightened risk of psychological distress, which can lead to poor work performance, poor quality teacher-student relationships and mental illness. A pilot cluster randomised controlled trial (RCT) - the WISE study - evaluated the feasibility of a full-scale RCT of an intervention to support school staff's own mental health, and train them in supporting student mental health. Six schools were randomised to an intervention or control group. In the intervention schools i) 8-9 staff received Mental Health First Aid (MHFA) training and became staff peer supporters, and ii) youth MHFA training was offered to the wider staff body. Control schools continued with usual practice. We used thematic qualitative data analysis and regression modelling to ascertain the feasibility, acceptability and potential usefulness of the intervention. Thirteen training observations, 14 staff focus groups and 6 staff interviews were completed, and 438 staff (43.5%) and 1,862 students in years 8 and 9 (56.3%) completed questionnaires at baseline and one year later. MHFA training was considered relevant for schools, and trainees gained in knowledge, confidence in helping others, and awareness regarding their own mental health. Suggestions for reducing the length of the training and focusing on helping strategies were made. A peer support service was established in all intervention schools and was perceived to be helpful in supporting individuals in difficulty - for example through listening, and signposting to other services - and raising the profile of mental health at a whole school level. Barriers to use included lack of knowledge about the service, concerns about confidentiality and a preference for accessing support from pre-existing networks. The WISE intervention is feasible and acceptable to schools. 
Results support the development of a full-scale cluster RCT, if steps are taken to improve response rates and implement the suggested improvements to the intervention. International Standard Randomised Controlled Trial Number: ISRCTN13255300 retrospectively registered 28/09/16.
ERIC Educational Resources Information Center
Fong, Kristen E.; Melguizo, Tatiana; Prather, George
2015-01-01
This study tracks students' progression through developmental math sequences and defines progression as both attempting and passing each level of the sequence. A model of successful progression in developmental education was built utilizing individual-, institutional-, and developmental math-level factors. Employing step-wise logistic regression…
Working with Evaluation Stakeholders: A Rationale, Step-Wise Approach and Toolkit
ERIC Educational Resources Information Center
Bryson, John M.; Patton, Michael Quinn; Bowman, Ruth A.
2011-01-01
In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of…
Harvard Education Letter. Volume 22, Number 1, January-February 2006
ERIC Educational Resources Information Center
Chauncey, Caroline, Ed.
2006-01-01
"Harvard Education Letter" is published bimonthly at the Harvard Graduate School of Education. This issue of "Harvard Education Letter" contains the following articles: (1) The "Data Wise" Improvement Process: Eight Steps for Using Test Data to Improve Teaching and Learning (Kathryn Parker Boudett, Elizabeth A. City,…
Ryan, Michael S; Bishop, Steven; Browning, Joel; Anand, Rahul J; Waterhouse, Elizabeth; Rigby, Fidelma; Al-Mateen, Cheryl S; Lee, Clifton; Bradner, Melissa; Colbert-Getz, Jorie M
2017-06-01
The National Board of Medical Examiners' Clinical Science Subject Examinations are a component used by most U.S. medical schools to determine clerkship grades. The purpose of this study was to examine the validity of this practice. This was a retrospective cohort study of medical students at the Virginia Commonwealth University School of Medicine who completed clerkships in 2012 through 2014. Linear regression was used to determine how well United States Medical Licensing Examination Step 1 scores predicted Subject Examination scores in seven clerkships. The authors then substituted each student's Subject Examination standard scores with his or her Step 1 standard score. Clerkship grades based on the Step 1 substitution were compared with actual grades with the Wilcoxon rank test. A total of 2,777 Subject Examination scores from 432 students were included in the analysis. Step 1 scores significantly predicted between 23% and 44% of the variance in Subject Examination scores, P < .001 for all clerkship regression equations. Mean differences between expected and actual Subject Examination scores were small (≤ 0.2 points). There was a match between 73% of Step 1 substituted final clerkship grades and actual final clerkship grades. The results of this study suggest that performance on Step 1 can be used to identify and counsel students at risk for poor performance on the Subject Examinations. In addition, these findings call into question the validity of using scores from Subject Examinations as a high-stakes assessment of learning in individual clerkships.
Deckersbach, Thilo; Peters, Amy T.; Sylvia, Louisa G.; Gold, Alexandra K.; da Silva Magalhaes, Pedro Vieira; Henry, David B.; Frank, Ellen; Otto, Michael W.; Berk, Michael; Dougherty, Darin D.; Nierenberg, Andrew A.; Miklowitz, David J.
2016-01-01
Background: We sought to address how predictors and moderators of psychotherapy for bipolar depression – identified individually in prior analyses – can inform the development of a metric for prospectively classifying treatment outcome in intensive psychotherapy (IP) versus collaborative care (CC) adjunctive to pharmacotherapy in the Systematic Treatment Enhancement Program (STEP-BD) study. Methods: We conducted post-hoc analyses on 135 STEP-BD participants using cluster analysis to identify subsets of participants with similar clinical profiles and investigated this combined metric as a moderator and predictor of response to IP. We used agglomerative hierarchical cluster analyses and k-means clustering to determine the content of the clinical profiles. Logistic regression and Cox proportional hazard models were used to evaluate whether the resulting clusters predicted or moderated likelihood of recovery or time until recovery. Results: The cluster analysis yielded a two-cluster solution: 1) “less-recurrent/severe” and 2) “chronic/recurrent.” Rates of recovery in IP were similar for less-recurrent/severe and chronic/recurrent participants. Less-recurrent/severe patients were more likely than chronic/recurrent patients to achieve recovery in CC (p = .040, OR = 4.56). IP yielded a faster recovery for chronic/recurrent participants, whereas CC led to recovery sooner in the less-recurrent/severe cluster (p = .034, OR = 2.62). Limitations: Cluster analyses require list-wise deletion of cases with missing data, so we were unable to conduct analyses on all STEP-BD participants. Conclusions: A well-powered, parametric approach can distinguish patients based on illness history and provide clinicians with symptom profiles of patients that confer differential prognosis in CC vs. IP. PMID:27289316
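The k-means step of such a cluster analysis can be sketched compactly: alternate between assigning each case to its nearest center and moving each center to its cluster mean. The 2-D points below are invented for illustration, not STEP-BD clinical profiles:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples: alternate assignment and centroid updates."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the center with smallest squared distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, 2)
```

In the study this step is preceded by agglomerative hierarchical clustering, which is commonly used to choose the number of clusters before running k-means.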
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research
Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false-positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of both false-positive and false-negative images across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444
ERIC Educational Resources Information Center
Hofmans, Joeri; De Gieter, Sara; Pepermans, Roland
2013-01-01
Although previous research often showed a positive relationship between pay satisfaction and job satisfaction, we dispute the universality of this finding. Cluster-wise regression analyses on three samples consistently show that two types of individuals can be distinguished, each with a different job reward-job satisfaction relationship. For the…
Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A
2016-01-01
Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets, on a 6 m walkway, placed to elicit step lengthening, shortening and narrowing on paretic and non-paretic sides. The number of targets missed during six walks and target stepping speed were recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple linear regression was used to model the relationships between each clinical measure and: total targets missed, number missed with paretic and non-paretic legs, and target stepping speed. Regression revealed a significant model for each outcome variable that included only one independent variable. Targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS, and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.
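Stepwise multiple linear regression of the kind used here can be sketched as greedy forward selection: repeatedly add the candidate predictor that most improves R², and stop when no candidate clears an entry threshold. A minimal sketch in pure Python (the toy data and the 0.01 R² gain threshold are illustrative, not the study's actual entry criterion, which is typically an F-to-enter or p-value test):

```python
def ols_fit(X, y):
    """Ordinary least squares via normal equations; rows of X include an
    intercept column. Solved by Gaussian elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    v = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        v[c], v[piv] = v[piv], v[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for cc in range(c, p):
                A[r][cc] -= f * A[c][cc]
            v[r] -= f * v[c]
    beta = [0.0] * p
    for c in range(p - 1, -1, -1):
        beta[c] = (v[c] - sum(A[c][cc] * beta[cc] for cc in range(c + 1, p))) / A[c][c]
    return beta

def r_squared(X, y, beta):
    """Coefficient of determination of the fitted model."""
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ym = sum(y) / len(y)
    ss_res = sum((yi - hi) ** 2 for yi, hi in zip(y, yhat))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def forward_stepwise(predictors, y, min_gain=0.01):
    """Greedily add the predictor giving the largest R^2 improvement."""
    chosen, r2 = [], 0.0
    remaining = list(range(len(predictors)))
    while remaining:
        best, best_r2 = None, r2
        for j in remaining:
            cols = chosen + [j]
            X = [[1.0] + [predictors[c][i] for c in cols] for i in range(len(y))]
            r2_j = r_squared(X, y, ols_fit(X, y))
            if r2_j > best_r2 + min_gain:
                best, best_r2 = j, r2_j
        if best is None:
            break
        chosen.append(best)
        remaining.remove(best)
        r2 = best_r2
    return chosen, r2

# toy data: the outcome depends only on predictor 0; predictor 1 is noise
predictors = [[1, 2, 3, 4, 5, 6, 7, 8],
              [0.3, -0.1, 0.2, 0.0, -0.2, 0.1, 0.05, -0.15]]
y = [2 * v + 1 for v in predictors[0]]
chosen, r2 = forward_stepwise(predictors, y)
```

The selection stops with only the informative predictor in the model, mirroring the study's finding that each regression model retained a single independent variable.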
Cognition, glucose metabolism and amyloid burden in Alzheimer’s disease
Furst, Ansgar J.; Rabinovici, Gil D.; Rostomian, Ara H.; Steed, Tyler; Alkalay, Adi; Racine, Caroline; Miller, Bruce L.; Jagust, William J.
2010-01-01
We investigated relationships between glucose metabolism, amyloid load and measures of cognitive and functional impairment in Alzheimer’s disease (AD). Patients meeting criteria for probable AD underwent [11C]PIB and [18F]FDG PET imaging and were assessed on a set of clinical measures. PIB distribution volume ratios and FDG scans were spatially normalized, and average PIB counts from regions of interest (ROIs) were used to compute a measure of global PIB uptake. Separate voxel-wise regressions explored local and global relationships between metabolism, amyloid burden and clinical measures. Regressions reflected cognitive domains assessed by individual measures, with visuospatial tests associated with more posterior metabolism, and language tests associated with metabolism in the left hemisphere. Correlating regional FDG uptake with these measures confirmed these findings. In contrast, no correlations were found between either voxel-wise or regional PIB uptake and any of the clinical measures. Finally, there were no associations between regional PIB and FDG uptake. We conclude that regional and global amyloid burden does not correlate with clinical status or glucose metabolism in AD. PMID:20417582
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
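The method of least squares described above has a closed form in the simple case: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means. A minimal sketch (the dose/response numbers are an invented toy dataset in the spirit of the article's simplified clinical examples):

```python
def simple_linear_regression(x, y):
    """Least-squares estimates for y ~ a + b*x:
    b = cov(x, y) / var(x),  a = mean(y) - b * mean(x)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    a = ym - b * xm
    return a, b

# toy example: four observations lying exactly on y = 1 + 2x
a, b = simple_linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```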
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were then processed by standard and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as volume of distribution (Vd) and binding potential (BPI & BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
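The fuzzy C-means algorithm alternates two updates: soft memberships u_ij = 1 / Σ_k (d_ij/d_ik)^(2/(m-1)), then centers as means weighted by u^m. A one-dimensional toy sketch (invented data, not SPECT time-activity curves; m is the usual fuzzifier):

```python
import random

def fcm_1d(xs, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-means on scalars: returns cluster centers and the
    membership matrix U (rows sum to 1)."""
    rng = random.Random(seed)
    centers = rng.sample(xs, c)
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]  # guard against d = 0
            U.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
                      for j in range(c)])
        # center update: mean of the data weighted by u^m
        centers = [sum(U[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(U[i][j] ** m for i in range(len(xs))) for j in range(c)]
    return centers, U

xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # two well-separated clumps
centers, U = fcm_1d(xs)
```

Unlike hard k-means, every point keeps a graded membership in every cluster, which is what makes the subsequent cluster-guided parameter estimation robust to noisy voxels.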
Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes
Wang, Yue; Goh, Wilson; Wong, Limsoon; Montana, Giovanni
2013-01-01
Motivation: Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forest (RF) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. Results: We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. Availability: The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana. PMID:24564704
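PaRFR itself is a Java MapReduce implementation on Hadoop; as a much-reduced illustration of the underlying random-forest regression idea, the toy sketch below averages depth-one regression trees (stumps) fit to bootstrap resamples of one-dimensional data. It does not implement PaRFR's distance-based splitting criterion or the distributed execution:

```python
import random

def fit_stump(xs, ys):
    """Best single-split regression stump on 1-D data: pick the threshold
    that minimizes the summed squared error of the two leaf means."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = None
    for k in range(1, len(xs)):
        t = (xs[order[k - 1]] + xs[order[k]]) / 2.0
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1:]          # (threshold, left mean, right mean)

def fit_forest(xs, ys, n_trees=50, seed=1):
    """Fit each stump to a bootstrap resample of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return stumps

def forest_predict(x, stumps):
    """Average the individual stump predictions."""
    return sum((ml if x < t else mr) for t, ml, mr in stumps) / len(stumps)

# toy step-shaped signal: low response for small x, high for large x
xs = [0, 1, 2, 3, 10, 11, 12, 13]
ys = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
stumps = fit_forest(xs, ys)
```

Because each tree sees a different bootstrap sample, the ensemble average is less variable than any single tree, which is the property PaRFR scales up to hundreds of thousands of SNPs.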
Rapid micro-scale proteolysis of proteins for MALDI-MS peptide mapping using immobilized trypsin
NASA Astrophysics Data System (ADS)
Gobom, Johan; Nordhoff, Eckhard; Ekman, Rolf; Roepstorff, Peter
1997-12-01
In this study we present a rapid method for tryptic digestion of proteins using micro-columns with enzyme immobilized on perfusion chromatography media. The performance of the method is exemplified with acyl-CoA-binding protein and reduced carbamidomethylated bovine serum albumin. The method proved to be significantly faster and yielded a better sequence coverage and an improved signal-to-noise ratio for the MALDI-MS peptide maps, compared to in-solution- and on-target digestion. Only a single sample transfer step is required, and therefore sample loss due to adsorption to surfaces is reduced, which is a critical issue when handling low picomole to femtomole amounts of proteins. An example is shown with on-column proteolytic digestion and subsequent elution of the digest into a reversed-phase micro-column. This is useful if the sample contains large amounts of salt or is too diluted for MALDI-MS analysis. Furthermore, by step-wise elution from the reversed-phase column, a complex digest can be fractionated, which reduces signal suppression and facilitates data interpretation in the subsequent MS-analysis. The method also proved useful for consecutive digestions with enzymes of different cleavage specificity. This is exemplified with on-column tryptic digestion, followed by reversed-phase step-wise elution, and subsequent on-target V8 protease digestion.
Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease
Shamonin, Denis P.; Bron, Esther E.; Lelieveldt, Boudewijn P. F.; Smits, Marion; Klein, Stefan; Staring, Marius
2013-01-01
Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL, a speedup factor of 2 was realized for computation of the Gaussian pyramids, and of 15-60x for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operating characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license. 
PMID:24474917
NASA Astrophysics Data System (ADS)
Lin, Zi-Jing; Li, Lin; Cazzell, Marry; Liu, Hanli
2013-03-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive imaging technique that measures the hemodynamic changes that reflect brain activity. Diffuse optical tomography (DOT), a variant of fNIRS with multi-channel NIRS measurements, has demonstrated the capability of three-dimensional (3D) reconstruction of hemodynamic changes due to brain activity. The conventional method of DOT image analysis for defining brain activation is based on a paired t-test between two different states, such as resting state versus task state. However, it has a limitation: the selection of the activation and post-activation periods is relatively subjective. General linear model (GLM) based analysis can overcome this limitation. In this study, we combine 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity associated with the risk decision-making process. Risk decision-making is an important cognitive process and thus an essential topic in the field of neuroscience. The balloon analogue risk task (BART) is a valid experimental model and has been commonly used in behavioral measures to assess human risk-taking tendency when facing risks. We utilized the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making. Voxel-wise GLM analysis was performed on 18 human participants (10 males and 8 females). In this work, we wish to demonstrate the feasibility of using voxel-wise GLM analysis to image and study cognitive functions in response to risk decision-making by DOT. Results show significant changes in the dorsolateral prefrontal cortex (DLPFC) during the active choice mode and a different hemodynamic pattern between genders, which are in good agreement with the published functional magnetic resonance imaging (fMRI) and fNIRS literature.
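The voxel-wise GLM step described above amounts to fitting the same design matrix independently at every reconstructed voxel. A minimal numpy sketch (hypothetical data and design; not the authors' DOT pipeline):

```python
import numpy as np

def voxelwise_glm(data, design):
    """Fit Y = X @ beta + noise independently at every voxel.

    data   : (n_timepoints, n_voxels) array of reconstructed signals
    design : (n_timepoints, n_regressors) design matrix (task + intercept)
    Returns per-voxel beta estimates and t-statistics for each regressor.
    """
    X = np.asarray(design, dtype=float)
    Y = np.asarray(data, dtype=float)
    beta = np.linalg.pinv(X) @ Y                 # (n_regressors, n_voxels)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof      # per-voxel noise variance
    # var(beta_j) at voxel v is sigma2_v * (X'X)^{-1}_{jj}
    xtx_inv_diag = np.diag(np.linalg.inv(X.T @ X))
    se = np.sqrt(np.outer(xtx_inv_diag, sigma2))
    return beta, beta / se

# toy example: 40 timepoints, 3 voxels, boxcar task regressor + intercept
rng = np.random.default_rng(0)
task = np.tile([0.0] * 5 + [1.0] * 5, 4)
X = np.column_stack([task, np.ones(40)])
Y = rng.normal(size=(40, 3))
Y[:, 0] += 2.0 * task                            # voxel 0 is "active"
beta, tstat = voxelwise_glm(Y, X)
```

Thresholding the resulting t-map (with an appropriate multiple-comparison correction) then yields the activation map, replacing the subjective choice of activation windows.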
Liu, Menglong; Wang, Kai; Lissenden, Cliff J.; Wang, Qiang; Zhang, Qingming; Long, Renrong; Su, Zhongqing; Cui, Fangsen
2017-01-01
Hypervelocity impact (HVI), ubiquitous in low Earth orbit with an impacting velocity in excess of 1 km/s, poses an immense threat to the safety of orbiting spacecraft. Upon penetration of the outer shielding layer of a typical two-layer shielding system, the shattered projectile, together with the jetted materials of the outer shielding material, subsequently impinge the inner shielding layer, to which pitting damage is introduced. The pitting damage includes numerous craters and cracks disorderedly scattered over a wide region. Targeting the quantitative evaluation of this sort of damage (multitudinous damage within a singular inspection region), a characterization strategy, associating linear with nonlinear features of guided ultrasonic waves, is developed. Linear-wise, changes in the signal features in the time domain (e.g., time-of-flight and energy dissipation) are extracted, for detecting gross damage whose characteristic dimensions are comparable to the wavelength of the probing wave; nonlinear-wise, changes in the signal features in the frequency domain (e.g., second harmonic generation), which are proven to be more sensitive than their linear counterparts to small-scale damage, are explored to characterize HVI-induced pitting damage scattered in the inner layer. A numerical simulation, supplemented with experimental validation, quantitatively reveals the accumulation of nonlinearity of the guided waves when the waves traverse the pitting damage, based on which linear and nonlinear damage indices are proposed. A path-based rapid imaging algorithm, in conjunction with the use of the developed linear and nonlinear indices, is developed, whereby the HVI-induced pitting damage is characterized in images in terms of the probability of occurrence. PMID:28772908
ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
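A simple way to see the piecewise linearity that the LAR/LASSO machinery exploits is the orthonormal-design special case, where the LASSO solution is coordinate-wise soft-thresholding of the OLS estimates. A minimal numpy sketch (illustrative only; not the authors' Cox path algorithm):

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form LASSO solution per coordinate when X'X = I."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# OLS estimates for a (hypothetical) orthonormal design
beta_ols = np.array([3.0, -1.5, 0.4])

# Evaluate the path on a grid of penalties: each coordinate shrinks
# linearly toward zero and then stays at zero -- piecewise linear.
lams = np.linspace(0.0, 3.5, 8)
path = np.array([soft_threshold(beta_ols, lam) for lam in lams])
```

At lam = 0 the path recovers the OLS solution; by lam = 3.5 every coefficient has been driven to zero. In the general (non-orthonormal) case the path is still piecewise linear, but the breakpoints must be tracked as in the LAR algorithm.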
Zhu, Xiaoyan; Song, Changchun; Swarzenski, Christopher M.; Guo, Yuedong; Zhang, Xinhow; Wang, Jiaoyue
2015-01-01
Northern peatlands contain a considerable share of the terrestrial carbon pool, which will be affected by future climatic variability. Using the static chamber technique, we investigated ecosystem respiration and soil respiration over two growing seasons (2012 and 2013) in a Carex lasiocarpa-dominated peatland in the Sanjiang Plain in China. We synchronously monitored the environmental factors controlling CO2 fluxes. Ecosystem respiration during these two growing seasons ranged from 33.3 to 506.7 mg CO2–C m−2 h−1. Through step-wise regression, variation in soil temperature at 10 cm depth alone explained 73.7% of the observed variance in log10(ER). The mean Q10 values ranged from 2.1 to 2.9 depending on the depth at which soil temperature was measured. The Q10 value at 10 cm depth (2.9) appears to be a good representation for herbaceous peatland in the Sanjiang Plain when applying field-estimated Q10 values in current terrestrial ecosystem models, as it yielded the best regression fit (63.2%). Soil respiration amounted to 57% of ecosystem respiration and played a major role in the peatland carbon balance in our study. Emphasis on ecosystem respiration from temperate peatlands in the Sanjiang Plain will improve our basic understanding of carbon exchange between peatland ecosystems and the atmosphere.
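The Q10 estimate described above follows directly from the log-linear regression of respiration on soil temperature: if log10(ER) = a + bT, then Q10 = 10^(10b), the factor by which respiration increases per 10 °C of warming. A hedged numpy sketch with synthetic fluxes (not the study's data):

```python
import numpy as np

def q10_from_fluxes(temp_c, er):
    """Estimate Q10 by regressing log10(ER) on soil temperature.

    Fits log10(ER) = a + b*T, so Q10 = 10**(10*b).
    Returns the Q10 estimate and the regression R^2.
    """
    b, a = np.polyfit(temp_c, np.log10(er), 1)
    r2 = np.corrcoef(temp_c, np.log10(er))[0, 1] ** 2
    return 10 ** (10 * b), r2

# synthetic fluxes generated with a known Q10 of 2.9 plus noise
rng = np.random.default_rng(1)
T = rng.uniform(5, 25, 100)
er = 50.0 * 2.9 ** (T / 10) * 10 ** rng.normal(0, 0.02, 100)
q10, r2 = q10_from_fluxes(T, er)
```

Comparing R^2 across regressions against temperature measured at different depths is one way to pick the most representative depth, as the study does with the 10 cm series.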
Zhu, Hongxiao; Morris, Jeffrey S; Wei, Fengrong; Cox, Dennis D
2017-07-01
Many scientific studies measure different types of high-dimensional signals or images from the same subject, producing multivariate functional data. These functional measurements carry different types of information about the scientific process, and a joint analysis that integrates information across them may provide new insights into the underlying mechanism for the phenomenon under study. Motivated by fluorescence spectroscopy data in a cervical pre-cancer study, a multivariate functional response regression model is proposed, which treats multivariate functional observations as responses and a common set of covariates as predictors. This novel modeling framework simultaneously accounts for correlations between functional variables and potential multi-level structures in data that are induced by experimental design. The model is fitted by performing a two-stage linear transformation: a basis expansion applied to each functional variable, followed by principal component analysis for the concatenated basis coefficients. This transformation effectively reduces the intra- and inter-function correlations and facilitates fast and convenient calculation. A fully Bayesian approach is adopted to sample the model parameters in the transformed space, and posterior inference is performed after inverse-transforming the regression coefficients back to the original data domain. The proposed approach produces functional tests that flag local regions on the functional effects, while controlling the overall experiment-wise error rate or false discovery rate. It also enables functional discriminant analysis through posterior predictive calculation. Analysis of the fluorescence spectroscopy data reveals local regions with differential expressions across the pre-cancer and normal samples. These regions may serve as biomarkers for prognosis and disease assessment.
Cure-WISE: HETDEX data reduction with Astro-WISE
NASA Astrophysics Data System (ADS)
Snigula, J. M.; Cornell, M. E.; Drory, N.; Fabricius, Max.; Landriau, M.; Hill, G. J.; Gebhardt, K.
2012-09-01
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey instrument, VIRUS, consists of 75 IFUs distributed across the 22-arcmin field of the upgraded 9.2-m HET. Each exposure gathers 33,600 spectra. Over the projected five year run of the survey we expect about 170 GB of data per night. For the data reduction we developed the Cure pipeline. Cure is designed to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated checks, a searchable interface to the detected sources, and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.
Relationship of Mobile Learning Readiness to Teacher Proficiency in Classroom Technology Integration
ERIC Educational Resources Information Center
Christensen, Rhonda; Knezek, Gerald
2016-01-01
Mobile learning readiness as a new aspect of technology integration for classroom teachers is confirmed through the findings of this study to be significantly aligned with well-established measures based on older information technologies. The Mobile Learning Readiness Survey (MLRS) generally exhibits the desirable properties of step-wise increases…
IMRT verification using a radiochromic/optical-CT dosimetry system
NASA Astrophysics Data System (ADS)
Oldham, Mark; Guo, Pengyi; Gluckman, Gary; Adamovics, John
2006-12-01
This work represents our first experiences relating to IMRT verification using a relatively new 3D dosimetry system consisting of a PRESAGE™ dosimeter (Heuris Pharma LLC) and an optical-CT scanning system (OCTOPUS™, MGS Inc). This work builds in a step-wise manner on prior work in our lab.
Equilibrium Moisture Content of Common Fine Fuels in Southeastern Forests
W.H. Blackmarr
1971-01-01
Nine different kinds of forest litter found in ground fuel complexes of southeastern forests were subjected to step-wise changes in relative humidity to determine their equilibrium moisture content (EMC) at different levels of relative humidity. The adsorption and desorption EMC curves for these fuels exhibited the typical hysteresis loop...
NASA Astrophysics Data System (ADS)
Brocks, Sebastian; Bendig, Juliane; Bareth, Georg
2016-10-01
Crop surface models (CSMs) representing plant height above ground level are a useful tool for monitoring in-field crop growth variability and enabling precision agriculture applications. A semiautomated system for generating CSMs was implemented. It combines an Android application running on a set of smart cameras for image acquisition and transmission and a set of Python scripts automating the structure-from-motion (SfM) software package Agisoft Photoscan and ArcGIS. Only ground-control-point (GCP) marking was performed manually. This system was set up on a barley field experiment with nine different barley cultivars in the growing period of 2014. Images were acquired three times a day for a period of two months. CSMs were successfully generated for 95 out of 98 acquisitions between May 2 and June 30. The best linear regressions of the CSM-derived plot-wise averaged plant heights against manual plant height measurements taken at four dates resulted in a coefficient of determination R2 of 0.87 and a root-mean-square error (RMSE) of 0.08 m, with Willmott's refined index of model performance dr equaling 0.78. In total, 103 mean plot heights were used in the regression based on the noon acquisition time. The presented system succeeded in the semiautomated monitoring of crop height from plot scale to field scale.
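The three agreement statistics reported above (R2, RMSE, and Willmott's refined index dr) can be computed as follows; the plant-height values are illustrative stand-ins, and the dr formula follows Willmott et al. (2012):

```python
import numpy as np

def agreement_metrics(obs, pred):
    """R^2, RMSE, and Willmott's refined index of agreement d_r."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    # d_r compares summed absolute error to twice the summed absolute
    # deviation of the observations from their mean (Willmott et al. 2012)
    mae_sum = np.abs(pred - obs).sum()
    ref = 2.0 * np.abs(obs - obs.mean()).sum()
    dr = 1.0 - mae_sum / ref if mae_sum <= ref else ref / mae_sum - 1.0
    return r2, rmse, dr

# manual plant heights (m) vs CSM-derived heights -- hypothetical values
manual = np.array([0.31, 0.45, 0.52, 0.60, 0.74, 0.88])
csm = np.array([0.28, 0.40, 0.55, 0.57, 0.70, 0.80])
r2, rmse, dr = agreement_metrics(manual, csm)
```

Unlike R2, dr penalizes absolute error rather than rewarding mere linear association, which is why both are worth reporting for model-vs-measurement comparisons.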
Process for producing a high emittance coating and resulting article
NASA Technical Reports Server (NTRS)
Le, Huong G. (Inventor); O'Brien, Dudley L. (Inventor)
1993-01-01
Process for anodizing aluminum or its alloys to obtain a surface particularly having high infrared emittance by anodizing an aluminum or aluminum alloy substrate surface in an aqueous sulfuric acid solution at elevated temperature and by a step-wise current density procedure, followed by sealing the resulting anodized surface. In a preferred embodiment the aluminum or aluminum alloy substrate is first alkaline cleaned and then chemically brightened in an acid bath. The resulting cleaned substrate is anodized in a 15% by weight sulfuric acid bath maintained at a temperature of 30 °C. Anodizing is carried out by a step-wise current density procedure at 19 amperes per square ft. (ASF) for 20 minutes, 15 ASF for 20 minutes and 10 ASF for 20 minutes. After anodizing the sample is sealed by immersion in water at 200 °F and then air dried. The resulting coating has a high infrared emissivity of about 0.92 and a solar absorptivity of about 0.2, for a 5657 aluminum alloy, and a relatively thick anodic coating of about 1 mil.
Lin, Johnson; Sharma, Vikas; Milase, Ridwaan; Mbhense, Ntuthuko
2016-06-01
Phenol degradation enhancement of Acinetobacter strain V2 by a step-wise continuous acclimation process was investigated. At the end of 8 months, three stable adapted strains, designated R, G, and Y, had been developed with sub-lethal phenol concentrations of 800, 1100, and 1400 mg/L, respectively, from the 400 mg/L V2 parent strain. All strains degraded phenol at their sub-lethal levels within 24 h; their growth rates increased as the acclimation process continued, and they retained their degradation properties even after storage at -80 °C for more than 3 years. All adapted strains appeared coccoid with an ungranulated surface under the electron microscope, compared to the typical rod-shaped parental strain V2. The adapted Y strain also possessed superior degradation ability against aniline, benzoate, and toluene. This study demonstrated the use of a long-term acclimation process to develop efficient, better pollutant-degrading bacterial strains with potential for industrial and environmental bioremediation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Komorowski, A.; James, G. M.; Philippe, C.; Gryglewski, G.; Bauer, A.; Hienert, M.; Spies, M.; Kautzky, A.; Vanicek, T.; Hahn, A.; Traub-Weidinger, T.; Winkler, D.; Wadsak, W.; Mitterhauser, M.; Hacker, M.; Kasper, S.; Lanzenberger, R.
2017-01-01
Abstract Regional differences in posttranscriptional mechanisms may influence in vivo protein densities. The association of positron emission tomography (PET) imaging data from 112 healthy controls and gene expression values from the Allen Human Brain Atlas, based on post-mortem brains, was investigated for key serotonergic proteins. PET binding values and gene expression intensities were correlated for the main inhibitory (5-HT1A) and excitatory (5-HT2A) serotonin receptor, the serotonin transporter (SERT) as well as monoamine oxidase-A (MAO-A), using Spearman's correlation coefficients (rs) in a voxel-wise and region-wise analysis. Correlations indicated a strong linear relationship between gene and protein expression for both the 5-HT1A (voxel-wise rs = 0.71; region-wise rs = 0.93) and the 5-HT2A receptor (rs = 0.66; 0.75), but only a weak association for MAO-A (rs = 0.26; 0.66) and no clear correlation for SERT (rs = 0.17; 0.29). Additionally, region-wise correlations were performed using mRNA expression from the HBT, yielding comparable results (5-HT1A rs = 0.82; 5-HT2A rs = 0.88; MAO-A rs = 0.50; SERT rs = −0.01). The SERT and MAO-A appear to be regulated in a region-specific manner across the whole brain. In contrast, the serotonin-1A and -2A receptors are presumably targeted by common posttranscriptional processes similar in all brain areas, suggesting the applicability of mRNA expression as a surrogate parameter for density of these proteins. PMID:27909009
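Spearman's rs as used above is simply the Pearson correlation computed on rank-transformed data (assuming no ties). A small numpy sketch with hypothetical region-wise values:

```python
import numpy as np

def spearman_rs(x, y):
    """Spearman rank correlation (no ties): Pearson on the ranks."""
    rx = np.argsort(np.argsort(x))   # rank of each element
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# illustrative region-wise values: PET binding vs mRNA expression intensity
rng = np.random.default_rng(2)
mrna = rng.uniform(0, 1, 30)
pet = mrna + rng.normal(0, 0.15, 30)   # monotonically related + noise
rs = spearman_rs(pet, mrna)
```

Because it depends only on ranks, rs is robust to the different measurement scales of PET binding and microarray intensity, which is presumably why it is preferred here over Pearson's r.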
Water Clouds in the Atmosphere of a Jupiter-Like Brown Dwarf
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-07-01
Lying a mere 7.2 light-years away, WISE 0855 is the nearest known planetary-mass object. This brown dwarf, a failed star just slightly more massive than Jupiter, is also the coldest known compact body outside of our solar system, and new observations have now provided us with a first look at its atmosphere. Temperature-pressure profiles of Jupiter, WISE 0855, and what was previously the coldest extrasolar object with a 5-μm spectrum, Gl 570D. Thicker lines show the location of each object's 5-μm photosphere. WISE 0855's and Jupiter's photospheres are near the point where water starts to condense out into clouds (dashed line). [Skemer et al. 2016] Challenging Observations: With a chilly temperature of 250 K, the brown dwarf WISE 0855 is the closest thing we've been able to observe to a body resembling Jupiter's ~130 K. WISE 0855 therefore presents an intriguing opportunity to directly study the atmosphere of an object whose physical characteristics are similar to our own gas giants'. But studying the atmospheric characteristics of such a body is tricky. WISE 0855 is too cold and faint to obtain traditional optical or near-infrared (< 2.5 μm) spectroscopy of it. Luckily, like Jupiter, the opacity of its gas allows thermal emission from its deep atmosphere to escape through an atmospheric window around ~5 μm. A team of scientists led by Andrew Skemer (UC Santa Cruz) set out to observe WISE 0855 in this window with the Gemini-North telescope and the Gemini Near-Infrared Spectrograph. Though WISE 0855 is five times fainter than the faintest object previously detected with ground-based 5-μm spectroscopy, the dry air of Mauna Kea (and a lot of patience!) allowed the team to obtain unprecedented spectra of this object. WISE 0855's spectrum shows absorption features consistent with water vapor, and it is best fit by a cloudy brown-dwarf model. [Skemer et al. 2016] Water Clouds Found: Exoplanets and brown dwarfs cooler than ~350 K are expected to form water ice clouds in their upper atmospheres, and these clouds should be thick enough to alter the emergent spectrum that we observe. Does WISE 0855 fit this picture? Yes! By modeling the spectrum of WISE 0855, Skemer and collaborators demonstrate that it is completely dominated by water absorption lines. This represents the first evidence of water clouds in a body outside of our solar system. Atmospheric Turbulence: WISE 0855's water absorption profile bears a striking resemblance to Jupiter's. Where the spectra differ, however, is at the lower-wavelength end of the observations: Jupiter also shows absorption by a molecule called phosphine, whereas WISE 0855 doesn't. Jupiter's spectrum is strikingly similar to WISE 0855's from 4.8 to 5.2 μm, where both objects are dominated by water absorption. But from 4.5 to 4.8 μm, Jupiter's spectrum is dominated by phosphine absorption, indicating a turbulent atmosphere, while WISE 0855's is not. [Skemer et al. 2016] Interestingly, if the bodies were both in equilibrium, neither WISE 0855 nor Jupiter should contain detectable phosphine in its photosphere. The reason Jupiter does is that there is a significant amount of turbulent mixing in its atmosphere that dredges up phosphine from the planet's hot interior. The fact that WISE 0855 shows no sign of phosphine suggests its atmosphere may be much less turbulent than Jupiter's. These observations represent an important step as we attempt to understand the atmospheres of extrasolar bodies that are similar to our own gas-giant planets. Observations of other such bodies in the future, especially using new technology like the James Webb Space Telescope, will allow us to learn more about the dynamical and chemical processes that occur in cold atmospheres. Citation: Andrew J. Skemer et al. 2016 ApJ 826 L17. doi:10.3847/2041-8205/826/2/L17
Ohno, Yoshiharu; Fujisawa, Yasuko; Takenaka, Daisuke; Kaminaga, Shigeo; Seki, Shinichiro; Sugihara, Naoki; Yoshikawa, Takeshi
2018-02-01
The objective of this study was to compare the capability of xenon-enhanced area-detector CT (ADCT) performed with a subtraction technique and coregistered 81mKr-ventilation SPECT/CT for the assessment of pulmonary functional loss and disease severity in smokers. Forty-six consecutive smokers (32 men and 14 women; mean age, 67.0 years) underwent prospective unenhanced and xenon-enhanced ADCT, 81mKr-ventilation SPECT/CT, and pulmonary function tests. Disease severity was evaluated according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification. CT-based functional lung volume (FLV), the percentage of wall area to total airway area (WA%), and ventilated FLV on xenon-enhanced ADCT and SPECT/CT were calculated for each smoker. All indexes were correlated with percentage of forced expiratory volume in 1 second (%FEV1) using step-wise regression analyses, and univariate and multivariate logistic regression analyses were performed. In addition, the diagnostic accuracy of the proposed model was compared with that of each radiologic index by means of McNemar analysis. Multivariate logistic regression showed that %FEV1 was significantly affected (r = 0.77, r2 = 0.59) by two factors: the first factor, ventilated FLV on xenon-enhanced ADCT (p < 0.0001); and the second factor, WA% (p = 0.004). Univariate logistic regression analyses indicated that all indexes significantly affected GOLD classification (p < 0.05). Multivariate logistic regression analyses revealed that ventilated FLV on xenon-enhanced ADCT and CT-based FLV significantly influenced GOLD classification (p < 0.0001). The diagnostic accuracy of the proposed model was significantly higher than that of ventilated FLV on SPECT/CT (p = 0.03) and WA% (p = 0.008). Xenon-enhanced ADCT is more effective than 81mKr-ventilation SPECT/CT for the assessment of pulmonary functional loss and disease severity.
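Step-wise regression, used above to relate the imaging indexes to %FEV1, greedily adds the predictor that most improves the fit at each step. A minimal forward-selection sketch in numpy (toy data, not the study's CT indexes):

```python
import numpy as np

def forward_stepwise(X, y, max_terms=2):
    """Greedy forward selection: at each step add the column that most
    reduces the residual sum of squares of an OLS fit (with intercept).

    Returns the selected column indices, in order of entry.
    """
    n = X.shape[0]
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_terms:
        best_j, best_rss = None, np.inf
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = ((y - A @ beta) ** 2).sum()
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# toy stand-in: column 0 drives the response, column 2 adds a little,
# column 1 is pure noise (hypothetical data only)
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 80)
order = forward_stepwise(X, y)
```

Production step-wise procedures also apply an entry/exit criterion (e.g., an F-test or AIC) to decide when to stop; here the number of terms is simply capped for brevity.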
Ecological validity of neuropsychological assessment and perceived employability.
Wen, Johnny H; Boone, Kyle; Kim, Kevin
2006-11-01
Ecological validity studies that have examined the relationship between cognitive abilities and employment in psychiatric and medical populations have found that a wide range of cognitive domains predict employability, although memory and executive skills appear to be the most important. However, no information is available regarding a patient's self-perceived work attributes and objective neuropsychological performance, and whether the same cognitive domains associated with successful employment are also related to a patient's self-perception of work competence. In the present study, 73 medical and psychiatric patients underwent comprehensive neuropsychological assessment. Step-wise multiple regression analyses revealed that the visual-spatial domain was the only significant predictor of self-perceived work attributes and work competence as measured by the Working Inventory (WI) and the Work Adjustment Inventory (WAI), accounting for 7% to 10% of inventory score variability. The results raise the intriguing possibility that targeting of visual spatial skills for remediation and development might play a separate and unique role in the vocational rehabilitation of a lower SES population, specifically, by leading to enhanced self-perception of work competence as these individuals attempt to enter the job market.
Dreaming in Spain: Parental Determinants of Immigrant Children's Ambition
Portes, Alejandro; Vickstrom, Erik; Haller, William; Aparicio, Rosa
2016-01-01
We examine determinants of educational and occupational aspirations and expectations among children of immigrants in Spain on the basis of a unique data set that includes statistically representative data for foreign-origin secondary students in Madrid and Barcelona plus a sample of one-fourth of their parents. Independently collected data for both generations allow us to establish effects of parental characteristics on children’s orientations without the confounding potential inherent in children’s reports about parents. We analyze first determinants of parental ambition and, through a series of step-wise regressions, the effects of these goals and other parental and family characteristics on children’s aspirations and expectations. A structural equations model synthesizes results of the analysis. The model confirms predictions from the research literature, especially those based on the Wisconsin status attainment model, but rejects others, including the predicted significance of private vs. public school attendance. Parental ambition, knowledge of Spanish by parents and children, gender, and children’s age are major determinants of youths’ educational and occupational goals. These results have direct implications for policy; these are discussed in the conclusion. PMID:27761056
Analysis of patient load data from the 2002 FIFA World Cup Korea/Japan.
Morimura, Naoto; Katsumi, Atsushi; Koido, Yuichi; Sugimoto, Katsuhiko; Fuse, Akira; Asai, Yasfumi; Ishii, Noboru; Ishihara, Toru; Fujii, Chiho; Sugiyama, Mitsugi; Henmi, Hiroshi; Yamamoto, Yasuhiro
2004-01-01
Past history of mass casualties related to international football games brought the importance of practical planning, preparedness, simulation training, and analysis of potential patient presentations to the forefront of emergency research. The Japanese Ministry of Health, Labor, and Welfare established the Health Research Team (HRT-MHLW) for the 2002 FIFA World Cup (FIFAWC). The HRT-MHLW collected patient data related to the games and analyzed the factors associated with patient presentations. A total of 1661 patients presented for evaluation and care across all 32 games in Japan. The patient presentation rate per 1000 spectators per game was 1.21, and the transport-to-hospital rate was 0.05. Step-wise regression analysis identified that the patient presentation rate increased where access was difficult and decreased as the number of total spectators increased (p < 0.0001, r = 0.823, r2 = 0.677). In order to develop mass-gathering medical-care plans in accordance with the types and sizes of mass gatherings, it is necessary to collect data and examine risk factors for patient presentations for a variety of events.
Examining Arguments Generated by Year 5, 7, and 10 Students in Science Classrooms
NASA Astrophysics Data System (ADS)
Choi, Aeran; Notebaert, Andrew; Diaz, Juan; Hand, Brian
2010-03-01
A critical component of science is the role of inquiry and argument in moving scientific knowledge forward. However, while students are expected to engage in inquiry activities in science classrooms, there is not always a similar emphasis on the role of argument within the inquiry activities. Building from previous studies on the Science Writing Heuristic (SWH), we were keen to find out if the writing structure used in the SWH approach helped students in Year 5, 7, and 10 to create well-constructed arguments. We were also interested in examining which argument components were important for the quality of arguments generated by these students. Two hundred and ninety-six writing samples were scored using an analysis framework to evaluate the quality of arguments. Step-wise multiple regression analyses were conducted to determine important argument components. The results of this study suggest that the SWH approach is useful in assisting students to develop reasonable arguments. The critical element determining the quality of the arguments is the relationship between the student's written claims and his or her evidence.
ERIC Educational Resources Information Center
Cheadle, Jacob E.
2008-01-01
Drawing on longitudinal data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999, this study used IRT modeling to operationalize a measure of parental educational investments based on Lareau's notion of concerted cultivation. It used multilevel piece-wise growth models regressing children's math and reading achievement…
Reconstruction of initial pressure from limited view photoacoustic images using deep learning
NASA Astrophysics Data System (ADS)
Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena
2018-02-01
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
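The pixel-wise regression loss mentioned above is a mean squared error averaged over every pixel of the predicted initial-pressure image. A numpy sketch of just this loss (the actual network is a U-Net-like architecture; the arrays here are hypothetical):

```python
import numpy as np

def pixelwise_mse(pred, target, mask=None):
    """Mean squared error averaged over every pixel of every image.

    pred, target : (batch, H, W) arrays of predicted / true initial pressure
    mask         : optional (batch, H, W) boolean array restricting the loss
                   to pixels inside the imaged region
    """
    err = (pred - target) ** 2
    if mask is not None:
        return err[mask].mean()
    return err.mean()

# toy batch of 4 "images": prediction off by a constant 0.5 everywhere
target = np.zeros((4, 8, 8))
pred = np.full((4, 8, 8), 0.5)
loss = pixelwise_mse(pred, target)
```

Treating reconstruction as per-pixel regression (rather than classification) is what lets the network output a continuous initial-pressure estimate at every location of the limited-view scan.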
Mortamais, Marion; Chevrier, Cécile; Philippat, Claire; Petit, Claire; Calafat, Antonia M; Ye, Xiaoyun; Silva, Manori J; Brambilla, Christian; Eijkemans, Marinus J C; Charles, Marie-Aline; Cordier, Sylvaine; Slama, Rémy
2012-04-26
Environmental epidemiology and biomonitoring studies typically rely on biological samples to assay the concentration of non-persistent exposure biomarkers. Between-participant variations in sampling conditions of these biological samples constitute a potential source of exposure misclassification. Few studies have attempted to correct biomarker levels for this error. We aimed to assess the influence of sampling conditions on concentrations of urinary biomarkers of select phenols and phthalates, two widely-produced families of chemicals, and to standardize biomarker concentrations on sampling conditions. Urine samples were collected between 2002 and 2006 among 287 pregnant women from the Eden and Pélagie cohorts, in which phthalate and phenol metabolite levels were assayed. We applied a 2-step standardization method based on regression residuals. First, the influences of sampling conditions (including sampling hour and duration of storage before freezing) and of creatinine levels on biomarker concentrations were characterized using adjusted linear regression models. In the second step, the model estimates were used to remove the variability in biomarker concentrations due to sampling conditions and to standardize concentrations as if all samples had been collected under the same conditions (e.g., same hour of urine collection). Sampling hour was associated with concentrations of several exposure biomarkers. After standardization for sampling conditions, shifts in median concentrations ranged from −38% for 2,5-dichlorophenol to +80% for a metabolite of diisodecyl phthalate. However, at the individual level, standardized biomarker levels were strongly correlated (correlation coefficients above 0.80) with unstandardized measures. Sampling conditions, such as sampling hour, should be systematically collected in biomarker-based studies, in particular when the biomarker half-life is short.
The 2-step standardization method based on regression residuals that we proposed in order to limit the impact of heterogeneity in sampling conditions could be further tested in studies describing levels of biomarkers or their influence on health.
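The residual-based standardization described above can be sketched in a few lines. The data, the single sampling condition (hour of collection), and the reference hour below are all hypothetical; the study's actual models additionally adjusted for storage duration, creatinine, and other covariates.

```python
import numpy as np

def standardize_on_condition(y, hour, ref_hour=9.0):
    """Two-step regression-residual standardization (sketch).

    Step 1: fit a linear model of the (log) biomarker level on the
    sampling condition (here, the hour of urine collection).
    Step 2: subtract the fitted condition effect so every sample is
    expressed as if it had been collected at ref_hour.
    """
    X = np.column_stack([np.ones_like(hour), hour])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # [intercept, slope]
    return y - beta[1] * (hour - ref_hour)

rng = np.random.default_rng(0)
hour = rng.uniform(7.0, 19.0, 200)                 # sampling hour
true_exposure = rng.normal(2.0, 0.5, 200)          # "true" log exposure
y = true_exposure + 0.1 * hour + rng.normal(0.0, 0.05, 200)  # hour-dependent bias
y_std = standardize_on_condition(y, hour)
```

Re-regressing the standardized values on sampling hour gives a slope of zero by construction, while the standardized values remain strongly correlated with the unstandardized ones, mirroring the coefficients above 0.80 reported in the abstract.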
Step-wise supercritical extraction of carbonaceous residua
Warzinski, Robert P.
1987-01-01
A method of fractionating a mixture containing high-boiling carbonaceous material and normally solid mineral matter includes processing with a plurality of different supercritical solvents. The mixture is treated with a first solvent of high critical temperature and solvent capacity to extract a large fraction as solute. The solute is released as a liquid from the solvent and successively treated with other supercritical solvents of different critical values to extract fractions of differing properties. Fractionation can be supplemented by solute reflux over a temperature gradient, step-wise pressure let-down, and extractions at varying temperature and pressure values.
Wang, X; Jiao, Y; Tang, T; Wang, H; Lu, Z
2013-12-19
Intrinsic connectivity networks (ICNs) are composed of spatial components and time courses. The spatial components of ICNs can be recovered with moderate-to-high reliability. To our knowledge, however, few studies have focused on the reliability of the temporal patterns of ICNs based on their individual time courses. The goals of this study were twofold: to investigate the test-retest reliability of temporal patterns of ICNs, and to analyze informative univariate metrics derived from them. Additionally, a correlation analysis was performed to enhance interpretability. Our study included three datasets: (a) short- and long-term scans, (b) multi-band echo-planar imaging (mEPI), and (c) eyes open or closed. Using dual regression, we obtained the time courses of ICNs for each subject. To produce temporal patterns for ICNs, we applied two categories of univariate metrics: network-wise complexity and network-wise low-frequency oscillation. Furthermore, we validated the test-retest reliability of each metric. The network-wise temporal patterns of most ICNs (especially the default mode network, DMN) exhibited moderate-to-high reliability and reproducibility under different scan conditions. Network-wise complexity for the DMN exhibited only fair reliability (ICC<0.5) in the eyes-closed sessions. Notably, our results support mEPI as a useful method with high reliability and reproducibility. In addition, these temporal patterns carried physiological meaning, and certain temporal patterns correlated with the node strength of the corresponding ICN. Overall, the network-wise temporal patterns of ICNs were reliable and informative and could complement the spatial patterns of ICNs in further studies. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cohen, B. E.; Cassata, W.; Mark, D. F.; Tomkinson, T.; Lee, M. R.; Smith, C. L.
2015-12-01
All meteorites contain variable amounts of cosmogenic 38Ar and 36Ar produced during extraterrestrial exposure, and in order to calculate reliable 40Ar/39Ar ages this cosmogenic Ar must be removed from the total Ar budget. The amount of cosmogenic Ar has usually been calculated from the step-wise 38Ar/36Ar, minimum 36Ar/37Ar, or average 38Arcosmogenic/37Ar from the irradiated meteorite fragment. However, if Cl is present in the meteorite, then these values will be disturbed by Ar produced during laboratory neutron irradiation of Cl. Chlorine is likely to be a particular issue for the Nakhlite group of Martian meteorites, which can contain over 1000 ppm Cl [1]. An alternative method for the cosmogenic Ar correction uses the meteorite's exposure age as calculated from an un-irradiated fragment and step-wise production rates based on the measured Ca/K [2]. This calculation is independent of the Cl concentration. We applied this correction method to seven Nakhlites, analyzed in duplicate or triplicate. Selected samples were analyzed at both Lawrence Livermore National Laboratory and SUERC to ensure inter-laboratory reproducibility. We find that the cosmogenic argon correction of [2] has a significant influence on the ages calculated for individual steps, particularly for those at lower temperatures (i.e., differences of several tens of millions of years for some steps). The lower-temperature steps are more influenced by the alternate cosmogenic correction method of [2], as these analyses yielded higher concentrations of Cl-derived 38Ar. As a result, the Nakhlite data corrected using [2] yields step-heating spectra that are flat or nearly so across >70% of the release spectra (in contrast to the downward-stepping spectra often reported for Nakhlite samples), allowing for the calculation of precise emplacement ages for these meteorites. [1] Cartwright J. A. et al. (2013) GCA, 105, 255-293. [2] Cassata W. S., and Borg L. E. (2015) 46th LPSC, Abstract #2742.
NASA Astrophysics Data System (ADS)
Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Martian or any other LIBS data lies in calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. 
These results are attributed to the high dimensionality of the data (6144 channels) relative to the small number of samples studied. The best-performing models were SVR-Lin for SiO2, MgO, Fe2O3, and Na2O, lasso for Al2O3, elastic net for MnO, and PLS-1 for CaO, TiO2, and K2O. Although these differences in performance between methods were identified, most of the models produced comparable results at p ≤ 0.05, and all techniques except kNN yielded statistically indistinguishable results. It is likely that a combination of models could be used together to yield a lower total error of prediction, depending on the requirements of the user.
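The PRESS statistic used to compare the nine models can be computed without refitting n times for an ordinary least-squares model, via the leave-one-out shortcut. This is a generic OLS sketch on invented data, not a reimplementation of the study's PLS or lasso models.

```python
import numpy as np

def press(X, y):
    """Predicted residual sum of squares for OLS via the leave-one-out
    identity: the deleted residual is e_i / (1 - h_ii), where h_ii is a
    diagonal element of the hat matrix H = X (X'X)^-1 X'."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    loo_resid = (y - H @ y) / (1.0 - np.diag(H))
    return float(np.sum(loo_resid ** 2))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(12), rng.normal(size=(12, 2))])  # intercept + 2 channels
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.3, size=12)
p = press(X, y)
```

The shortcut is algebraically exact, so it matches an explicit loop that refits the model with each sample held out in turn.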
NASA Astrophysics Data System (ADS)
Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto
2000-12-01
The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements over a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; the chelating resin was then separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any systematic error other than that due to matrix effects, the accuracy of the pre-concentration step and the contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards, and the analyte addition technique. The empirical models proved to efficiently reduce the interferences occurring in the analysis of real samples, yielding a greater improvement in accuracy than the other calibration methods.
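The correction idea can be illustrated with a minimal sketch, assuming a purely additive linear matrix effect and toy concentrations (the study's models were fitted to the measured Na, K, Mg and Ca levels of real samples):

```python
import numpy as np

def fit_matrix_model(C, signal):
    """Multiple linear regression of analyte signal on matrix-element
    concentrations (columns of C), with an intercept term."""
    X = np.column_stack([np.ones(len(C)), C])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta  # [intercept, effect of element 1, effect of element 2, ...]

def correct_signal(beta, c_sample, measured):
    """Subtract the predicted matrix suppression/enhancement so the
    measured signal corresponds to a matrix-free standard."""
    return measured - beta[1:] @ c_sample

rng = np.random.default_rng(3)
C = rng.normal(size=(8, 2))                       # two matrix elements, toy units
signal = 2.0 - 0.5 * C[:, 0] + 0.3 * C[:, 1]      # exact additive matrix effect
beta = fit_matrix_model(C, signal)
corrected = correct_signal(beta, np.array([1.0, 2.0]), 5.0)
```

On noiseless synthetic data the regression recovers the true coefficients exactly, and the correction removes precisely the simulated matrix contribution.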
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
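Two of the four DSAS rate methods reduce to one-liners; the dates and positions below are invented for illustration. The endpoint rate uses only the oldest and most recent shorelines, while the regression rate uses all of them.

```python
import numpy as np

def endpoint_rate(years, positions):
    """End point rate: net shoreline movement over elapsed time (m/yr)."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    """Simple linear regression rate: OLS slope of position vs. date."""
    return np.polyfit(years, positions, 1)[0]

years = np.array([1950.0, 1972.0, 1994.0, 2006.0])
pos = np.array([0.0, 40.0, 95.0, 120.0])   # metres seaward of the baseline
epr = endpoint_rate(years, pos)
lrr = linear_regression_rate(years, pos)
```

On noisy records the two rates differ, which is why DSAS also reports the standard error and confidence interval of the regression estimates.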
Transient Growth Analysis of Compressible Boundary Layers with Parabolized Stability Equations
NASA Technical Reports Server (NTRS)
Paredes, Pedro; Choudhari, Meelan M.; Li, Fei; Chang, Chau-Lyan
2016-01-01
The linear form of the parabolized stability equations (PSE) is used in a variational approach to extend the previous body of results for the optimal, non-modal disturbance growth in boundary layer flows. This methodology includes the non-parallel effects associated with the spatial development of boundary layer flows. As noted in the literature, the optimal initial disturbances correspond to steady counter-rotating stream-wise vortices, which subsequently lead to the formation of stream-wise-elongated structures, i.e., streaks, via a lift-up effect. The parameter space for optimal growth is extended to the hypersonic Mach number regime without any high enthalpy effects, and the effect of wall cooling is studied with particular emphasis on the role of the initial disturbance location and the value of the span-wise wavenumber that leads to the maximum energy growth up to a specified location. Unlike previous predictions that used a basic state obtained from a self-similar solution to the boundary layer equations, mean flow solutions based on the full Navier-Stokes (NS) equations are used in select cases to help account for the viscous-inviscid interaction near the leading edge of the plate and also for the weak shock wave emanating from that region. These differences in the base flow lead to an increasing reduction with Mach number in the magnitude of optimal growth relative to the predictions based on the self-similar mean-flow approximation. Finally, the maximum optimal energy gain for the favorable pressure gradient boundary layer near a planar stagnation point is found to be substantially weaker than that in a zero pressure gradient Blasius boundary layer.
The Space-Wise Global Gravity Model from GOCE Nominal Mission Data
NASA Astrophysics Data System (ADS)
Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.
2011-12-01
In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data each). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude. 
These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.
A fast ergodic algorithm for generating ensembles of equilateral random polygons
NASA Astrophysics Data System (ADS)
Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.
2009-03-01
Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a substantial improvement over the existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
A novel minimum cost maximum power algorithm for future smart home energy management.
Singaravelan, A; Kowsalya, M
2017-11-01
With the latest developments in smart grid technology, energy management systems can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed to schedule electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible. The problem formulation was constructed from practical constraints that allow the proposed algorithm to cope with real-time situations. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem and was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with the input data used by an existing method, and its results were compared with that method for validation. The comparison shows that the proposed algorithm reduces consumer electricity cost and peak demand to an optimum level, with 100% task completion and without sacrificing consumer comfort.
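The cost-versus-peak-demand trade-off can be illustrated with a toy greedy scheduler. This is only an illustration of the idea, not the authors' MCMP algorithm or a MILP solution: each unit of an interruptible appliance's energy is placed in the cheapest hour that still has headroom under the peak-demand cap.

```python
def schedule_interruptible(energy, prices, cap, base_load):
    """Greedy sketch: fill the cheapest hours first, never letting the
    total load in any hour exceed the peak-demand cap. Returns the new
    load profile and any energy that could not be placed."""
    load = list(base_load)
    for h in sorted(range(len(prices)), key=lambda h: prices[h]):
        if energy <= 0:
            break
        take = min(cap - load[h], energy)   # headroom in this hour
        if take > 0:
            load[h] += take
            energy -= take
    return load, energy

# 4 hours, 4 kWh of interruptible demand, 3 kW peak cap.
load, left = schedule_interruptible(4, [5, 1, 3, 2], 3, [1, 2, 0, 1])
```

A real MILP formulation would additionally encode contiguity constraints for uninterruptible appliances, which this greedy pass deliberately ignores.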
Better Living through Wise Use of Resources. Bulletin, 1950, No. 15
ERIC Educational Resources Information Center
Hatcher, Halene
1950-01-01
As never before, nations the world over are considering conservation a problem of vital concern to all peoples and an obligation which must be accepted by each person. It is becoming increasingly recognized that steps leading toward the establishment of harmonious relations between man and his environment will go a long way toward resolving the…
Streamline Basal Application of Herbicide for Small-Stem Hardwood Control
James H. Miller
1990-01-01
Abstract. The effectiveness of low-volume basal application of herbicide - "streamline" application - was evaluated on 25 hardwood species and loblolly pine. Test mixtures were step-wise rates of Garlon 4 mixed in diesel fuel with a penetrant added. Most comparisons tested 10%, 20%, and 30% mixtures of Garlon 4, while tests with...
PHAGE FORMATION IN STAPHYLOCOCCUS MUSCAE CULTURES
Price, Winston H.
1948-01-01
1. The release of S. muscae phage in veal infusion medium is correlated with lysis of the host. 2. The release of the bacterial virus in Fildes' synthetic medium occurs in a step-wise manner before observable lysis of the cells occurs. This result has been confirmed by both turbidimetric readings and direct microscopic examination of the infected cells. PMID:18891146
Cure-WISE: HETDEX Data Reduction with Astro-WISE
NASA Astrophysics Data System (ADS)
Snigula, J. M.; Drory, N.; Fabricius, M.; Landriau, M.; Montesano, F.; Hill, G. J.; Gebhardt, K.; Cornell, M. E.
2014-05-01
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX, Hill et al. 2012b) is a blind spectroscopic survey to map the evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey will use an array of 75 integral field spectrographs called the Visible Integral field Replicable Unit (IFU) Spectrograph (VIRUS, Hill et al. 2012c). The 10 m HET (Ramsey et al. 1998) is currently receiving a wide-field upgrade (Hill et al. 2012a) to accommodate the spectrographs and to provide the needed field of view. Over the projected five-year run of the survey we expect to obtain approximately 170 GB of data each night. For the data reduction we developed the Cure pipeline to automatically find and calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources. Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides HETDEX with a database backend offering complete dependency tracking of the various reduction steps, automated checks, a searchable interface to the detected sources, and user management. It can be used to create various web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in parallel on a compute cluster, which lets us reduce the observed data in quasi real time while retaining excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide access to reduced data products to the general community.
Rahaman, Obaidur; Estrada, Trilce P.; Doren, Douglas J.; Taufer, Michela; Brooks, Charles L.; Armen, Roger S.
2011-01-01
The performance of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for “step 2 discrimination” were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only “interacting” ligand atoms as the “effective size” of the ligand, and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and five-fold cross validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new dataset (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ dataset where the number of ligand heavy atoms ranged from 17 to 35. 
This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts. PMID:21644546
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still achieve state-of-the-art predictive performance, but a theoretical analysis of their behavior remains underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent-task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
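The two-step construction discussed above can be sketched compactly with hypothetical kernels: step 1 performs kernel ridge regression over the row objects of the label matrix, and step 2 over its column objects. With near-zero regularization the in-sample fit essentially reproduces the label matrix.

```python
import numpy as np

def krr_coeffs(K, Y, lam):
    """Kernel ridge regression dual coefficients: (K + lam*I)^-1 Y."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), Y)

def two_step_krr(Kg, Kd, Y, lam_g, lam_d):
    """Two-step kernel ridge regression on a label matrix Y whose rows
    index one object set (kernel Kg) and whose columns index the other
    (kernel Kd): smooth over rows first, then over columns."""
    step1 = Kg @ krr_coeffs(Kg, Y, lam_g)        # row-wise KRR fit
    step2 = Kd @ krr_coeffs(Kd, step1.T, lam_d)  # column-wise KRR fit
    return step2.T

rng = np.random.default_rng(2)
G, D = rng.normal(size=(5, 4)), rng.normal(size=(6, 4))
Kg = G @ G.T + np.eye(5)     # toy kernels, kept strictly positive definite
Kd = D @ D.T + np.eye(6)
Y = rng.normal(size=(5, 6))
F = two_step_krr(Kg, Kd, Y, 1e-8, 1e-8)
```

Because the two smoothing steps commute, the same fit can be written as Kg (Kg + λI)⁻¹ Y (Kd + λI)⁻¹ Kd, which is how the equivalence to a special case of Kronecker kernel ridge regression is usually shown.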
Gap-filling meteorological variables with Empirical Orthogonal Functions
NASA Astrophysics Data System (ADS)
Graf, Alexander
2017-04-01
Gap-filling or modelling surface-atmosphere fluxes critically depends on the, ideally continuous, availability of their meteorological driver variables, such as air temperature, humidity, radiation, wind speed and precipitation. Unlike for eddy-covariance-based fluxes, data gaps are not unavoidable for these measurements. Nevertheless, missing or erroneous data can occur in practice due to instrument or power failures, disturbance, and temporary sensor or station dismounting for e.g. agricultural management or maintenance. If stations with similar measurements are available nearby, using their data for imputation (i.e. estimating missing data) either directly, after an elevation correction, or via linear regression is usually preferred over linear interpolation or monthly mean diurnal cycles. The popular deployment of regional networks of (partly low-cost) stations increases both the need and the potential for such neighbour-based imputation methods. For repeated satellite imagery, Beckers and Rixen (2003) suggested an imputation method based on empirical orthogonal functions (EOFs). While exploiting the same linear relations between time series at different observation points as regression, it is able to use information from all observation points simultaneously to estimate missing data at all observation points, provided that the observations are never all missing at the same time. Briefly, the method uses the ability of the first few EOFs of a data matrix to reconstruct a noise-reduced version of this matrix, iterating missing data points from an initial guess (the column-wise averages) to an optimal version determined by cross-validation. The poster presents and discusses lessons learned from adapting and applying this methodology to station data. 
Several years of 10-minute averages of air temperature, pressure and humidity, incoming shortwave, longwave and photosynthetically active radiation, wind speed and precipitation, measured by a regional (70 km by 20 km by 650 m elevation difference) network of 18 sites, were treated by various modifications of the method. The performance per variable and as a function of methodology, such as the number of EOFs used and the method to determine its optimum, the period length, and the data transformation, is assessed by cross-validation. Beckers, J.-M., Rixen, M. (2003): EOF calculations and data filling from incomplete oceanographic datasets. J. Atmos. Ocean. Tech. 20, 1839-1856.
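The iterative EOF reconstruction described above can be sketched as follows. This is a simplified illustration of the Beckers and Rixen (2003) idea, not the poster's implementation: it uses a fixed number of modes and a fixed iteration count, whereas the actual method selects both by cross-validation.

```python
import numpy as np

def eof_fill(X, n_modes=2, n_iter=200):
    """EOF-based gap filling (sketch after Beckers & Rixen, 2003).

    X: 2-D array (time x station) with np.nan marking gaps.
    Gaps are initialised with column-wise means, then repeatedly
    replaced by a truncated-SVD (leading EOFs) reconstruction.
    """
    X = np.array(X, dtype=float)
    gaps = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[gaps] = col_mean[np.nonzero(gaps)[1]]  # initial guess
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[gaps] = recon[gaps]  # update only the missing entries
    return X
```

Observed values are never overwritten; only the gap entries are iterated toward consistency with the leading EOFs.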
NASA Astrophysics Data System (ADS)
Uniyal, S.; Singh, S.; Rao, S. S.
2017-12-01
Trees Outside Forest (TOF) grow on a variety of landscapes, e.g. linear, scattered and block, and include a unique range of species specific to local environmental and socio-cultural conditions. The usefulness of TOF gained attention as ongoing anthropogenic activities increased the CO2 concentration in the atmosphere and it became understood that CO2 can also be sequestered by increasing the rate of afforestation. This study illustrates a methodology to estimate individual tree phytomass, its contribution to the environment and microclimate, and the spatial distribution of TOF phytomass and carbon storage in the Gwalior and Sheopur districts of Madhya Pradesh using very high resolution satellite data. An attempt has also been made to estimate phytomass at the pixel level using various regression models. Phytomass is an important parameter for assessing the atmospheric carbon harvested by trees: the greater the phytomass, the greater the carbon content of the trees and, in turn, their contribution to the regulation of CO2, and vice versa. Tree canopies were extracted from very high resolution satellite data within 5 × 5 km grids using various remote sensing techniques. Field data were collected from different types of TOF (linear, scattered, block) with varying plot shapes and sizes, and stratum-wise phytomass was estimated. The findings show that varying phytomass ranges were observed for road, agriculture and settlement strata with varying numbers of individual trees. The phytomass in scattered TOF varied from 0.22 to 15.68 t/ha with carbon content from 0.104 to 7.4 tC, whereas in linear TOF it varied from 5.26 to 156.71 t/ha with carbon content from 2.49 to 74.43 tC. Phytomass along the roadside varied from 20.75 to 879.8 t/ha with carbon content from 9.85 to 417.90 tC. Stratum-wise total phytomass and carbon content in areas having TOF were estimated.
Of the 10 grids considered, the maximum phytomass of 1363 tonnes with a carbon content of 647.425 tC was recorded around Gwalior airport, and the minimum of 367.55 tonnes with a carbon content of 174.325 tC in the surroundings of Mohana town. The effect of climatic variables on the growth and structure of these trees, and vice versa, was also analysed.
Li, Shumei; Tian, Junzhang; Bauer, Andreas; Huang, Ruiwang; Wen, Hua; Li, Meng; Wang, Tianyue; Xia, Likun; Jiang, Guihua
2016-08-01
Purpose To analyze the integrity of white matter (WM) tracts in primary insomnia patients and provide better characterization of abnormal WM integrity and its relationship with disease duration and clinical features of primary insomnia. Materials and Methods This prospective study was approved by the ethics committee of the Guangdong No. 2 Provincial People's Hospital. Tract-based spatial statistics were used to compare changes in diffusion parameters of WM tracts from 23 primary insomnia patients and 30 healthy control (HC) participants, and the accuracy of these changes in distinguishing insomnia patients from HC participants was evaluated. Voxel-wise statistics across subjects were performed by using a 5000-permutation set with family-wise error correction (P < .05). Multiple regressions were used to analyze the associations of the abnormal fractional anisotropy (FA) in WM with disease duration, Pittsburgh Sleep Quality Index, insomnia severity index, self-rating anxiety scale, and self-rating depression scale in primary insomnia. Characteristics of abnormal WM were also investigated in tract-level analyses. Results Primary insomnia patients had lower FA values mainly in the right anterior limb of the internal capsule, right posterior limb of the internal capsule, right anterior corona radiata, right superior corona radiata, right superior longitudinal fasciculus, body of the corpus callosum, and right thalamus (P < .05, family-wise error correction). The receiver operating characteristic areas for the seven regions were acceptable (range, 0.60-0.74; 60%-74%). Multiple regression models showed that abnormal FA values in the thalamus and body of the corpus callosum were associated with disease duration, self-rating depression scale, and Pittsburgh Sleep Quality Index scores. Tract-level analysis suggested that the reduced FA values might be related to greater radial diffusivity.
Conclusion This study showed that WM tracts related to the regulation of sleep and wakefulness, and to limbic cognitive and sensorimotor regions, are disrupted in the right brain in patients with primary insomnia. The reduced integrity of these WM tracts may be due to loss of myelination. (©) RSNA, 2016.
Automatic stage identification of Drosophila egg chamber based on DAPI images
Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min
2016-01-01
The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and the distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J
Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high-atomic-number objects. Most of the techniques devised to reduce this artifact utilize a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, conducting an additional reconstruction with the modified projection data from the initial reconstruction. This procedure does not consistently perform well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, while not manipulating the projection. PWLS for CT reconstruction has been investigated using a pre-defined weight based on the variance of the projection datum at each detector bin. It works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact in reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: The proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques, as judged by visual inspection.
Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: The new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
Freesmeyer, Martin; Kühnel, Christian; Westphal, Julian G
2015-01-01
Benign thyroid diseases are widespread in western societies. However, volumetry of the thyroid gland, especially when enlarged or abnormally formed, proves to be a challenge in clinical routine. The aim of this study was to develop a simple and rapid threshold-based isocontour extraction method for thyroid volumetry from (124)I-PET/CT data in patients scheduled for radioactive iodine therapy. PET/CT data from 45 patients were analysed 30 h after administration of 1 MBq (124)I. The anatomical reference volume was calculated using manually contoured data from low-dose CT images of the neck (MC). In addition, we applied an automatic isocontour extraction method (IC0.2/1.0), with two different threshold values (0.2 and 1.0 kBq/ml), for volumetry of the PET data set. IC0.2/1.0 shape data that showed significant variation from the MC data were excluded. Subsequently, a mathematical correlation between IC0.2/1.0 and MC was established using a model of linear regression with multiple variables and step-wise elimination (mIC0.2/1.0). Data from 41 patients (IC0.2) and 32 patients (IC1.0) were analysed. The mathematically calculated volume, mIC, showed a median deviation from the reference (MC) of ±9 % (1-54 %) for mIC0.2 and of ±8.2 % (1-50 %) for mIC1.0. CONCLUSION: Contour extraction with both mIC1.0 and mIC0.2 gave rapid and reliable results. However, mIC0.2 can be applied to significantly more patients (>90 %) and is, therefore, deemed more suitable for clinical routine, keeping in mind the potential advantages of using (124)I-PET/CT for the preparation of patients scheduled for radioactive iodine therapy.
Time-elapsed screw insertion with microCT imaging.
Ryan, M K; Mohtar, A A; Cleek, T M; Reynolds, K J
2016-01-25
Time-elapsed analysis of bone is an innovative technique that uses sequential image data to analyze bone mechanics under a given loading regime. This paper presents the development of a novel device capable of performing step-wise screw insertion into excised bone specimens, within the microCT environment, whilst simultaneously recording insertion torque, compression under the screw head and rotation angle. The system is computer controlled and screw insertion is performed in incremental steps of insertion torque. A series of screw insertion tests to failure were performed (n = 21) to establish a relationship between the torque at head contact and the stripping torque (R² = 0.89). The test device was then used to perform step-wise screw insertion, stopping at intervals of 20%, 40%, 60% and 80% between screw head contact and screw stripping. Image data sets were acquired at each of these time points as well as at head contact and post-failure. Examination of the image data revealed that the trabecular deformation resulting from increased insertion torque was restricted to within 1 mm of the outer diameter of the screw thread. Minimal deformation occurred prior to the step between the 80% time point and post-failure. The device presented has allowed, for the first time, visualization of the micro-mechanical response in the peri-implant bone with increased tightening torque. Further testing on more samples is expected to increase our understanding of the effects of increased tightening torque at the micro-structural level, and of the failure mechanisms of trabeculae. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Walawender, Jakub; Kothe, Steffen; Trentmann, Jörg; Pfeifroth, Uwe; Cremer, Roswitha
2017-04-01
The purpose of this study is to create a 1 km2 gridded daily sunshine duration data record for Germany covering the period from 1983 to 2015 (33 years), based on satellite estimates of direct normalised surface solar radiation and in situ sunshine duration observations, using a geostatistical approach. The CM SAF SARAH direct normalized irradiance (DNI) satellite climate data record and in situ observations of sunshine duration from 121 weather stations operated by DWD are used as input datasets. The selected period of 33 years is determined by the availability of satellite data. The number of ground stations is limited to 121 because only time series with less than 10% missing observations over the selected period are included, to keep the long-term consistency of the output sunshine duration data record. In the first step, the DNI data record is used to derive sunshine hours by applying the WMO threshold of 120 W/m2 (SDU = DNI ≥ 120 W/m2), with a weighting of sunny slots to correct the sunshine duration between two instantaneous images for cloud movement. In the second step, a linear regression between SDU and in situ sunshine duration is calculated to adjust the satellite product to the ground observations, and the resulting regression coefficients are applied to create a regression grid. In the last step, the regression residuals are interpolated with ordinary kriging and added to the regression grid. A comprehensive accuracy assessment of the gridded sunshine duration data record is performed by calculating prediction errors (cross-validation routine). "R" is used for data processing. A short analysis of the spatial distribution and temporal variability of sunshine duration over Germany based on the created dataset will be presented. The gridded sunshine duration data are useful for applications in various climate-related studies, agriculture and solar energy potential calculations.
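The first two processing steps (WMO thresholding of DNI slots and the linear regression adjustment to station data) can be sketched as below. This is a simplified illustration, not the study's R code: the weighting of sunny slots for cloud movement and the kriging of residuals are omitted, and the half-hour slot length is an assumption for the example.

```python
import numpy as np

WMO_THRESHOLD = 120.0  # W/m^2, WMO sunshine criterion

def sdu_from_dni(dni_slots, slot_hours=0.5):
    """Daily sunshine duration (hours) from instantaneous DNI slots:
    a slot counts as sunny when DNI >= 120 W/m^2."""
    dni_slots = np.asarray(dni_slots, dtype=float)
    return (dni_slots >= WMO_THRESHOLD).sum(axis=-1) * slot_hours

def adjust_to_stations(sdu_sat, sdu_obs):
    """Least-squares fit of sdu_obs ~ a + b * sdu_sat.
    Returns (intercept, slope) to be applied on the satellite grid."""
    A = np.column_stack([np.ones_like(sdu_sat), sdu_sat])
    coef, *_ = np.linalg.lstsq(A, sdu_obs, rcond=None)
    return coef
```

In the full method the residuals `sdu_obs - (a + b * sdu_sat)` at the 121 stations would then be interpolated by ordinary kriging and added back to the regression grid.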
Malliou, P; Rokka, S; Beneka, A; Gioftsidou, A; Mavromoustakos, S; Godolias, G
2014-01-01
There is limited information on injury patterns in Step Aerobic Instructors (SAI) who exclusively teach "step" aerobic classes. The aims were to record the type and the anatomical position, in relation to diagnosis, of musculoskeletal injuries in step aerobic instructors, and to analyse the days of absence due to chronic injury in relation to weekly working hours, height of the step platform, working experience, and working surface and footwear during the step class. The Step Aerobic Instructors Injuries Questionnaire was developed, and validity and reliability indices were then calculated. 63 SAI completed the questionnaire. For the statistical analysis of the data, the method used was the analysis of frequencies, the non-parametric test χ
Inferring gene regression networks with model trees
2010-01-01
Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful for determining whether two genes have a strong global similarity, but they do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether each pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: a Saccharomyces cerevisiae and an E. coli data set. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the RegulonDB database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth- and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values of the target genes by linear regression functions.
They are often more precise than linear regression models because they can fit different linear regressions to separate areas of the search space, favoring the inference of localized similarities over a more global similarity. Furthermore, the experimental results show the good performance of REGNET. PMID:20950452
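The per-gene regression idea can be sketched as below. Note this is a deliberate simplification for illustration, not REGNET itself: REGNET fits model trees and controls the false discovery rate, whereas this sketch uses plain least squares and an arbitrary coefficient threshold (`coef_thresh`, a hypothetical parameter) to decide which links to keep.

```python
import numpy as np

def regression_network(X, coef_thresh=0.4):
    """Build a directed gene association network (simplified sketch).

    X: samples x genes expression matrix. For each gene j, an ordinary
    least-squares regression is fit from all remaining genes; predictors
    with |coefficient| above coef_thresh contribute an edge (g -> j).
    """
    n_samples, n_genes = X.shape
    edges = set()
    for j in range(n_genes):
        others = [g for g in range(n_genes) if g != j]
        A = np.column_stack([np.ones(n_samples), X[:, others]])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        for k, g in enumerate(others):
            if abs(coef[k + 1]) > coef_thresh:
                edges.add((g, j))
    return edges
```

Contrast with a correlation network: here each gene's relation to all others is estimated jointly in one regression, rather than pair by pair.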
Liu, Feng; Tian, Hongjun; Li, Jie; Li, Shen; Zhuo, Chuanjun
2018-05-04
Previous seed- and atlas-based structural covariance/connectivity analyses have demonstrated that schizophrenia is accompanied by aberrant structural connections and abnormal topological organization. However, it remains unclear whether this disruption is present in unbiased whole-brain voxel-wise structural covariance networks (SCNs) and whether brain genetic expression variations are linked with network alterations. In this study, ninety-five patients with schizophrenia and 95 matched healthy controls were recruited, and gray matter volumes were extracted from high-resolution structural magnetic resonance imaging scans. Whole-brain voxel-wise gray matter SCNs were constructed at the group level and were further analyzed by using graph theory methods. Nonparametric permutation tests were employed for group comparisons. In addition, regression models along with random effect analysis were utilized to explore the associations between structural network changes and gene expression from the Allen Human Brain Atlas. Compared with healthy controls, the patients with schizophrenia showed significantly increased structural covariance strength (SCS) in the right orbital part of the superior frontal gyrus and bilateral middle frontal gyrus, and decreased SCS in the bilateral superior temporal gyrus and precuneus. The altered SCS showed reproducible correlations with the expression profiles of gene classes involved in therapeutic targets and neurodevelopment. Overall, our findings not only demonstrate that the topological architecture of whole-brain voxel-wise SCNs is impaired in schizophrenia, but also provide evidence for the possible role of therapeutic-target- and neurodevelopment-related genes in gray matter structural brain networks in schizophrenia.
Gierlinger, Notburga; Luss, Saskia; König, Christian; Konnerth, Johannes; Eder, Michaela; Fratzl, Peter
2010-01-01
The functional characteristics of plant cell walls depend on the composition of the cell wall polymers, as well as on their highly ordered architecture at scales from a few nanometres to several microns. Raman spectra of wood acquired with linearly polarized laser light include information about polymer composition as well as the alignment of cellulose microfibrils with respect to the fibre axis (microfibril angle). By changing the laser polarization direction in 3-degree steps, the dependency between cellulose orientation and laser polarization direction was investigated. Orientation-dependent changes of band height ratios and of the spectra were described by quadratic regression and partial least squares regression, respectively. Using the models and regressions with high coefficients of determination (R² > 0.99), microfibril orientation was predicted in the S1 and S2 layers distinguished by the Raman imaging approach in cross-sections of spruce normal, opposite, and compression wood. The determined microfibril angle (MFA) in the different S2 layers ranged from 0 degrees to 49.9 degrees and was in agreement with X-ray diffraction determination. With the prerequisite of geometric sample and laser alignment, exact MFA prediction can complete the picture of the chemical cell wall design gained by the Raman imaging approach at the micron level in all plant tissues.
Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting
NASA Astrophysics Data System (ADS)
Palenichka, Roman M.; Zaremba, Marek B.
2003-03-01
Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a rather rough piecewise-linear representation of the object skeletons. The positions of the skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. This consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, if necessary, adding new vertices in between when a given accuracy is not yet satisfied. The vertices of the initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maxima at the centers of the objects of interest.
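Orthogonal regression minimizes perpendicular (not vertical) distances to the fitted line. For a single 2-D segment it reduces to a total least squares fit, which can be sketched via the SVD of the centered point cloud; the skeleton-refinement step would apply this fit segment by segment.

```python
import numpy as np

def orthogonal_fit(points):
    """Orthogonal (total least squares) line fit to 2-D points.

    Returns (centroid, unit direction vector, mean orthogonal distance).
    The direction is the first right singular vector of the centered
    data, i.e. the axis minimizing the sum of squared orthogonal
    distances.
    """
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    Q = P - c
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    d = Vt[0]
    # Orthogonal residuals: remove the component along the line.
    resid = Q - np.outer(Q @ d, d)
    return c, d, np.linalg.norm(resid, axis=1).mean()
```

The returned mean orthogonal distance is exactly the quantity the vertex-adjustment step drives toward its minimum.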
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2℃. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced the fitting errors. The best candidate model according to the Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5-log10 (5D) reduction; the desired reductions in both microorganisms are attainable at 400 MPa in 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
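As a rough illustration of the Weibull primary model log10(N/N0) = -b·t^n and of how a t_d (time to a d-log10 reduction) follows from it, the sketch below fits the model by log-log linearization. The study itself used nonlinear regression (one- and two-step) with pressure-dependent secondary models; the linearized fit here is a simplification for illustration only.

```python
import numpy as np

def fit_weibull(t, log10_survival):
    """Fit log10(N/N0) = -b * t**n by linearisation:
    log(-log10 S) = log(b) + n*log(t), solved by ordinary least squares.
    Requires t > 0 and log10_survival < 0."""
    t = np.asarray(t, dtype=float)
    y = np.log(-np.asarray(log10_survival, dtype=float))
    A = np.column_stack([np.ones_like(t), np.log(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(coef[0]), coef[1]  # b, n

def time_for_reduction(b, n, d=5.0):
    """Treatment time t_d achieving a d-log10 reduction: t = (d/b)**(1/n)."""
    return (d / b) ** (1.0 / n)
```

With fitted b and n at a given pressure, `time_for_reduction(b, n, d=5.0)` gives the 5D time analogous to the t_5 values reported above.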
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. Besides, the proposed method can include two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against an auto-regressive moving average (ARMA) model on the predicted values and errors. The validity of the proposed hybrid method is confirmed in terms of error analysis by using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual values and the predicted values for different prediction times and windows has confirmed that the MSP approach using the hybrid model is the most accurate, compared to the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
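A least-squares multiple linear regression on lagged power ratios, with multi-step prediction obtained by feeding each forecast back as an input, can be sketched as follows. This is a stand-in for the paper's MLR&LS hybrid, whose exact formulation is not given in the abstract; the lag count and the lag-only feature set are assumptions for illustration.

```python
import numpy as np

def fit_lag_regression(series, n_lags=3):
    """Least-squares linear regression of a power-ratio series on its
    own lags (intercept first in the returned coefficient vector)."""
    series = np.asarray(series, dtype=float)
    y = series[n_lags:]
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_multi_step(series, coef, steps=3):
    """Multi-step prediction (MSP): each forecast is appended to the
    history and reused as an input for the next step."""
    n_lags = len(coef) - 1
    hist = list(series[-n_lags:])
    out = []
    for _ in range(steps):
        x = np.concatenate([[1.0], hist[-n_lags:]])
        nxt = float(x @ coef)
        out.append(nxt)
        hist.append(nxt)
    return out
```

Single-step prediction (SSP) is the `steps=1` special case; multiplying predicted ratios by installed capacity would recover the wind power itself.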
Wu, F; Callisaya, M; Laslett, L L; Wills, K; Zhou, Y; Jones, G; Winzenberg, T
2016-07-01
This was the first study investigating both linear associations between lower limb muscle strength and balance in middle-aged women and the potential for thresholds in those associations. There was strong evidence that, even in middle-aged women, poorer LMS was associated with reduced balance. However, no evidence was found for thresholds. Decline in balance begins in middle age, yet the role of muscle strength in balance is rarely examined in this age group. We aimed to determine the association between lower limb muscle strength (LMS) and balance in middle-aged women and to investigate whether cut-points of LMS exist that might identify women at risk of poorer balance. A cross-sectional analysis of 345 women aged 36-57 years was performed. Associations between LMS and balance tests (timed up and go (TUG), step test (ST), functional reach test (FRT), and lateral reach test (LRT)) were assessed using linear regression. Nonlinear associations were explored using locally weighted regression smoothing (LOWESS) and potential cut-points identified using nonlinear least-squares estimation. Segmented regression was used to estimate associations above and below the identified cut-points. Weaker LMS was associated with poorer performance on the TUG (β -0.008 (95 % CI: -0.010, -0.005) second/kg), ST (β 0.031 (0.011, 0.051) step/kg), FRT (β 0.071 (0.047, 0.096) cm/kg), and LRT (β 0.028 (0.011, 0.044) cm/kg), independent of confounders. Potential nonlinear associations were evident from the LOWESS results; significant cut-points of LMS were identified for all balance tests (29-50 kg). However, except for the ST, cut-points did not persist after excluding potentially influential data points. In middle-aged women, poorer LMS is associated with reduced balance. Therefore, improving muscle strength in middle age may be a useful strategy to improve balance and reduce falls risk in later life.
Middle-aged women with low muscle strength may be an effective target group for future randomized controlled trials. Australian New Zealand Clinical Trials Registry (ANZCTR) NCT00273260.
Discrete post-processing of total cloud cover ensemble forecasts
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian
2017-04-01
This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review 144, 2565-2577.
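For a bounded ordinal variable such as total cloud cover (e.g. in oktas), the proportional odds model maps one shared linear predictor x·β and ordered cut-points θ_k to category probabilities via P(Y ≤ k | x) = σ(θ_k − x·β). The sketch below only evaluates these probabilities for given (made-up) coefficients; it is not the fitted ECMWF post-processing model, whose predictors and estimates are in the cited paper.

```python
import numpy as np

def category_probs(x_beta, thresholds):
    """Proportional odds (cumulative logit) category probabilities.

    x_beta: scalar linear predictor x.beta shared across categories.
    thresholds: increasing cut-points theta_1 < ... < theta_{K-1}.
    Returns an array of K category probabilities that sum to 1.
    """
    # Cumulative probabilities P(Y <= k) = sigmoid(theta_k - x.beta).
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds, float) - x_beta)))
    cum = np.concatenate([cum, [1.0]])            # P(Y <= K) = 1
    return np.diff(np.concatenate([[0.0], cum]))  # successive differences
```

The "proportional odds" parsimony mentioned above is visible here: all categories share the single slope in `x_beta`, whereas a multinomial logistic model would carry a separate coefficient vector per category.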
Subtasks affecting step-length asymmetry in post-stroke hemiparetic walking.
Kim, Woo-Sub
2016-10-01
This study was performed to investigate whether components of trunk progression (TP) and step length were related to step length asymmetry in walking in patients with hemiparesis. Gait analysis was performed for participants with hemiparesis and healthy controls. The distance between the pelvis and foot along the anterior-posterior axis was calculated at initial contact. Step length was partitioned into anterior foot placement (AFP) and posterior foot placement (PFP). TP was partitioned into anterior trunk progression (ATP) and posterior trunk progression (PTP). The TP pattern and step length pattern were defined to represent intra-TP and intra-step spatial balance, respectively. Of 29 participants with hemiparesis, nine showed longer paretic step length, eight showed symmetric step length, and 12 showed shorter paretic step length. For the hemiparesis group, linear regression analysis showed that ATP asymmetry, AFP asymmetry, and TP patterns had significant predictability regarding step length asymmetry. Prolonged paretic ATP and shortened paretic AFP was the predominant pattern in the hemiparesis group, even in participants with symmetric step length. However, some participants showed the same direction of ATP and AFP asymmetry. These findings indicate the following: (1) ATP asymmetries should be observed to determine individual characteristics of step length asymmetry, and (2) TP patterns can provide complementary information on non-paretic limb compensation. Copyright © 2016 Elsevier B.V. All rights reserved.
How Much Support Is Enough? 3 Tools Help Us Know When to Step In and When to Back Off
ERIC Educational Resources Information Center
Patterson, Leslie; Wickstrom, Carol
2017-01-01
Responsive professional development is about watching learners closely, interpreting observations to make nuanced decisions, and taking action to support learners at particular moments. What might they be ready to do next? What instructional moves will best provide "just enough" support? In other words, what is the next wise action?…
Using steady-state equations for transient flow calculation in natural gas pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddox, R.N.; Zhou, P.
1984-04-02
Maddox and Zhou have extended their technique for calculating the unsteady-state behavior of straight gas pipelines to complex pipeline systems and networks. After developing the steady-state flow rate and pressure profile for each pipe in the network, analysts can perform the transient-state analysis in the real-time step-wise manner described for this technique.
Evaluation of Indoor Air Quality Screening Strategies: A Step-Wise Approach for IAQ Screening.
Wong, Ling-Tim; Mui, Kwok-Wai; Tsang, Tsz-Wun
2016-12-14
Conducting a full indoor air quality (IAQ) assessment in air-conditioned offices requires large-scale material and manpower resources. However, an IAQ index can be adopted as a handy screening tool to identify premises (with poor IAQ) that need more comprehensive IAQ assessments, in order to prioritize IAQ improvements. This study proposes a step-wise IAQ screening protocol to facilitate cost-effective management by building owners and managers. The effectiveness of three IAQ indices, namely θ₁ (with one parameter: CO₂), θ₂ (with two parameters: CO₂ and respirable suspended particulates, RSP) and θ₃ (with three parameters: CO₂, RSP, and total volatile organic compounds, TVOC), is evaluated. Compared in a pairwise manner with respect to the minimum satisfaction levels stated in the IAQ Certification Scheme of the Hong Kong Environmental Protection Department, the results show that a screening test with more surrogate IAQ parameters is better at identifying both lower and higher risk groups for unsatisfactory IAQ, and thus offers higher resolution. The effectiveness of alternative IAQ screening methods with different monitoring parameters is also reported in terms of their sensitivity and specificity for identifying IAQ problems.
Wang, Yaqiong; Ma, Hong
2015-09-01
Proteins often function as complexes, yet little is known about the evolution of dissimilar subunits of complexes. DNA-directed RNA polymerases (RNAPs) are multisubunit complexes, with distinct eukaryotic types for different classes of transcripts. In addition to Pol I-III, common in eukaryotes, plants have Pol IV and V for epigenetic regulation. Some RNAP subunits are specific to one type, whereas other subunits are shared by multiple types. We have conducted extensive phylogenetic and sequence analyses, and have placed RNAP gene duplication events in land plant history, thereby reconstructing the subunit compositions of the novel RNAPs during land plant evolution. We found that Pol IV/V have experienced step-wise duplication and diversification of various subunits, with increasingly distinctive subunit compositions. Also, lineage-specific duplications have further increased RNAP complexity with distinct copies in different plant families and varying divergence for subunits of different RNAPs. Further, the largest subunits of Pol IV/V probably originated from a gene fusion in the ancestral land plants. We propose a framework of plant RNAP evolution, providing an excellent model for protein complex evolution. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
van Anken, Eelco; Pincus, David; Coyle, Scott; Aragón, Tomás; Osman, Christof; Lari, Federica; Gómez Puerta, Silvia; Korennykh, Alexei V; Walter, Peter
2014-12-30
Insufficient protein-folding capacity in the endoplasmic reticulum (ER) induces the unfolded protein response (UPR). In the ER lumen, accumulation of unfolded proteins activates the transmembrane ER-stress sensor Ire1 and drives its oligomerization. In the cytosol, Ire1 recruits HAC1 mRNA, mediating its non-conventional splicing. The spliced mRNA is translated into Hac1, the key transcription activator of UPR target genes that mitigate ER-stress. In this study, we report that oligomeric assembly of the ER-lumenal domain is sufficient to drive Ire1 clustering. Clustering facilitates Ire1's cytosolic oligomeric assembly and HAC1 mRNA docking onto a positively charged motif in Ire1's cytosolic linker domain that tethers the kinase/RNase to the transmembrane domain. By the use of a synthetic bypass, we demonstrate that mRNA docking per se is a pre-requisite for initiating Ire1's RNase activity and, hence, splicing. We posit that such step-wise engagement between Ire1 and its mRNA substrate contributes to selectivity and efficiency in UPR signaling.
Precommitting to choose wisely about low-value services: a stepped wedge cluster randomised trial.
Kullgren, Jeffrey Todd; Krupka, Erin; Schachter, Abigail; Linden, Ariel; Miller, Jacquelyn; Acharya, Yubraj; Alford, James; Duffy, Richard; Adler-Milstein, Julia
2018-05-01
Little is known about how to discourage clinicians from ordering low-value services. Our objective was to test whether clinicians committing their future selves (ie, precommitting) to follow Choosing Wisely recommendations with decision supports could decrease potentially low-value orders. We conducted a 12-month stepped wedge cluster randomised trial among 45 primary care physicians and advanced practice providers in six adult primary care clinics of a US community group practice. Clinicians were invited to precommit to Choosing Wisely recommendations against imaging for uncomplicated low back pain, imaging for uncomplicated headaches and unnecessary antibiotics for acute sinusitis. Clinicians who precommitted received 1-6 months of point-of-care precommitment reminders as well as patient education handouts and weekly emails with resources to support communication about low-value services. The primary outcome was the difference between control and intervention period percentages of visits with potentially low-value orders. Secondary outcomes were differences between control and intervention period percentages of visits with possible alternate orders, and differences between control and 3-month postintervention follow-up period percentages of visits with potentially low-value orders. The intervention was not associated with a change in the percentage of visits with potentially low-value orders overall, for headaches or for acute sinusitis, but was associated with a 1.7% overall increase in alternate orders (p=0.01). For low back pain, the intervention was associated with a 1.2% decrease in the percentage of visits with potentially low-value orders (p=0.001) and a 1.9% increase in the percentage of visits with alternate orders (p=0.007). No changes were sustained in follow-up.
Clinician precommitment to follow Choosing Wisely recommendations was associated with a small, unsustained decrease in potentially low-value orders for only one of three targeted conditions and may have increased alternate orders. NCT02247050; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Wit, Jan M.; Himes, John H.; van Buuren, Stef; Denno, Donna M.; Suchdev, Parminder S.
2017-01-01
Background/Aims: Childhood stunting is a prevalent problem in low- and middle-income countries and is associated with long-term adverse neurodevelopment and health outcomes. In this review, we define indicators of growth, discuss key challenges in their analysis and application, and offer suggestions for indicator selection in clinical research contexts. Methods: Critical review of the literature. Results: Linear growth is commonly expressed as length-for-age or height-for-age z-score (HAZ) in comparison to normative growth standards. Conditional HAZ corrects for regression to the mean where growth changes relate to previous status. In longitudinal studies, growth can be expressed as ΔHAZ at 2 time points. Multilevel modeling is preferable when more measurements per individual child are available over time. Height velocity z-score reference standards are available for children under the age of 2 years. Adjusting for covariates or confounders (e.g., birth weight, gestational age, sex, parental height, maternal education, socioeconomic status) is recommended in growth analyses. Conclusion: The most suitable indicator(s) for linear growth can be selected based on the number of available measurements per child and the child's age. By following a step-by-step algorithm, growth analyses can be precisely and accurately performed to allow for improved comparability within and between studies. PMID:28196362
Kumar, K Vasanth
2007-04-02
Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second order models of Ho, Sobkowsk and Czerwinski, Blanchard et al. and Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo-second order models were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo-second order model but with different assumptions. The best fit of experimental data in Ho's pseudo-second order expression by linear and non-linear regression methods showed that Ho's pseudo-second order model was a better kinetic expression when compared to the other pseudo-second order kinetic expressions.
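Ho's linearisation can be illustrated with synthetic data; the parameter values and time grid below are invented for the sketch, not Kumar's measurements.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def pseudo_second_order(t, qe, k):
    """Ho's pseudo-second order uptake: q(t) = qe^2 k t / (1 + qe k t)."""
    return qe * qe * k * t / (1.0 + qe * k * t)

# Synthetic uptake curve generated from known parameters (illustrative only).
qe_true, k_true = 25.0, 0.01
times = [5.0, 10.0, 20.0, 40.0, 80.0, 160.0]
uptake = [pseudo_second_order(t, qe_true, k_true) for t in times]

# Linear form: t/q = 1/(k qe^2) + t/qe, so the slope is 1/qe.
slope, intercept = fit_line(times, [t / q for t, q in zip(times, uptake)])
qe_est = 1.0 / slope
k_est = 1.0 / (intercept * qe_est ** 2)
```

With noise-free data the linear fit recovers the parameters exactly; with real noisy data, the linearisation distorts the error weighting, which is the paper's argument for fitting the non-linear form directly.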
Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette
2018-04-26
There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved a RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
Wakshlag, Joseph J; Struble, Angela M; Warren, Barbour S; Maley, Mary; Panasevich, Matthew R; Cummings, Kevin J; Long, Grace M; Laflamme, Dorothy E
2012-02-15
To quantify physical activity and dietary energy intake in dogs enrolled in a controlled weight-loss program and assess relationships between energy intake and physical activity, sex, age, body weight, and body condition score (BCS). Prospective clinical study. 35 client-owned obese dogs (BCS > 7/9). Dogs were fed a therapeutic diet with energy intake restrictions to maintain weight loss of approximately 2%/wk. Collar-mounted pedometers were used to record the number of steps taken daily as a measure of activity. Body weight and BCS were assessed at the beginning of the weight-loss program and every 2 weeks thereafter throughout the study. Relationships between energy intake and sex, age, activity, BCS, and body weight at the end of the study were assessed via multivariable linear regression. Variables were compared among dogs stratified post hoc into inactive and active groups on the basis of mean number of steps taken (< or > 7,250 steps/d, respectively). Mean ± SD daily energy intake per unit of metabolic body weight (kg^0.75) of active dogs was significantly greater than that of inactive dogs (53.6 ± 15.2 kcal/kg^0.75 vs 42.2 ± 9.7 kcal/kg^0.75, respectively) while maintaining weight-loss goals. In regression analysis, only the number of steps per day was significantly associated with energy intake. Increased physical activity was associated with higher energy intake while maintaining weight-loss goals. Each 1,000-step interval was associated with a 1 kcal/kg^0.75 increase in energy intake.
Validation of the FAST skating protocol to predict aerobic power in ice hockey players.
Petrella, Nicholas J; Montelpare, William J; Nystrom, Murray; Plyley, Michael; Faught, Brent E
2007-08-01
Few studies have reported a sport-specific protocol to measure the aerobic power of ice hockey players using a predictive process. The purpose of our study was to validate an ice hockey aerobic field test on players of varying ages, abilities, and levels. The Faught Aerobic Skating Test (FAST) uses an on-ice continuous skating protocol on a course measuring 160 feet (48.8 m) using a CD to pace the skater with a beep signal to cross the starting line at each end of the course. The FAST incorporates the principle of increasing workload at measured time intervals during a continuous skating exercise. Step-wise multiple regression modelling was used to determine the estimate of aerobic power. Participants completed a maximal aerobic power test using a modified Bruce incremental treadmill protocol, as well as the on-ice FAST. Normative data were collected on 406 ice hockey players (291 males, 115 females) ranging in age from 9 to 25 y. A regression to predict maximum aerobic power was developed using body mass (kg), height (m), age (y), and maximum completed lengths of the FAST as the significant predictors of skating aerobic power (adjusted R² = 0.387, SEE = 7.25 mL·kg⁻¹·min⁻¹, p < 0.0001). These results support the application of the FAST in estimating aerobic power among male and female competitive ice hockey players between the ages of 9 and 25 years.
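A regression of this shape can be sketched with ordinary least squares via the normal equations. The player rows and the generating coefficients below are hypothetical, not the FAST study's fitted model.

```python
def ols(X, y):
    """Least-squares coefficients via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X starts with a leading 1 for the intercept."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))  # pivot row
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Hypothetical players: [1, mass kg, height m, age y, FAST laps] -> VO2max.
rows = [[1.0, 70.0, 1.75, 16.0, 40.0], [1.0, 82.0, 1.81, 21.0, 55.0],
        [1.0, 61.0, 1.66, 12.0, 31.0], [1.0, 90.0, 1.88, 24.0, 62.0],
        [1.0, 75.0, 1.72, 19.0, 44.0], [1.0, 66.0, 1.70, 13.0, 36.0]]
true_coef = [10.0, -0.1, 5.0, 0.2, 0.5]  # invented generating coefficients
vo2max = [sum(c * x for c, x in zip(true_coef, r)) for r in rows]
coef = ols(rows, vo2max)
```

The step-wise part of the published model, dropping predictors that do not contribute significantly, would sit on top of repeated fits like this one.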
Improving chlorophyll-a retrievals and cross-sensor consistency through the OCI algorithm concept
NASA Astrophysics Data System (ADS)
Feng, L.; Hu, C.; Lee, Z.; Franz, B. A.
2016-02-01
Abstract: The recently developed band-subtraction based OCI chlorophyll-a algorithm is more tolerant than the band-ratio OCx algorithms to errors from atmospheric correction and other sources in oligotrophic oceans (Chl ≤ 0.25 mg m-3), and it has been implemented by NASA as the default algorithm to produce global Chl data from all ocean color missions. However, two areas still require improvements in its current implementation. Firstly, the originally proposed algorithm switch between oligotrophic and more productive waters has been changed from 0.25 - 0.3 mg m-3 to 0.15 - 0.2 mg m-3 to account for the observed discontinuity in data statistics. Additionally, the algorithm does not account for variable proportions of colored dissolved organic matter (CDOM) in different ocean basins. Here, new step-wise regression equations with fine-tuned regression coefficients are used to raise the algorithm switch zone and to improve data statistics as well as retrieval accuracy. A new CDOM index (CDI) based on three spectral bands (412, 443 and 490 nm) is used as a weighting factor to adjust the algorithm for the optical disparities between different oceans. The updated Chl OCI algorithm is then evaluated for its overall accuracy using field observations through the SeaBASS data archive, and for its cross-sensor consistency using multi-sensor observations over the global oceans. Keywords: Chlorophyll-a, Remote sensing, Ocean color, OCI, OCx, CDOM, MODIS, SeaWiFS, VIIRS
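The band-subtraction idea can be sketched as follows. The CI baseline form follows Hu et al. (2012); the CI-to-Chl coefficients and the linear blend across the switch zone are written from memory as assumptions and should be checked against the NASA OBPG algorithm documentation.

```python
def color_index(rrs443, rrs555, rrs670):
    """CI: departure of Rrs(555) from a linear baseline drawn between the
    blue (443 nm) and red (670 nm) remote-sensing reflectances."""
    baseline = rrs443 + (555.0 - 443.0) / (670.0 - 443.0) * (rrs670 - rrs443)
    return rrs555 - baseline

def chl_from_ci(ci, a0=-0.4909, a1=191.6590):
    """CI-to-chlorophyll relation; a0/a1 quoted from memory -- verify."""
    return 10.0 ** (a0 + a1 * ci)

def blended_chl(chl_ci, chl_ocx, lo=0.15, hi=0.20):
    """Linear blend across the switch zone (lo-hi, mg m-3): pure CI-based
    Chl below, pure OCx above, a weighted mix in between."""
    if chl_ci <= lo:
        return chl_ci
    if chl_ci >= hi:
        return chl_ocx
    w = (chl_ci - lo) / (hi - lo)
    return (1.0 - w) * chl_ci + w * chl_ocx
```

Raising `lo`/`hi`, as the abstract proposes, widens the range over which the more error-tolerant CI retrieval is trusted.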
Dinato, Roberto C; Ribeiro, Ana P; Butugan, Marco K; Pereira, Ivye L R; Onodera, Andrea N; Sacco, Isabel C N
2015-01-01
To investigate the relationships between the perception of comfort and biomechanical parameters (plantar pressure and ground reaction force) during running with four different types of cushioning technology in running shoes. Randomized repeated measures. Twenty-two men, recreational runners (18-45 years) ran 12km/h with running shoes with four different cushioning systems. Outcome measures included nine items related to perception of comfort and 12 biomechanical measures related to the ground reaction forces and plantar pressures. Repeated measure ANOVAs, Pearson correlation coefficients, and step-wise multiple regression analyses were employed (p≤0.05). No significant correlations were found between the perception of comfort and the biomechanical parameters for the four types of investigated shoes. Regression analysis revealed that 56% of the perceived general comfort can be explained by the variables push-off rate and pressure integral over the forefoot (p=0.015) and that 33% of the perception of comfort over the forefoot can be explained by second peak force and push-off rate (p=0.016). The results did not demonstrate significant relationships between the perception of comfort and the biomechanical parameters for the three types of shoes investigated (Gel, Air, and ethylene-vinyl acetate). Only the shoe with Adiprene+ technology had its general comfort and cushioning perception predicted by the loads over the forefoot. Thus, in general, one cannot predict the perception of comfort of a running shoe through impact and plantar pressure received. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Structured chaos in a devil's staircase of the Josephson junction.
Shukrinov, Yu M; Botha, A E; Medvedeva, S Yu; Kolahchi, M R; Irie, A
2014-09-01
The phase dynamics of Josephson junctions (JJs) under external electromagnetic radiation is studied through numerical simulations. Current-voltage characteristics, Lyapunov exponents, and Poincaré sections are analyzed in detail. It is found that the subharmonic Shapiro steps at certain parameters are separated by structured chaotic windows. By performing a linear regression on the linear part of the data, a fractal dimension of D = 0.868 is obtained, with an uncertainty of ±0.012. The chaotic regions exhibit scaling similarity, and it is shown that the devil's staircase of the system can form a backbone that unifies and explains the highly correlated and structured chaotic behavior. These features suggest a system possessing multiple complete devil's staircases. The onset of chaos for subharmonic steps occurs through the Feigenbaum period doubling scenario. Universality in the sequence of periodic windows is also demonstrated. Finally, the influence of the radiation and JJ parameters on the structured chaos is investigated, and it is concluded that the structured chaos is a stable formation over a wide range of parameter values.
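The fractal-dimension estimate obtained "by performing a linear regression on the linear part of the data" is a log-log slope fit. Below, synthetic box counts with a known exponent stand in for the simulated staircase data; none of the numbers come from the JJ simulations.

```python
import math

def fit_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Box counting: N(s) ~ s^-D, so log N(s) is linear in log(1/s) with slope D.
d_true = 0.868
scales = [2.0 ** -k for k in range(1, 9)]
counts = [s ** -d_true for s in scales]   # noise-free synthetic counts

d_est = fit_slope([math.log(1.0 / s) for s in scales],
                  [math.log(c) for c in counts])
```

With real data the fit is restricted to the scaling region, and the scatter of the points about the line feeds the quoted uncertainty of ±0.012.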
Constructing a self: the role of self-structure and self-certainty in social anxiety.
Stopa, Lusia; Brown, Mike A; Luke, Michelle A; Hirsch, Colette R
2010-10-01
Current cognitive models stress the importance of negative self-perceptions in maintaining social anxiety, but focus predominantly on content rather than structure. Two studies examine the role of self-structure (self-organisation, self-complexity, and self-concept clarity) in social anxiety. In study one, self-organisation and self-concept clarity were correlated with social anxiety in an undergraduate sample (N=95). A step-wise multiple regression showed that after controlling for depression and self-esteem, which explained 35% of the variance in social anxiety scores, self-concept clarity uniquely predicted social anxiety and accounted for an additional 7% of the variance. At step 3 of the regression, the interaction between self-concept clarity and compartmentalisation (an aspect of evaluative self-organisation) accounted for a further 3% of the variance. In study two, high (n=26) socially anxious participants demonstrated less self-concept clarity than low socially anxious participants (n=26) on both the self-report measure used in study one and on computerised measures of self-consistency and confidence in self-related judgments. The high socially anxious group had more compartmentalised self-organisation than the low anxious group, but there were no differences between the two groups on any of the other measures of self-organisation. Self-complexity did not contribute to social anxiety in either study, although this may have been due to the absence of a stressor. Overall, the results suggest that self-structure has a potentially important role in understanding social anxiety, and that self-concept clarity and other aspects of self-structure, such as compartmentalisation, interact with each other and could be potential maintaining factors in social anxiety.
Cognitive therapy for social phobia might influence self-structure, and understanding the role of structural variables in maintenance and treatment could eventually help to improve treatment outcome. Copyright 2010 Elsevier Ltd. All rights reserved.
MIDAS: Regionally linear multivariate discriminative statistical mapping.
Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos
2018-07-01
Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. 
Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
Prevalence of gestational diabetes mellitus according to different criteria
Akgöl, Evren; Abuşoğlu, Sedat; Gün, Faik Deniz; Ünlü, Ali
2017-01-01
Objective: The two-step approach recommended by the National Diabetes Data Group (NDDG), Carpenter and Coustan (C&C), and O’Sullivan, and the single-step approach recommended by the International Association of Diabetes and Pregnancy Study Group (IADPSG) are used to diagnose gestational diabetes mellitus (GDM). We aimed to determine GDM prevalence and to compare the two-step and single-step approaches used in the southeastern region of Turkey. Materials and Methods: In total, 3048 records of pregnant women screened for GDM between 2008 and 2014 were retrospectively extracted from our laboratory information system. GDM was defined according to the criteria of NDDG, C&C, and O’Sullivan between 2008 and 2011, and according to those of the IADPSG between 2012 and 2014. Demographic variables were compared using Student’s t-test. The linear trends in GDM prevalence with age were calculated using logistic regression. Results: GDM prevalence was found as 4.8%, 8%, and 13.4% using the NDDG, C&C, and O’Sullivan two-step approach, respectively, and 22.3% with the IADPSG single-step approach. GDM prevalence increased with increasing age in both approaches. Conclusion: GDM prevalence was higher using the single-step approach than with the two-step approach. There was a significant increase in GDM prevalence using the IADPSG criteria. PMID:28913130
Converting point-wise nuclear cross sections to pole representation using regularized vector fitting
NASA Astrophysics Data System (ADS)
Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord
2018-03-01
Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluated nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in the ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). However, this approach requires the number of poles to be specified ahead of time. This article addresses this issue by adding a poles-and-residues filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles, and thus enabling the conversion of point-wise nuclear cross sections.
Effects of walking speed on the step-by-step control of step width.
Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C
2018-02-08
Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
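The variance-explained statistic can be sketched with a single predictor standing in for the pelvis mechanical state; the coefficients and noise level below are invented for illustration, not the study's data.

```python
import random

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def variance_explained(x, y):
    """R^2: proportion of step-width variance predicted from pelvis state."""
    slope, icept = fit_line(x, y)
    yhat = [slope * a + icept for a in x]
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

rng = random.Random(1)
pelvis = [rng.gauss(0.0, 1.0) for _ in range(200)]   # pelvis displacement (a.u.)
step_w = [0.12 + 0.03 * p + rng.gauss(0.0, 0.01) for p in pelvis]  # width (m)
r2 = variance_explained(pelvis, step_w)
```

In the study this R² is computed at successive points within a step, using several pelvis state variables at once, and its growth over the step is the quantity of interest.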
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research examined the relationship between 20 topsoil variables, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data were drawn from the multi-location rice trials carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and using the combination of multiple linear regression and fuzzy c-means. Analyses of normality and multicollinearity indicate that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yields into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of multiple linear regression and fuzzy c-means outperforms multiple linear regression alone, with a lower mean square error.
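The hybrid scheme (cluster the yields with fuzzy c-means, then fit a separate regression within each cluster) can be sketched as follows. This is a minimal hand-rolled sketch on synthetic data, not MARDI's trial data; the single soil covariate and the two yield regimes are invented for illustration:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=1):
    """Minimal fuzzy c-means; returns the n x c membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = U ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

rng = np.random.default_rng(0)
n = 150
soil = rng.uniform(0.0, 1.0, n)            # hypothetical topsoil covariate
low = rng.random(n) < 0.5                  # two latent yield regimes
yield_t = np.where(low, 2.0 + 1.0 * soil, 8.0 + 3.0 * soil) \
    + rng.normal(0.0, 0.2, n)

# Step 1: cluster the yields into two fuzzy groups, then harden the labels
U = fuzzy_c_means(yield_t[:, None], c=2)
labels = U.argmax(axis=1)

# Step 2: fit a separate linear regression within each cluster
X = np.column_stack([np.ones(n), soil])
pred_hybrid = np.empty(n)
for k in (0, 1):
    idx = labels == k
    beta, *_ = np.linalg.lstsq(X[idx], yield_t[idx], rcond=None)
    pred_hybrid[idx] = X[idx] @ beta

# Baseline: a single global regression over all observations
beta_g, *_ = np.linalg.lstsq(X, yield_t, rcond=None)
mse_hybrid = np.mean((yield_t - pred_hybrid) ** 2)
mse_global = np.mean((yield_t - X @ beta_g) ** 2)
```

On data with distinct yield regimes, the per-cluster fit yields a markedly lower mean square error than the single global regression, mirroring the paper's comparison.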
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
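The motivation for the grouped method, namely that standard linear regression is biased when censored observations are present, can be demonstrated with a small simulation. This sketch shows the attenuation caused by naively regressing right-censored trait values on genotype; it does not implement the grouped linear regression itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000
genotype = rng.integers(0, 3, n)                       # QTL genotype coded 0/1/2
survival = 5.0 + 2.0 * genotype + rng.normal(0, 1, n)  # true additive effect = 2
censor_at = 8.0
observed = np.minimum(survival, censor_at)             # right-censored trait

def slope(y, g):
    """Additive-effect estimate from ordinary least squares."""
    X = np.column_stack([np.ones(len(g)), g])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

slope_full = slope(survival, genotype)    # near 2.0 with uncensored data
slope_naive = slope(observed, genotype)   # attenuated when censoring is ignored
```

Because censoring truncates mostly the high-genotype survival times, the naive estimate is pulled toward zero, which is the bias the grouped approach is designed to avoid.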
Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli
2014-08-01
Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy and has the capability of mapping or reconstructing three dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine the atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We have used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making from 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we wish to demonstrate the excellence of using voxel-wise GLM analysis with DOT to image and study cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsal lateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies. Copyright © 2014 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
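A voxel-wise GLM of the kind described can be sketched as one least-squares solve across all voxels at once. The block design, voxel count, and effect size below are synthetic placeholders, not the study's DOT data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 120, 500
# Blocked task regressor: alternating 20-scan rest/task blocks (BART-like design)
task = np.tile(np.r_[np.zeros(20), np.ones(20)], 3)

# Synthetic reconstructed time series: the first 50 voxels respond to the task
Y = rng.normal(0.0, 1.0, (n_scans, n_voxels))
Y[:, :50] += 2.0 * task[:, None]

# Voxel-wise GLM: solve Y = X @ B for all voxels in a single lstsq call
X = np.column_stack([np.ones(n_scans), task])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # B[1, v] is the task beta at voxel v
resid = Y - X @ B

# Simple t statistic for the task regressor at each voxel
dof = n_scans - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_task = B[1] / se
```

Thresholding `t_task` then marks "activated" voxels, the voxel-wise analogue of choosing an activation period by hand.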
Konietschke, Frank; Libiger, Ondrej; Hothorn, Ludwig A
2012-01-01
Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model, or, in the case of non-normally distributed trait values, using the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equi-distant genotype scores, the Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive, or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset, and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches.
We provide a publicly available R library nparcomp that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D
2013-07-01
Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
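The linear-spline parameterization (intercept at age 3, one slope from 3 to 7, another from 7 to 13) can be sketched as a single-level regression. The real analysis used multilevel models with child-level random effects; the intake values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(3.0, 13.0, n)

def spline_basis(age):
    """Linear-spline basis with a knot at age 7:
    intercept at age 3, slope for ages 3-7, slope for ages after 7."""
    return np.column_stack([
        np.ones_like(age),
        np.minimum(age, 7.0) - 3.0,    # years accrued between ages 3 and 7
        np.maximum(age - 7.0, 0.0),    # years accrued after age 7
    ])

# Synthetic energy intakes (kcal/day): 1100 at age 3, +150/yr to 7, +80/yr after
true_beta = np.array([1100.0, 150.0, 80.0])
energy = spline_basis(age) @ true_beta + rng.normal(0.0, 50.0, n)

beta, *_ = np.linalg.lstsq(spline_basis(age), energy, rcond=None)
```

In the multilevel version, each child gets individual deviations from these average intercept and slope terms, which is what allows the trajectories to serve as exposures, outcomes, or mediators.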
Optimal control of parametric oscillations of compressed flexible bars
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
In this paper the problem of damping the oscillations of linear systems with piece-wise constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
Linear regression crash prediction models : issues and proposed solutions.
DOT National Transportation Integrated Search
2010-05-01
The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...
NASA Astrophysics Data System (ADS)
Park, Kyungjeen
This study aims to develop an objective hurricane initialization scheme that incorporates not only forecast model constraints but also observed features such as the initial intensity and size. It is based on the four-dimensional variational (4D-Var) bogus data assimilation (BDA) scheme originally proposed by Zou and Xiao (1999). The 4D-Var BDA consists of two steps: (i) specifying a bogus sea level pressure (SLP) field based on parameters observed by the Tropical Prediction Center (TPC) and (ii) assimilating the bogus SLP field under a forecast model constraint to adjust all model variables. This research focuses on improving the specification of the bogus SLP in the first step. Numerical experiments are carried out for Hurricane Bonnie (1998) and Hurricane Gordon (2000) to test the sensitivity of hurricane track and intensity forecasts to the specification of the initial vortex. Major results are listed below: (1) A linear regression model is developed for determining the size of the initial vortex based on the TPC-observed 34-kt wind radius. (2) A method is proposed to derive a radial profile of SLP from QuikSCAT surface winds. This profile is shown to be more realistic than ideal profiles derived from Fujita's and Holland's formulae. (3) It is found that it takes about 1 h for the hurricane prediction model to develop a conceptually correct hurricane structure, featuring a dominant role of hydrostatic balance at the initial time and a dynamic adjustment in less than 30 minutes. (4) Numerical experiments suggest that track prediction is less sensitive to the specification of the initial vortex structure than the intensity forecast. (5) Hurricane initialization using a QuikSCAT-derived initial vortex produced a reasonably good forecast of hurricane landfall, with a position error of 25 km and a 4-h delay in landfall time.
(6) Numerical experiments using the linear regression model for the size specification considerably outperform all the other formulations tested in terms of intensity prediction for both hurricanes. For example, the maximum track error is less than 110 km during the entire three-day forecasts for both hurricanes. The simulated Hurricane Gordon using the linear regression model made a nearly perfect landfall, with no position error and only a 1-h error in landfall time. (7) Diagnosis of model output indicates that the initial vortex specified by the linear regression model produces larger surface fluxes of sensible heat, latent heat, and moisture, as well as stronger downward angular momentum transport, than all the other schemes do. These enhanced energy supplies offset the energy lost to friction and gravity wave propagation, allowing the model to maintain a strong and realistic hurricane during the entire forward model integration.
Applicability of linear regression equation for prediction of chlorophyll content in rice leaves
NASA Astrophysics Data System (ADS)
Li, Yunmei
2005-09-01
A modeling approach is used to assess the applicability of derived equations that predict the chlorophyll content of rice leaves at a given view direction. Two radiative transfer models are used in the study: the PROSPECT model, operated at leaf level, and the FCR model, operated at canopy level. The study consists of three steps: (1) simulation of bidirectional reflectance from canopies with different leaf chlorophyll contents, leaf-area-index (LAI) values, and understorey configurations; (2) establishment of prediction relations for chlorophyll content by stepwise regression; and (3) assessment of the applicability of these relations. The results show that the accuracy of prediction is affected by the understorey configuration; however, accuracy tends to improve greatly as LAI increases.
Comparison between Linear and Nonlinear Regression in a Laboratory Heat Transfer Experiment
ERIC Educational Resources Information Center
Gonçalves, Carine Messias; Schwaab, Marcio; Pinto, José Carlos
2013-01-01
In order to interpret laboratory experimental data, undergraduate students are used to perform linear regression through linearized versions of nonlinear models. However, the use of linearized models can lead to statistically biased parameter estimates. Even so, it is not an easy task to introduce nonlinear regression and show for the students…
NASA Astrophysics Data System (ADS)
Ahmadian, Radin
2010-09-01
This study investigated the relationship between the anthocyanin concentration of different organic fruit species and the output voltage and current of a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solutions were made by crushing fresh fruits with different anthocyanin contents in 2 mL of de-ionized water, followed by filtration. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. The full current-voltage (I-V) co-variations were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration in the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration, with a p-value of 0.007. Fruits like blueberry and black raspberry, with the highest anthocyanin content, generated higher MPP. In a DSSC, a linear model may predict MPP based on anthocyanin concentration. This model is a first step toward finding organic anthocyanin sources in nature with dye concentrations high enough to generate energy.
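The regression of MPP on anthocyanin concentration can be reproduced in outline with ordinary least squares. The concentration and power values below are illustrative placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical (anthocyanin mg/100 g, measured MPP mW) pairs for several fruit dyes
anthocyanin = np.array([25.0, 90.0, 163.0, 245.0, 323.0, 589.0])
mpp = np.array([0.08, 0.14, 0.22, 0.30, 0.41, 0.66])

# Ordinary least-squares fit: MPP ~ intercept + slope * anthocyanin
A = np.column_stack([np.ones_like(anthocyanin), anthocyanin])
(intercept, slope_), *_ = np.linalg.lstsq(A, mpp, rcond=None)
pred = A @ np.array([intercept, slope_])
r2 = 1.0 - np.sum((mpp - pred) ** 2) / np.sum((mpp - mpp.mean()) ** 2)
```

A positive, tight-fitting slope on data like these is what a significance test (e.g. on the slope's t statistic) would then formalize as the reported p-value.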
Fast randomization of large genomic datasets while preserving alteration counts.
Gobbi, Andrea; Iorio, Francesco; Dawson, Kevin J; Wedge, David C; Tamborero, David; Alexandrov, Ludmil B; Lopez-Bigas, Nuria; Garnett, Mathew J; Jurman, Giuseppe; Saez-Rodriguez, Julio
2014-09-01
Studying combinatorial patterns in cancer genomic datasets has recently emerged as a tool for identifying novel cancer driver networks. Approaches have been devised to quantify, for example, the tendency of a set of genes to be mutated in a 'mutually exclusive' manner. The significance of the proposed metrics is usually evaluated by computing P-values under appropriate null models. To this end, a Monte Carlo method (the switching-algorithm) is used to sample simulated datasets under a null model that preserves patient- and gene-wise mutation rates. In this method, a genomic dataset is represented as a bipartite network, to which Markov chain updates (switching-steps) are applied. These steps modify the network topology, and a minimal number of them must be executed to draw simulated datasets independently under the null model. This number has previously been deduced empirically to be a linear function of the total number of variants, making this process computationally expensive. We present a novel approximate lower bound for the number of switching-steps, derived analytically. Additionally, we have developed the R package BiRewire, including new efficient implementations of the switching-algorithm. We illustrate the performance of BiRewire by applying it to large real cancer genomics datasets. We report vast reductions in time requirements with respect to existing implementations/bounds and equivalent P-value computations. Thus, we propose BiRewire for studying statistical properties in genomic datasets, and in other data that can be modeled as bipartite networks. BiRewire is available on BioConductor at http://www.bioconductor.org/packages/2.13/bioc/html/BiRewire.html. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
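A switching-step is the classic degree-preserving "checkerboard" swap on the binary patient-by-gene matrix; each accepted swap leaves every patient-wise and gene-wise mutation count unchanged. The sketch below is a plain NumPy rendition of the idea, not the BiRewire implementation:

```python
import numpy as np

def switching_steps(M, n_steps, seed=0):
    """Degree-preserving randomization of a binary patient x gene matrix.
    Each accepted step swaps a 2x2 'checkerboard' submatrix, preserving all
    row (patient-wise) and column (gene-wise) mutation counts."""
    M = M.copy()
    rng = np.random.default_rng(seed)
    n_rows, n_cols = M.shape
    done = 0
    while done < n_steps:
        r1, r2 = rng.choice(n_rows, 2, replace=False)
        c1, c2 = rng.choice(n_cols, 2, replace=False)
        sub = M[np.ix_([r1, r2], [c1, c2])]
        if sub[0, 0] == sub[1, 1] == 1 and sub[0, 1] == sub[1, 0] == 0:
            M[r1, c1] = M[r2, c2] = 0
            M[r1, c2] = M[r2, c1] = 1
            done += 1
        elif sub[0, 0] == sub[1, 1] == 0 and sub[0, 1] == sub[1, 0] == 1:
            M[r1, c1] = M[r2, c2] = 1
            M[r1, c2] = M[r2, c1] = 0
            done += 1
    return M

rng = np.random.default_rng(1)
data = (rng.random((30, 40)) < 0.2).astype(int)   # toy mutation matrix
randomized = switching_steps(data, n_steps=500)
```

The question the paper answers analytically is how many such accepted steps are needed before `randomized` is an independent draw from the null model.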
Stochastic Surface Mesh Reconstruction
NASA Astrophysics Data System (ADS)
Ozendi, M.; Akca, D.; Topan, H.
2018-05-01
A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model, which is capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
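The point-wise quality measure can be sketched as follows, assuming the "semi-diagonal axis length" is the length of the vector of the ellipsoid's three semi-axes (the square roots of the covariance eigenvalues); the covariance values and the threshold are invented for illustration:

```python
import numpy as np

def ellipsoid_quality(cov):
    """Point-wise quality from a 3x3 coordinate covariance matrix:
    the semi-diagonal of the error ellipsoid, i.e. the length of the
    vector of its three semi-axes (square roots of the eigenvalues)."""
    eigvals = np.linalg.eigvalsh(cov)             # variances along principal axes
    semi_axes = np.sqrt(np.clip(eigvals, 0.0, None))
    return float(np.linalg.norm(semi_axes))       # sqrt(a^2 + b^2 + c^2)

# Two hypothetical TLS points: one precise, one with a large range error
precise = np.diag([1e-6, 1e-6, 1e-6])             # covariances in m^2
noisy = np.diag([1e-6, 1e-6, 4e-4])
q_precise = ellipsoid_quality(precise)
q_noisy = ellipsoid_quality(noisy)

# Keep only points whose quality measure falls below a threshold
threshold = 0.005                                 # 5 mm, illustrative
keep = [q < threshold for q in (q_precise, q_noisy)]
```

Triangulating only the `keep` points realizes the "least errors enter the mesh" selection described above.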
S-World: A high resolution global soil database for simulation modelling (Invited)
NASA Astrophysics Data System (ADS)
Stoorvogel, J. J.
2013-12-01
There is an increasing call for high resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, e.g., global soil carbon stocks. However, they do not allow for simulation runs with, e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using, e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database.
Step 3: A model of soil formation is developed that focuses on the basic conceptual question of where we are within the range of a particular soil property at a particular location, given a specific soil type. The soil properties are predicted for each grid cell based on the soil type, the corresponding ranges of soil properties, and the co-variables. Step 4: Standard depth profiles are developed for each of the soil types using the diagnostic criteria of the soil types and soil profile information from the ISRIC-WISE database. The standard soil profiles are combined with the predicted values for the topsoil and subsoil, yielding unique soil profiles at each location. Step 5: In a final step, additional soil properties are added to the database using averages for the soil types and pedo-transfer functions. The methodology, denominated S-World (Soils of the World), results in readily available global maps with quantitative pedon data for modelling purposes. It forms the basis for the Global Gridded Crop Model Intercomparison carried out within AgMIP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Thomas C.; Davies, James F.; Wilson, Kevin R.
2017-01-13
A new method for measuring diffusion in the condensed phase of single aerosol particles is proposed and demonstrated. The technique is based on the frequency-dependent response of a binary particle to oscillations in the vapour phase of one of its chemical components. Here, we discuss how this physical situation allows what would typically be a non-linear boundary value problem to be approximately reduced to a linear boundary value problem. For the case of aqueous aerosol particles, we investigate the accuracy of the closed-form analytical solution to this linear problem through a comparison with the numerical solution of the full problem. Then, using experimentally measured whispering gallery modes to track the frequency-dependent response of aqueous particles to relative humidity oscillations, we determine diffusion coefficients as a function of water activity. The measured diffusion coefficients are compared to previously reported values found using the two common experiments: (i) the analysis of the sorption/desorption of water from a particle after a step-wise change to the surrounding relative humidity and (ii) the isotopic exchange of water between a particle and the vapour phase. The technique presented here has two main strengths: first, when compared to the sorption/desorption experiment, it does not require the numerical evaluation of a boundary value problem during the fitting process, as a closed-form expression is available. Second, when compared to the isotope exchange experiment, it does not require the use of labeled molecules. Therefore, the frequency-dependent experiment retains the advantages of these two commonly used methods but does not suffer from their drawbacks.
Structure and Dynamics of the tRNA-like Structure Domain of Brome Mosaic Virus
NASA Astrophysics Data System (ADS)
Vieweger, Mario; Nesbitt, David
2014-03-01
Conformational switching is widely accepted as a regulatory mechanism of gene expression in bacterial systems. More recently, similar regulation mechanisms have been emerging for viral systems. One of the most abundant and best studied systems is the tRNA-like structure domain, which is found in a number of plant viruses across eight genera. In this work, the folding dynamics of the tRNA-like structure (TLS) domain of Brome Mosaic Virus are investigated using single-molecule Fluorescence Resonance Energy Transfer techniques. In particular, burst fluorescence is applied to observe metal-ion induced folding in freely diffusing RNA constructs resembling the 3'-terminal 169 nt of BMV RNA3. Histograms of EFRET probabilities reveal a complex equilibrium of three distinct populations. A step-wise kinetic model for TLS folding is developed in accord with the evolution of conformational populations and structural information in the literature. In this mechanism, formation of functional TLS domains from unfolded RNAs requires two consecutive steps: (1) hybridization of a long-range stem interaction, followed by (2) formation of a 3' pseudoknot. This three-state equilibrium is well described by step-wise dissociation constants K1 (328(30) μM) and K2 (1092(183) μM) for [Mg2+]-induced and K1 (74(6) mM) and K2 (243(52) mM) for [Na+]-induced folding. The kinetic model is validated by oligo competition with the STEM interaction. Implications of this conformational folding mechanism are discussed with regard to regulation of virus replication.
ERIC Educational Resources Information Center
Phillips, Deborah A.
2017-01-01
Preschool education is now firmly linked to two aspirational purposes: as the first step on a trajectory of academic and life success for all children and as wise economic policy for the nation. Both purposes are grounded in an assumption that the early developmental boost children receive from preschool will produce lasting impacts. However,…
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems ... the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation ... MODEL CONTINUITY: Intelligent real time systems monitor, respond to, or control an external environment. This environment is connected to the digital
A Step-Wise Approach to Elicit Triangular Distributions
NASA Technical Reports Server (NTRS)
Greenberg, Marc W.
2013-01-01
Adapt/combine known methods to demonstrate an expert judgment elicitation process that: (1) models an expert's inputs as a triangular distribution, (2) incorporates techniques to account for expert bias, and (3) is structured in a way that helps justify the expert's inputs. This paper shows one way of "extracting" expert opinion for estimating purposes. Nevertheless, as with most subjective methods, there are many ways to do this.
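Eliciting a triangular distribution from minimum/most-likely/maximum judgments can be sketched as below. The 10% range widening used to counter expert overconfidence is an assumed adjustment for illustration, not necessarily the paper's exact technique:

```python
import numpy as np

# Expert-elicited cost estimate: optimistic, most likely, pessimistic (illustrative $K)
low, mode, high = 80.0, 100.0, 150.0

# Assumed bias adjustment: widen the elicited range by 10% about the mode,
# since experts tend to state intervals that are too narrow
low_adj = mode - 1.1 * (mode - low)
high_adj = mode + 1.1 * (high - mode)

rng = np.random.default_rng(0)
samples = rng.triangular(low_adj, mode, high_adj, size=100_000)

# Mean of a triangular distribution is the average of its three parameters
mean_analytic = (low_adj + mode + high_adj) / 3.0
```

The sampled distribution can then feed a Monte Carlo cost-risk roll-up, with the documented min/mode/max serving as the justification trail for the expert's inputs.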
The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring
ERIC Educational Resources Information Center
Haberman, Shelby J.; Sinharay, Sandip
2010-01-01
Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
Does my step look big in this? A visual illusion leads to safer stepping behaviour.
Elliott, David B; Vale, Anna; Whitaker, David; Buckley, John G
2009-01-01
Tripping is a common factor in falls and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. 21 young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p<0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.
Three-step method for menstrual and oral contraceptive cycle verification.
Schaumberg, Mia A; Jenkins, David G; Janse de Jonge, Xanne A K; Emmerton, Lynne M; Skinner, Tina L
2017-11-01
Fluctuating endogenous and exogenous ovarian hormones may influence exercise parameters; yet control and verification of ovarian hormone status is rarely reported and limits current exercise science and sports medicine research. The purpose of this study was to determine the effectiveness of an individualised three-step method in identifying the mid-luteal or high hormone phase in endogenous and exogenous hormone cycles in recreationally-active women and determine hormone and demographic characteristics associated with unsuccessful classification. Cross-sectional study design. Fifty-four recreationally-active women who were either long-term oral contraceptive users (n=28) or experiencing regular natural menstrual cycles (n=26) completed step-wise menstrual mapping, urinary ovulation prediction testing and venous blood sampling for serum/plasma hormone analysis on two days, 6-12 days after positive ovulation prediction, to verify ovarian hormone concentrations. Mid-luteal phase was successfully verified in 100% of oral contraceptive users, and 70% of naturally-menstruating women. Thirty percent of participants were classified as luteal phase deficient; when excluded, the success of the method was 89%. Lower age, body fat and longer menstrual cycles were significantly associated with luteal phase deficiency. A step-wise method including menstrual cycle mapping, urinary ovulation prediction and serum/plasma hormone measurement was effective at verifying ovarian hormone status. Additional consideration of age, body fat and cycle length enhanced identification of luteal phase deficiency in physically-active women. These findings enable the development of stricter exclusion criteria for female participants in research studies and minimise the influence of ovarian hormone variations within sports and exercise science and medicine research. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nguyen, Linh V.; Warren-Smith, Stephen C.; Ebendorff-Heidepriem, Heike; Monro, Tanya M.
2016-04-01
We report a high temperature fiber sensor based on the multimode interference effect within a suspended core microstructured optical fiber (SCF). By splicing a short section of SCF with a lead-in single-mode fiber (SMF), the sensor head was formed. A complex interference pattern was obtained in the reflection spectrum as the result of the multiple excited modes in the SCF. The complexity of the interference indicates that there are more than two dominantly excited modes in the SCF, as resolved by Fast Fourier Transform (FFT) analysis of the interference. The proposed sensor was subjected to temperature variation from 20°C to 1100°C. The fringe of the filtered spectrum red-shifted linearly with respect to temperature varying between 20°C and 1100°C, with similar temperature sensitivity for increasing and decreasing temperature. Phase monitoring was used for an extended temperature experiment (80 hours) in which the sensor was subjected to several different temperature variation conditions namely (i) step-wise increase/decrease with 100°C steps between 20°C and 1100°C, (ii) dwelling overnight at 400°C, (iii) free fall from 1100°C to 132°C, and (iv) continuous increase of temperature from 132°C to 1100°C. Our approach serves as a simple and cost-effective alternative to the better-known high temperature fiber sensors such as the fiber Bragg grating (FBG) in sapphire fibers or regenerated FBG in photosensitive optical fibers.
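The FFT analysis mentioned above resolves each pair of interfering modes as a distinct peak in the spatial-frequency content of the fringes. A hedged illustration with a synthetic two-component interference pattern (the fringe frequencies are arbitrary, not the sensor's):

```python
import numpy as np

# Synthetic reflection spectrum: two cosine fringe components, mimicking
# interference among multiple excited modes (frequencies are illustrative).
n = 1024
axis = np.linspace(0.0, 1.0, n, endpoint=False)
spectrum = (np.cos(2 * np.pi * 40 * axis)
            + 0.5 * np.cos(2 * np.pi * 90 * axis))

# The FFT magnitude shows one peak per mode-pair beat frequency.
mag = np.abs(np.fft.rfft(spectrum))
freqs = np.fft.rfftfreq(n, d=1.0 / n)
peaks = sorted(float(f) for f in freqs[np.argsort(mag)[-2:]])
print(peaks)  # the two fringe frequencies present in the spectrum
```

Counting such peaks is how one concludes that more than two modes are dominantly excited: k interfering modes produce up to k(k-1)/2 distinct beat frequencies.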
The Architecture of the Golfer's Brain
Jäncke, Lutz; Koeneke, Susan; Hoppe, Ariana; Rominger, Christina; Hänggi, Jürgen
2009-01-01
Background Several recent studies have shown practice-dependent structural alterations in humans. Cross-sectional studies of intensive practice of specific tasks suggest associated long-term structural adaptations. Playing golf at a high level of performance is one of the most demanding sporting activities. In this study, we report the relationship between a particular level of proficiency in playing golf (indicated by golf handicap level) and specific neuroanatomical features. Principal Findings Using voxel-based morphometry (VBM) of grey (GM) and white matter (WM) volumes and fractional anisotropy (FA) measures of the fibre tracts, we identified differences between skilled (professional golfers and golfers with a handicap from 1–14) and less-skilled golfers (golfers with a handicap from 15–36 and non-golfers). Larger GM volumes were found in skilled golfers in a fronto-parietal network including premotor and parietal areas. Skilled golfers revealed smaller WM volume and FA values in the vicinity of the corticospinal tract at the level of the internal and external capsule and in the parietal operculum. However, there was no structural difference within the skilled and less-skilled golfer groups. Conclusion There is no linear relationship between the anatomical findings and handicap level, amount of practice, and practice hours per year. There was, however, a strong difference between highly-practiced golfers (at least 800–3,000 hours) and those who have practised less or non-golfers without any golfing practice, thus indicating a step-wise structural change rather than a linear one. PMID:19277116
Utilization of Dental Services in Public Health Center: Dental Attendance, Awareness and Felt Needs.
Pewa, Preksha; Garla, Bharath K; Dagli, Rushabh; Bhateja, Geetika Arora; Solanki, Jitendra
2015-10-01
In rural India, dental diseases occur due to many factors, which include inadequate or improper use of fluoride and a lack of knowledge regarding oral health and oral hygiene, which prevent proper screening and dental care of oral diseases. The objective of the study was to evaluate the dental attendance, awareness and utilization of dental services in a public health center. A cross-sectional study was conducted among 251 study subjects who were visiting the dental outpatient department (OPD) of the public health centre (PHC), Guda Bishnoi, Jodhpur, using a pretested proforma from July 2014 to October 2014. A pretested questionnaire was used to collect the data regarding socioeconomic status and demographic factors affecting the utilization of dental services. Pearson's Chi-square test and step-wise logistic regression were applied for the analysis. Statistically significant results were found in relation to age, educational status, socioeconomic status and gender with dental attendance, dental awareness and felt needs. A p-value <0.05 was considered statistically significant. The services provided in public health centers should be based on the felt need of the population to increase attendance as well as utilization of dental services, thereby improving the oral health status of the population.
Depression is an independent determinant of life satisfaction early after stroke.
Oosterveer, Daniëlla M; Mishre, Radha Rambaran; van Oort, Andrea; Bodde, Karin; Aerden, Leo A M
2017-03-06
Life satisfaction is reduced in stroke patients. However, as a rule, rehabilitation goals are not aimed at life satisfaction, but at activities and participation. In order to optimize life satisfaction in stroke patients, rehabilitation should take into account the determinants of life satisfaction. The aim of this study was therefore to determine what factors are independent determinants of life satisfaction in a large group of patients early after stroke. Stroke-surviving patients were examined by a specialized nurse 6 weeks after discharge from hospital or rehabilitation setting. A standardized history and several screening lists, including the Lisat-9, were completed. Step-wise regression was used to identify independent determinants of life satisfaction. A total of 284 stroke-surviving patients were included in the study. Of these, 117 answered all of the Lisat-9 questions. Most patients (66.5%) rated their life as a whole as "satisfying" or "very satisfying". More depressive symptoms were independently associated with lower life satisfaction (p < 0.001). Most stroke-surviving patients are satisfied with their life early after a stroke. The score on the Hospital Anxiety and Depression Scale depression items is independently associated with life satisfaction. Physicians should therefore pay close attention to the mood of these patients.
An accurate and rapid radiographic method of determining total lung capacity
Reger, R. B.; Young, A.; Morgan, W. K. C.
1972-01-01
The accuracy and reliability of Barnhard's radiographic method of determining total lung capacity have been confirmed by several groups of investigators. Despite its simplicity and general reliability, it has several shortcomings, especially when used in large-scale epidemiological surveys. Of these, the most serious is related to film technique; thus, when the cardiac and diaphragmatic shadows are poorly defined, the appropriate measurements cannot be made accurately. A further drawback involves the time needed to measure the segments and to perform the necessary calculations. We therefore set out to develop an abbreviated and simpler radiographic method for determining total lung capacity. This uses a step-wise multiple regression model which allows total lung capacity to be derived as follows: posteroanterior and lateral films are divided into the standard sections as described in the text; the width, depth, and height of sections 1 and 4 are measured in centimetres; finally, the necessary derivations and substitutions are made and applied to the formula Ŷ = −1.41148 + (0.00479 X1) + (0.00097 X4), where Ŷ is the total lung capacity. In our hands this method has provided a simple, rapid, and acceptable method of determining total lung capacity. PMID:5034594
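The reported regression formula can be applied directly. A short sketch; the section-derived volume terms X1 and X4 below are illustrative numbers, not measurements from the paper:

```python
# Total lung capacity from the reported step-wise regression formula:
#   Y_hat = -1.41148 + 0.00479 * X1 + 0.00097 * X4
# X1 and X4 are the terms derived from sections 1 and 4 (values illustrative).
def total_lung_capacity(x1, x4):
    return -1.41148 + 0.00479 * x1 + 0.00097 * x4

tlc = total_lung_capacity(1000.0, 1500.0)
print(round(tlc, 5))   # 4.83352
```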
Estimating Dungeness crab (Cancer magister) abundance: Crab pots and dive transects compared
Taggart, S. James; O'Clair, Charles E.; Shirley, Thomas C.; Mondragon, Jennifer
2004-01-01
Dungeness crabs (Cancer magister) were sampled with commercial pots and counted by scuba divers on benthic transects at eight sites near Glacier Bay, Alaska. Catch per unit of effort (CPUE) from pots was compared to the density estimates from dives to evaluate the bias and power of the two techniques. Yearly sampling was conducted in two seasons: April and September, from 1992 to 2000. Male CPUE estimates from pots were significantly lower in April than in the following September; a step-wise regression demonstrated that season accounted for more of the variation in male CPUE than did temperature. In both April and September, pot sampling was significantly biased against females. When females were categorized as ovigerous and nonovigerous, it was clear that ovigerous females accounted for the majority of the bias because pots were not biased against nonovigerous females. We compared the power of pots and dive transects in detecting trends in populations and found that pots had much higher power than dive transects. Despite their low power, the dive transects were very useful for detecting bias in our pot sampling and in identifying the optimal times of year to sample so that pot bias could be avoided.
NASA Astrophysics Data System (ADS)
Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui
2014-07-01
The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
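A minimal sketch of the scheme, assuming simple slope-interval patterns (the full algorithm also incorporates significance testing and multiple window sizes): fit a regression in each sliding window, label the window by the interval its slope falls in, and count transitions between consecutive labels as weighted directed edges.

```python
import numpy as np
from collections import Counter

def pattern_transitions(x, y, window, slope_bins):
    """Slide a window over two series, fit y ~ x by least squares in each
    window, discretize the slope into an interval label (a pattern), and
    count transitions between consecutive patterns as weighted edges."""
    labels = []
    for start in range(len(x) - window + 1):
        xs, ys = x[start:start + window], y[start:start + window]
        slope = np.polyfit(xs, ys, 1)[0]
        labels.append(int(np.digitize(slope, slope_bins)))
    edges = Counter(zip(labels, labels[1:]))  # (from, to) pattern -> weight
    return labels, edges

rng = np.random.default_rng(0)
x = np.arange(100.0)
y = 0.5 * x + rng.normal(scale=2.0, size=100)   # illustrative pair of series
labels, edges = pattern_transitions(x, y, window=20, slope_bins=[0.0, 0.4, 0.6])
print(edges)
```

The resulting `edges` Counter is exactly a weighted adjacency list, from which out-degree and betweenness centrality can be computed with any graph library.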
Forced smoking abstinence: not enough for smoking cessation.
Clarke, Jennifer G; Stein, L A R; Martin, Rosemarie A; Martin, Stephen A; Parker, Donna; Lopes, Cheryl E; McGovern, Arthur R; Simon, Rachel; Roberts, Mary; Friedman, Peter; Bock, Beth
2013-05-13
Millions of Americans are forced to quit smoking as they enter tobacco-free prisons and jails, but most return to smoking within days of release. Interventions are needed to sustain tobacco abstinence after release from incarceration. To evaluate the extent to which the WISE intervention (Working Inside for Smoking Elimination), based on motivational interviewing (MI) and cognitive behavioral therapy (CBT), decreases relapse to smoking after release from a smoke-free prison. Participants were recruited approximately 8 weeks prior to their release from a smoke-free prison and randomized to 6 weekly sessions of either education videos (control) or the WISE intervention. A tobacco-free prison in the United States. A total of 262 inmates (35% female). Continued smoking abstinence was defined as 7-day point-prevalence abstinence validated by urine cotinine measurement. At the 3-week follow-up, 25% of participants in the WISE intervention (31 of 122) and 7% of the control participants (9 of 125) continued to be tobacco abstinent (odds ratio [OR], 4.4; 95% CI, 2.0-9.7). In addition to the intervention, Hispanic ethnicity, a plan to remain abstinent, and being incarcerated for more than 6 months were all associated with increased likelihood of remaining abstinent. In the logistic regression analysis, participants randomized to the WISE intervention were 6.6 times more likely to remain tobacco abstinent at the 3-week follow up than those randomized to the control condition (95% CI, 2.5-17.0). Nonsmokers at the 3-week follow-up had an additional follow-up 3 months after release, and overall 12% of the participants in the WISE intervention (14 of 122) and 2% of the control participants (3 of 125) were tobacco free at 3 months, as confirmed by urine cotinine measurement (OR, 5.3; 95% CI, 1.4-23.8). Forced tobacco abstinence alone during incarceration has little impact on postrelease smoking status. 
A behavioral intervention provided prior to release greatly improves cotinine-confirmed smoking cessation in the community. clinicaltrials.gov Identifier: NCT01122589.
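The reported 3-week odds ratio can be reproduced from the abstract's own counts (31/122 abstinent with the WISE intervention vs 9/125 in the control group):

```python
# Odds ratio from a 2x2 table of abstinence counts:
#   OR = (a / (n1 - a)) / (b / (n2 - b))
def odds_ratio(a, n1, b, n2):
    """a abstinent of n1 (intervention), b abstinent of n2 (control)."""
    return (a / (n1 - a)) / (b / (n2 - b))

print(round(odds_ratio(31, 122, 9, 125), 1))   # matches the reported OR of 4.4
```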
Wilke, Marko
2018-02-01
This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.
Measurement, time-stamping, and analysis of electrodermal activity in fMRI
NASA Astrophysics Data System (ADS)
Smyser, Christopher; Grabowski, Thomas J.; Rainville, Pierre; Bechara, Antoine; Razavi, Mehrdad; Mehta, Sonya; Eaton, Brent L.; Bolinger, Lizann
2002-04-01
A low cost fMRI-compatible system was developed for detecting electrodermal activity without inducing image artifact. Subject electrodermal activity was measured on the plantar surface of the foot using a standard recording circuit. Filtered analog skin conductance responses (SCR) were recorded with a general purpose, time-stamping data acquisition system. A conditioning paradigm involving painful thermal stimulation was used to demonstrate SCR detection and investigate neural correlates of conditioned autonomic activity. 128x128 pixel EPI-BOLD images were acquired with a GE 1.5T Signa scanner. Image analysis was performed using voxel-wise multiple linear regression. The covariate of interest was generated by convolving stimulus event onset with a standard hemodynamic response function. The function was time-shifted to determine optimal activation. Significance was tested using the t-statistic. Image quality was unaffected by the device, and conditioned and unconditioned SCRs were successfully detected. Conditioned SCRs correlated significantly with activity in the right anterior insular cortex. The effect was more robust when responses were scaled by SCR amplitude. The ability to measure and time register SCRs during fMRI acquisition enables studies of cognitive processes marked by autonomic activity, including those involving decision-making, pain, emotion, and addiction.
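The regression step can be sketched compactly: convolve stimulus onsets with a hemodynamic response function and fit each voxel's time series against the resulting covariate. The HRF shape and all signal parameters below are illustrative stand-ins, not the study's exact model:

```python
import numpy as np

def hrf(t):
    """A double-gamma shape standing in for the canonical HRF (illustrative)."""
    return t ** 5 * np.exp(-t) / 120.0 - 0.1 * t ** 10 * np.exp(-t) / 3628800.0

n_scans, tr = 120, 2.0
t = np.arange(0.0, 30.0, tr)
onsets = np.zeros(n_scans)
onsets[::20] = 1.0                            # stimulus events every 20 scans
regressor = np.convolve(onsets, hrf(t))[:n_scans]

# Voxel-wise multiple linear regression: design = [HRF regressor, constant].
X = np.column_stack([regressor, np.ones(n_scans)])
rng = np.random.default_rng(1)
voxel = 2.0 * regressor + rng.normal(scale=0.05, size=n_scans)  # synthetic voxel
beta = np.linalg.lstsq(X, voxel, rcond=None)[0]
print(beta[0])   # estimated response amplitude, near the simulated value of 2
```

Time-shifting the regressor, as in the study, simply means repeating this fit with the onset vector delayed by a few TRs and keeping the shift with the best fit.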
Maternal adiposity negatively influences infant brain white matter development.
Ou, Xiawei; Thakali, Keshari M; Shankar, Kartik; Andres, Aline; Badger, Thomas M
2015-05-01
To study potential effects of maternal body composition on central nervous system (CNS) development of newborn infants. Diffusion tensor imaging (DTI) was used to evaluate brain white matter development in 2-week-old, full-term, appropriate for gestational age (AGA) infants from uncomplicated pregnancies of normal-weight (BMI < 25 at conception) or obese (BMI ≥ 30 at conception) and otherwise healthy mothers. Tract-based spatial statistics (TBSS) analyses were used for voxel-wise group comparison of fractional anisotropy (FA), a sensitive measure of white matter integrity. DNA methylation analyses of umbilical cord tissue focused on genes known to be important in CNS development were also performed. Newborns from obese women had significantly lower FA values in multiple white matter regions than those born of normal-weight mothers. Global and regional FA values negatively correlated (P < 0.05) with maternal fat mass percentage. Linear regression analysis followed by gene ontology enrichment showed that methylation status of 68 CpG sites representing 57 genes with GO terms related to CNS development was significantly associated with maternal adiposity status. These results suggest a negative association between maternal adiposity and white matter development in offspring. © 2015 The Obesity Society.
Several steps/day indicators predict changes in anthropometric outcomes: HUB City Steps.
Thomson, Jessica L; Landry, Alicia S; Zoellner, Jamie M; Tudor-Locke, Catrine; Webster, Michael; Connell, Carol; Yadrick, Kathy
2012-11-15
Walking for exercise remains the most frequently reported leisure-time activity, likely because it is simple, inexpensive, and easily incorporated into most people's lifestyle. Pedometers are simple, convenient, and economical tools that can be used to quantify step-determined physical activity. Few studies have attempted to define the direct relationship between dynamic changes in pedometer-determined steps/day and changes in anthropometric and clinical outcomes. Hence, the objective of this secondary analysis was to evaluate the utility of several descriptive indicators of pedometer-determined steps/day for predicting changes in anthropometric and clinical outcomes using data from a community-based walking intervention, HUB City Steps, conducted in a southern, African American population. A secondary aim was to evaluate whether treating steps/day data for implausible values affected the ability of these data to predict intervention-induced changes in clinical and anthropometric outcomes. The data used in this secondary analysis were collected in 2010 from 269 participants in a six-month walking intervention targeting a reduction in blood pressure. Throughout the intervention, participants submitted weekly steps/day diaries based on pedometer self-monitoring. Changes (six-month minus baseline) in anthropometric (body mass index, waist circumference, percent body fat [%BF], fat mass) and clinical (blood pressure, lipids, glucose) outcomes were evaluated. Associations between steps/day indicators and changes in anthropometric and clinical outcomes were assessed using bivariate tests and multivariable linear regression analysis which controlled for demographic and baseline covariates. Significant negative bivariate associations were observed between steps/day indicators and the majority of anthropometric and clinical outcome changes (r = -0.3 to -0.2; P < 0.05). 
After controlling for covariates in the regression analysis, only the relationships between steps/day indicators and changes in anthropometric (not clinical) outcomes remained significant. For example, a 1,000 steps/day increase in intervention mean steps/day resulted in a 0.1% decrease in %BF. Results for the three pedometer datasets (full, truncated, and excluded) were similar and yielded few meaningful differences in interpretation of the findings. Several descriptive indicators of steps/day may be useful for predicting anthropometric outcome changes. Further, manipulating steps/day data to address implausible values has little overall effect on the ability to predict these anthropometric changes.
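The headline association can be applied as a simple linear prediction; the coefficient is the one quoted in the abstract, and the example input is arbitrary:

```python
# Reported association: each additional 1,000 steps/day in intervention mean
# steps/day corresponded to a 0.1 point decrease in percent body fat (%BF).
COEF_PER_1000_STEPS = -0.1   # percentage points of %BF per 1,000 steps/day

def predicted_bf_change(delta_steps_per_day):
    """Predicted change in %BF for a given change in mean steps/day."""
    return COEF_PER_1000_STEPS * (delta_steps_per_day / 1000.0)

print(predicted_bf_change(3000))   # -0.3 percentage points for +3,000 steps/day
```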
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-06-01
This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
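IRLS can be sketched in a few lines: alternate between estimating coefficients and down-weighting observations with large residuals. The Huber-style weights and MAD scale estimate below are one common choice and may differ from the paper's exact scheme; the calibration numbers are illustrative.

```python
import numpy as np

def irls(X, y, n_iter=50, delta=1e-6):
    """Iteratively re-weighted least squares with Huber-style weights
    (one common weighting scheme; not necessarily the paper's)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + delta          # robust (MAD) scale
        w = np.clip(1.345 * s / (np.abs(r) + delta), None, 1.0)  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Calibration line (concentration vs response) with one gross outlier.
x = np.linspace(0.02, 0.2, 10)
X = np.column_stack([np.ones_like(x), x])
y = 5.0 + 100.0 * x
y[3] += 8.0                                   # simulated outlier point
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_irls = irls(X, y)
print(beta_ols[1], beta_irls[1])   # OLS slope is pulled off; IRLS recovers ~100
```

This is exactly why the robust fits in the study improve the regression parameters: the outlier's leverage on the slope and intercept is suppressed rather than averaged in.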
Developing a dengue forecast model using machine learning: A case study in China.
Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-10-01
In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
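A hedged sketch of the model-selection step: tune an SVR over a time-ordered cross-validation split and score by RMSE. The synthetic weekly series and the parameter grid are illustrative, not the study's data or settings:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import TimeSeriesSplit, GridSearchCV

# Synthetic weekly series standing in for dengue counts, driven by a climate
# covariate and a search index (all values illustrative).
rng = np.random.default_rng(0)
n_weeks = 200
temperature = 20 + 8 * np.sin(2 * np.pi * np.arange(n_weeks) / 52)
search_index = np.maximum(0, rng.normal(50, 10, n_weeks))
cases = 5 + 0.8 * temperature + 0.3 * search_index + rng.normal(0, 2, n_weeks)

X = np.column_stack([temperature, search_index])
cv = TimeSeriesSplit(n_splits=5)            # respects temporal ordering
grid = GridSearchCV(SVR(kernel="rbf"),
                    {"C": [1, 10, 100], "epsilon": [0.1, 1.0]},
                    cv=cv, scoring="neg_root_mean_squared_error")
grid.fit(X, cases)
pred = grid.predict(X)
rmse = float(np.sqrt(np.mean((cases - pred) ** 2)))
print(grid.best_params_, round(rmse, 2))
```

`TimeSeriesSplit` is the key detail: shuffled cross-validation would leak future weeks into training folds and overstate forecast skill.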
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
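For contrast with the paper's algebraic construction, the regression-based baseline and the classical stability condition it refers to can be sketched as follows: an AR(2) fit by least squares, with stability verified via the roots of the characteristic polynomial.

```python
import numpy as np

def fit_ar2_ls(x):
    """Order-two AR fit by least squares (the regression-based baseline
    the paper compares against)."""
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]
    return a1, a2

def is_stable(a1, a2):
    """Classical AR stability: roots of z^2 - a1*z - a2 inside the unit circle."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))

# Simulate a stable AR(2) process and recover its coefficients.
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
a1, a2 = fit_ar2_ls(x)
print(round(a1, 2), round(a2, 2), is_stable(a1, a2))
```

The algebraic method's selling point is that it reaches a stable, consistent (a1, a2) from long-time average statistics alone, with no such training trajectory required.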
Gierlinger, Notburga; Luss, Saskia; König, Christian; Konnerth, Johannes; Eder, Michaela; Fratzl, Peter
2010-01-01
The functional characteristics of plant cell walls depend on the composition of the cell wall polymers, as well as on their highly ordered architecture at scales from a few nanometres to several microns. Raman spectra of wood acquired with linear polarized laser light include information about polymer composition as well as the alignment of cellulose microfibrils with respect to the fibre axis (microfibril angle). By changing the laser polarization direction in 3° steps, the dependency between cellulose and laser orientation direction was investigated. Orientation-dependent changes of band height ratios and spectra were described by quadratic regression and partial least squares regressions, respectively. Using the models and regressions with high coefficients of determination (R2 > 0.99), microfibril orientation was predicted in the S1 and S2 layers distinguished by the Raman imaging approach in cross-sections of spruce normal, opposite, and compression wood. The determined microfibril angle (MFA) in the different S2 layers ranged from 0° to 49.9° and was in agreement with X-ray diffraction determination. With the prerequisite of geometric sample and laser alignment, exact MFA prediction can complete the picture of the chemical cell wall design gained by the Raman imaging approach at the micron level in all plant tissues. PMID:20007198
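The angle-prediction step can be sketched as a quadratic regression of a band-height ratio on polarization angle, inverted over 0–90°; the calibration curve and noise level below are assumed for illustration, not the paper's data:

```python
import numpy as np

# Quadratic regression of an orientation-dependent Raman band-height ratio
# against angle, sampled in 3-degree steps (synthetic calibration data).
angles = np.arange(0.0, 90.1, 3.0)
true_ratio = 2.0 - 0.0002 * angles ** 2          # assumed smooth dependence
ratios = true_ratio + np.random.default_rng(4).normal(0, 0.005, angles.size)

coeffs = np.polyfit(angles, ratios, 2)           # fitted quadratic model

def predict_mfa(measured_ratio):
    """Invert the fitted quadratic over 0-90 degrees by a dense lookup."""
    grid = np.linspace(0.0, 90.0, 9001)
    fitted = np.polyval(coeffs, grid)
    return float(grid[np.argmin(np.abs(fitted - measured_ratio))])

mfa = predict_mfa(2.0 - 0.0002 * 30.0 ** 2)      # ratio generated at 30 degrees
print(round(mfa, 1))
```

With a calibration of this quality (R² > 0.99 in the paper), a measured band ratio maps back to the microfibril angle to within a degree or two.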
Pistonesi, Marcelo F; Di Nezio, María S; Centurión, María E; Lista, Adriana G; Fragoso, Wallace D; Pontes, Márcio J C; Araújo, Mário C U; Band, Beatriz S Fernández
2010-12-15
In this study, a novel, simple, and efficient spectrofluorimetric method to determine directly and simultaneously five phenolic compounds (hydroquinone, resorcinol, phenol, m-cresol and p-cresol) in air samples is presented. For this purpose, variable selection by the successive projections algorithm (SPA) is used in order to obtain simple multiple linear regression (MLR) models based on a small subset of wavelengths. For comparison, partial least square (PLS) regression is also employed on the full spectrum. The concentrations of the calibration matrix ranged from 0.02 to 0.2 mg L(-1) for hydroquinone, from 0.05 to 0.6 mg L(-1) for resorcinol, and from 0.05 to 0.4 mg L(-1) for phenol, m-cresol and p-cresol; incidentally, such ranges are in accordance with the Argentinean environmental legislation. To verify the accuracy of the proposed method, a recovery study on real air samples from a smoking environment was carried out with satisfactory results (94-104%). The advantage of the proposed method is that it requires only spectrofluorimetric measurements of samples and chemometric modeling for simultaneous determination of five phenols. With it, air is simply sampled and no sample pre-treatment is needed (i.e., separation steps and derivatization reagents are avoided), which saves considerable time. Copyright © 2010 Elsevier B.V. All rights reserved.
Deriving Hounsfield units using grey levels in cone beam computed tomography
Mah, P; Reeves, T E; McDavid, W D
2010-01-01
Objectives An in vitro study was performed to investigate the relationship between grey levels in dental cone beam CT (CBCT) and Hounsfield units (HU) in CBCT scanners. Methods A phantom containing 8 different materials of known composition and density was imaged with 11 different dental CBCT scanners and 2 medical CT scanners. The phantom was scanned under three conditions: phantom alone and phantom in a small and large water container. The reconstructed data were exported in Digital Imaging and Communications in Medicine (DICOM) format and analysed with On Demand 3D® by Cybermed, Seoul, Korea. The relationship between grey levels and linear attenuation coefficients was investigated. Results It was demonstrated that a linear relationship between the grey levels and the attenuation coefficients of each of the materials exists at some “effective” energy. From the linear regression equation of the reference materials, attenuation coefficients were obtained for each of the materials and CT numbers in HU were derived using the standard equation. Conclusions HU can be derived from the grey levels in dental CBCT scanners using linear attenuation coefficients as an intermediate step. PMID:20729181
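The two-step derivation in the Conclusions (a linear fit of grey levels against known attenuation coefficients, then the standard CT-number equation HU = 1000·(μ − μ_water)/μ_water) can be sketched as follows. The reference values below are illustrative placeholders, not the phantom materials or scanner readings from the study:

```python
import numpy as np

# Hypothetical reference materials: known linear attenuation coefficients
# (cm^-1, at some assumed "effective" energy) and measured CBCT grey levels.
mu = np.array([0.00, 0.19, 0.21, 0.26, 0.48])      # air .. bone-like
grey = np.array([-980.0, 10.0, 95.0, 310.0, 1405.0])

# Step 1: linear regression grey = a*mu + b over the reference materials.
a, b = np.polyfit(mu, grey, 1)

# Step 2: invert the fit to estimate an attenuation coefficient for any voxel,
# then apply the standard CT-number definition.
mu_water = 0.19
def grey_to_hu(g):
    mu_est = (g - b) / a
    return 1000.0 * (mu_est - mu_water) / mu_water

print("water-like grey level:", round(grey_to_hu(10.0)), "HU")
print("bone-like grey level :", round(grey_to_hu(1405.0)), "HU")
```

The fit absorbs the scanner-specific offset and scale of the grey levels, which is why the same recipe can be repeated per scanner and scan condition.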
Automated novelty detection in the WISE survey with one-class support vector machines
NASA Astrophysics Data System (ADS)
Solarz, A.; Bilicki, M.; Gromadzki, M.; Pollo, A.; Durkalec, A.; Wypych, M.
2017-10-01
Wide-angle photometric surveys of previously uncharted sky areas or wavelength regimes will always bring in unexpected sources - novelties or even anomalies - whose existence and properties cannot be easily predicted from earlier observations. Such objects can be efficiently located with novelty detection algorithms. Here we present an application of such a method, called one-class support vector machines (OCSVM), to search for anomalous patterns among sources preselected from the mid-infrared AllWISE catalogue covering the whole sky. To create a model of expected data we train the algorithm on a set of objects with spectroscopic identifications from the SDSS DR13 database, also present in AllWISE. The OCSVM method detects as anomalous those sources whose patterns - WISE photometric measurements in this case - are inconsistent with the model. Among the detected anomalies we find artefacts, such as objects with spurious photometry due to blending, but more importantly also real sources of genuine astrophysical interest. Among the latter, OCSVM has identified a sample of heavily reddened AGN/quasar candidates distributed uniformly over the sky and in a large part absent from other WISE-based AGN catalogues. It also allowed us to find a specific group of sources of mixed types, mostly stars and compact galaxies. By combining the semi-supervised OCSVM algorithm with standard classification methods it will be possible to improve the latter by accounting for sources which are not present in the training sample, but are otherwise well-represented in the target set. Anomaly detection adds flexibility to automated source separation procedures and helps verify the reliability and representativeness of the training samples. It should thus be considered as an essential step in supervised classification schemes to ensure completeness and purity of produced catalogues.
The catalogues of outlier data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A39
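The OCSVM workflow described above (train on a spectroscopically confirmed sample, then flag target-set sources whose photometric patterns are inconsistent with the model) can be sketched with scikit-learn's OneClassSVM. The arrays here are synthetic stand-ins for WISE magnitudes, not real survey photometry:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Training set: 4 "magnitudes" per source for spectroscopically identified
# objects, drawn from two loose clusters of known source types.
train = np.vstack([
    rng.normal([14.0, 13.5, 13.0, 9.0], 0.3, size=(500, 4)),
    rng.normal([16.0, 15.8, 15.5, 11.0], 0.3, size=(500, 4)),
])

# Target set: mostly similar sources plus a few far-off "anomalies".
target = np.vstack([
    rng.normal([14.0, 13.5, 13.0, 9.0], 0.3, size=(200, 4)),
    rng.normal([20.0, 10.0, 18.0, 3.0], 0.3, size=(5, 4)),
])

scaler = StandardScaler().fit(train)
# nu bounds the fraction of training points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(scaler.transform(train))

pred = ocsvm.predict(scaler.transform(target))   # +1 inlier, -1 anomaly
print("flagged anomalies:", int((pred == -1).sum()))
```

In practice `nu` would be tuned against known contaminants, and the flagged sample inspected for both artefacts and genuinely interesting sources, as the abstract describes.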
Neural correlates of gait variability in people with multiple sclerosis with fall history.
Kalron, Alon; Allali, Gilles; Achiron, Anat
2018-05-28
Investigate the association between step time variability and related brain structures in accordance with fall status in people with multiple sclerosis (PwMS). The study included 225 PwMS. A whole-brain MRI was performed by a high-resolution 3.0-Tesla MR scanner in addition to volumetric analysis based on 3D T1-weighted images using the FreeSurfer image analysis suite. Step time variability was measured by an electronic walkway. Participants were defined as "fallers" (at least two falls during the previous year) and "non-fallers". One hundred and five PwMS were defined as fallers and had a greater step time variability compared to non-fallers (5.6% (S.D.=3.4) vs. 3.4% (S.D.=1.5); p=0.001). MS fallers exhibited a reduced volume in the left caudate and both cerebellum hemispheres compared to non-fallers. Using linear regression analysis, no association was found between gait variability and related brain structures in the total cohort and the non-faller group. However, the analysis found an association of the left hippocampus and left putamen volumes with step time variability in the faller group (p=0.031 and 0.048, respectively), controlling for total cranial volume, walking speed, disability, age and gender. Nevertheless, according to the hierarchical regression model, the contribution of these brain measures to predicting gait variability was relatively small compared to walking speed. An association between low left hippocampal and putamen volumes and step time variability was found in PwMS with a history of falls, suggesting brain structural characteristics may be related to falls and increased gait variability in PwMS. This article is protected by copyright. All rights reserved.
Structured learning for robotic surgery utilizing a proficiency score: a pilot study.
Hung, Andrew J; Bottyan, Thomas; Clifford, Thomas G; Serang, Sarfaraz; Nakhoda, Zein K; Shah, Swar H; Yokoi, Hana; Aron, Monish; Gill, Inderbir S
2017-01-01
We evaluated feasibility and benefit of implementing structured learning in a robotics program. Furthermore, we assessed validity of a proficiency assessment tool for stepwise graduation. Teaching cases included robotic radical prostatectomy and partial nephrectomy. Procedure steps were categorized: basic, intermediate, and advanced. An assessment tool ["proficiency score" (PS)] was developed to evaluate ability to safely and autonomously complete a step. Graduation required a passing PS (PS ≥ 3) on three consecutive attempts. PS and the validated global evaluative assessment of robotic skills (GEARS) were evaluated for completed steps. Linear regression was utilized to determine the postgraduate year/PS relationship (construct validity). Spearman's rank correlation coefficient measured correlation between PS and GEARS evaluations (concurrent validity). Intraclass correlation (ICC) evaluated PS agreement between evaluator classes. Twenty-one robotic trainees participated in the pilot program, completing a median of 14 (2-69) cases each. Twenty-three study evaluators scored 14 (1-60) cases. Over 4 months, 229/294 (78%) cases were designated "teaching" cases. Residents completed 91% of possible evaluations; faculty completed 78%. Verbal and quantitative feedback received by trainees increased significantly (p = 0.002, p < 0.001, respectively). Average PS increased with PGY (post-graduate year) for basic and intermediate steps (regression slopes: 0.402 (p < 0.0001) and 0.323 (p < 0.0001), respectively) (construct validation). Overall, PS correlated highly with GEARS (ρ = 0.81, p < 0.0001) (concurrent validity). ICC was 0.77 (95% CI 0.61-0.88) for resident evaluations. Structured learning can be implemented in an academic robotic program with high levels of trainee and evaluator participation, encouraging both quantitative and verbal feedback. A proficiency assessment tool developed for step-specific proficiency has construct and concurrent validity.
Inhibitory saccadic dysfunction is associated with cerebellar injury in multiple sclerosis.
Kolbe, Scott C; Kilpatrick, Trevor J; Mitchell, Peter J; White, Owen; Egan, Gary F; Fielding, Joanne
2014-05-01
Cognitive dysfunction is common in patients with multiple sclerosis (MS). Saccadic eye movement paradigms such as antisaccades (AS) can sensitively interrogate cognitive function, in particular, the executive and attentional processes of response selection and inhibition. Although we have previously demonstrated significant deficits in the generation of AS in MS patients, the neuropathological changes underlying these deficits were not elucidated. In this study, 24 patients with relapsing-remitting MS underwent testing using an AS paradigm. Rank correlation and multiple regression analyses were subsequently used to determine whether AS errors in these patients were associated with: (i) neurological and radiological abnormalities, as measured by standard clinical techniques, (ii) cognitive dysfunction, and (iii) regionally specific cerebral white and gray-matter damage. Although AS error rates in MS patients did not correlate with clinical disability (using the Expanded Disability Status Score), T2 lesion load or brain parenchymal fraction, AS error rate did correlate with performance on the Paced Auditory Serial Addition Task and the Symbol Digit Modalities Test, neuropsychological tests commonly used in MS. Further, voxel-wise regression analyses revealed associations between AS errors and reduced fractional anisotropy throughout most of the cerebellum, and increased mean diffusivity in the cerebellar vermis. Region-wise regression analyses confirmed that AS errors also correlated with gray-matter atrophy in the cerebellum right VI subregion. These results support the use of the AS paradigm as a marker for cognitive dysfunction in MS and implicate structural and microstructural changes to the cerebellum as a contributing mechanism for AS deficits in these patients. Copyright © 2013 Wiley Periodicals, Inc.
Literature search for research planning and identification of research problem
Grewal, Anju; Kataria, Hanish; Dhawan, Ira
2016-01-01
Literature search is a key step in performing good authentic research. It helps in formulating a research question and planning the study. The available published data are enormous; therefore, choosing the appropriate articles relevant to your study in question is an art. It can be time-consuming, tiring and can lead to disinterest or even abandonment of search in between if not carried out in a step-wise manner. Various databases are available for performing literature search. This article primarily stresses on how to formulate a research question, the various types and sources for literature search, which will help make your search specific and time-saving. PMID:27729689
Digital Image Restoration under a Regression Model: The Unconstrained, Linear Equality and Inequality Constrained Approaches (Report 520)
Mascarenhas, Nelson Delfino d'Avila
1974-01-01
…a two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods. By trans…
Intense ionizing radiation from laser-induced processes in ultra-dense deuterium D(-1)
NASA Astrophysics Data System (ADS)
Olofson, Frans; Holmlid, Leif
2014-09-01
Nuclear fusion in ultra-dense deuterium D(-1) has been reported from our laboratory in a few studies using pulsed lasers with energy < 0.2 J. The direct observation of massive particles with energy 1-20 MeV u-1 is conclusive proof for fusion processes, either as a cause or as a result. Continuing the step-wise approach necessary for untangling a complex problem, the high-energy photons from the laser-induced plasma are now studied. The focus is here on the photoelectrons formed. The photons penetrating a copper foil have energy > 80 keV. The total charge created is up to 2 μC or 1 × 1013 photoelectrons per laser shot at 0.13 J pulse energy, assuming isotropic photon emission. The variation of the photoelectron current with laser intensity is faster than linear for some systems, which indicates rapid approach to volume ignition. On a permanent magnet at approximately 1 T, a laser pulse-energy threshold exists for the laser-induced processes probably due to the floating of most clusters of D(-1) in the magnetic field. This Meissner effect was reported previously.
NASA Astrophysics Data System (ADS)
Hamylton, S.
2011-12-01
This paper demonstrates a practical step-wise method for modelling wave energy at the landscape scale using GIS and remote sensing techniques at Alphonse Atoll, Seychelles. Inputs are a map of the benthic surface (seabed) cover, a detailed bathymetric model derived from remotely sensed Compact Airborne Spectrographic Imager (CASI) data and information on regional wave heights. Incident energy at the reef crest around the atoll perimeter is calculated as a function of its deepwater value with wave parameters (significant wave height and period) hindcast in the offshore zone using the WaveWatch III application developed by the National Oceanographic and Atmospheric Administration. Energy modifications are calculated at constant intervals as waves transform over the forereef platform along a series of reef profile transects running into the atoll centre. Factors for shoaling, refraction and frictional attenuation are calculated at each interval for given changes in bathymetry and benthic coverage type and a nominal reduction in absolute energy is incorporated at the reef crest to account for wave breaking. Overall energy estimates are derived for a period of 5 years and related to spatial patterning of reef flat surface cover (sand and seagrass patches).
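The shoaling step of such a wave-transformation model can be illustrated with linear wave theory: solve the dispersion relation for the wavenumber, form the group velocity, and scale the offshore height by Ks = sqrt(Cg0/Cg). This is a minimal sketch of that one factor only (no refraction, friction, or breaking terms), with illustrative inputs rather than the Alphonse Atoll data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h, iters=100):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k
    by damped fixed-point iteration (damping aids shallow-water convergence)."""
    omega = 2 * math.pi / T
    k = omega ** 2 / G  # deep-water first guess
    for _ in range(iters):
        k = 0.5 * (k + omega ** 2 / (G * math.tanh(k * h)))
    return k

def group_velocity(T, h):
    k = wavenumber(T, h)
    c = (2 * math.pi / T) / k  # phase speed
    n = 0.5 * (1 + 2 * k * h / math.sinh(2 * k * h))
    return n * c

def shoaled_height(H0, T, h):
    """Shoaling only: H = Ks * H0 with Ks = sqrt(Cg0 / Cg)."""
    cg0 = G * T / (4 * math.pi)  # deep-water group velocity
    ks = math.sqrt(cg0 / group_velocity(T, h))
    return ks * H0

# Offshore wave H0 = 2 m, T = 10 s transforming onto a 3 m deep reef platform.
print(round(shoaled_height(2.0, 10.0, 3.0), 2), "m")
```

A full profile-transect model, as in the paper, would apply this interval by interval together with refraction and frictional-attenuation factors derived from the bathymetry and benthic cover maps.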
NASA Astrophysics Data System (ADS)
Cho, Inhee; Huh, Keon; Kwak, Rhokyun; Lee, Hyomin; Kim, Sung Jae
2016-11-01
The first direct chronopotentiometric measurement was performed to distinguish the potential difference across the extended space charge (ESC) layer, which forms together with the electrical double layer (EDL) near a perm-selective membrane. From this experimental result, a linear relationship was obtained between the resistance of the ESC and the applied current density. Furthermore, we observed step-wise distributions of relaxation time in the limiting current regime, confirming the existence of an ESC capacitance distinct from that of the EDL. In addition, we proposed an equivalent electrokinetic circuit model inside the ion concentration polarization (ICP) layer under rigorous consideration of the EDL, ESC and electro-convection (EC). In order to elucidate the voltage configuration in the chronopotentiometric measurement, the EC component was treated as a "dependent voltage source" connected in series with the ESC layer. This model successfully described the charging behavior of the ESC layer with or without EC, with each case determining its own relaxation time. Finally, we quantitatively verified these values using the Poisson-Nernst-Planck equations. Therefore, this unified circuit model provides key insight into the ICP system and potential energy-efficient applications.
Microfluidic strategy to investigate dynamics of small blood vessel function
NASA Astrophysics Data System (ADS)
Yasotharan, Sanjesh; Bolz, Steffen-Sebastian; Guenther, Axel
2010-11-01
Resistance arteries (RAs, 30-300 microns in diameter), located within the terminal part of the vascular tree, regulate the laminar perfusion of tissue with blood via the peripheral vascular resistance and hence control the systemic blood pressure. The structure of RAs is adapted to actively controlling flow resistance by dynamically changing their diameter, which depends non-linearly on the temporal variation of the transmural pressure, the perfusion flow rate and spatiotemporal changes in the chemical environment. Increases in systemic blood pressure (hypertension) resulting from pathologic changes in the RA response represent the primary risk factor for cardiovascular diseases. We use a microfluidic strategy to investigate small blood vessels by quantifying structural variations within the arterial wall, the RA outer contour and the diameter over time. First, we document the artery response to vasomotor drugs applied homogeneously at step-wise increasing concentrations. Second, we investigate the response in the presence of well-defined axial and circumferential heterogeneities. Artery perfusion and superfusion are discussed based on microscale PIV measurements of the fluid velocity on both sides of the arterial wall. Structural changes in the arterial wall are quantified using cross-correlation and proper orthogonal decomposition analyses of bright-field micrographs.
Valent, Peter; Akin, Cem; Arock, Michel; Bock, Christoph; George, Tracy I; Galli, Stephen J; Gotlib, Jason; Haferlach, Torsten; Hoermann, Gregor; Hermine, Olivier; Jäger, Ulrich; Kenner, Lukas; Kreipe, Hans; Majeti, Ravindra; Metcalfe, Dean D; Orfao, Alberto; Reiter, Andreas; Sperr, Wolfgang R; Staber, Philipp B; Sotlar, Karl; Schiffer, Charles; Superti-Furga, Giulio; Horny, Hans-Peter
2017-12-01
Cancer evolution is a step-wise non-linear process that may start early in life or later in adulthood, and includes pre-malignant (indolent) and malignant phases. Early somatic changes may not be detectable or are found by chance in apparently healthy individuals. The same lesions may be detected in pre-malignant clonal conditions. In some patients, these lesions may never become relevant clinically, whereas in others, they act together with additional pro-oncogenic hits and thereby contribute to the formation of an overt malignancy. Although some pre-malignant stages of a malignancy have been characterized, no global system to define and to classify these conditions is available. To discuss open issues related to pre-malignant phases of neoplastic disorders, a working conference was organized in Vienna in August 2015. The outcomes of this conference are summarized herein and include a basic proposal for a nomenclature and classification of pre-malignant conditions. This proposal should assist in the communication among patients, physicians and scientists, which is critical as genome sequencing will soon be offered widely for early cancer detection. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.
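For contrast with the functional approach, the classical normalization the authors benchmark against (a linear regression of pollutant concentration on a single grain-size descriptor, with the enrichment factor taken as the ratio of observed to regression-predicted baseline concentration) can be sketched on synthetic data. The fines fractions and concentrations below are invented, not the paper's sediment samples:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sediment samples: percentage of fine fraction (<63 um) and a
# pollutant concentration that scales with fines, plus two enriched samples.
fines = rng.uniform(10, 90, size=40)               # % fine fraction
conc = 0.5 * fines + rng.normal(0, 2, size=40)     # mg/kg, grain-size baseline
conc[:2] += 30.0                                   # simulated anthropogenic input

# Classical normalization: regress concentration on the grain-size descriptor,
# then take the ratio of observed to regression-predicted concentration.
a, b = np.polyfit(fines, conc, 1)
baseline = a * fines + b
ef = conc / baseline                               # enrichment factor

print("EF of enriched samples:", np.round(ef[:2], 2))
print("median EF of the rest :", round(float(np.median(ef[2:])), 2))
```

Functional linear regression generalizes the scalar `fines` predictor here to the whole grain-size curve, which is the first advantage listed in the abstract.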
Spatial prediction of soil texture in region Centre (France) from summary data
NASA Astrophysics Data System (ADS)
Dobarco, Mercedes Roman; Saby, Nicolas; Paroissien, Jean-Baptiste; Orton, Tom G.
2015-04-01
Soil texture is a key controlling factor of important soil functions like water and nutrient holding capacity, retention of pollutants, drainage, soil biodiversity, and C cycling. High resolution soil texture maps enhance our understanding of the spatial distribution of soil properties and provide valuable information for decision making and crop management, environmental protection, and hydrological planning. We predicted the soil texture of agricultural topsoils in the Region Centre (France) combining regression and area-to-point kriging. Soil texture data were collected from the French soil-test database (BDAT), which is populated with soil analyses performed at farmers' request. To protect the anonymity of the farms, the data were treated by commune. In a first step, summary statistics of environmental covariates by commune were used to develop prediction models with Cubist, boosted regression trees, and random forests. In a second step, the residuals of each individual observation were summarized by commune and kriged following the method developed by Orton et al. (2012). This approach allowed us to include non-linear relationships among covariates and soil texture while accounting for the uncertainty in areal means in the area-to-point kriging step. Independent validation of the models was done using data from the systematic soil monitoring network of French soils. Future work will compare the performance of these models with a non-stationary variance geostatistical model using the most important covariates and summary statistics of texture data. The results will inform on whether the latter, statistically more challenging approach significantly improves texture predictions or whether the simpler area-to-point regression kriging can offer satisfactory results.
The application of area-to-point regression kriging at national level using BDAT data has the potential to improve soil texture predictions for agricultural topsoils, especially when combined with existing maps (i.e., model ensemble).
Outpatient Infection Prevention: A Practical Primer
Steinkuller, Fozia; Harris, Kristofer; Vigil, Karen J; Ostrosky-Zeichner, Luis
2018-01-01
Abstract As more patients seek care in the outpatient setting, the opportunities for health care–acquired infections and associated outbreaks will increase. Without uptake of core infection prevention and control strategies through formal initiation of infection prevention programs, outbreaks and patient safety issues will surface. This review provides a step-wise approach for implementing an outpatient infection control program, highlighting some of the common pitfalls and high-priority areas. PMID:29740593
Who Will Win?: Predicting the Presidential Election Using Linear Regression
ERIC Educational Resources Information Center
Lamb, John H.
2007-01-01
This article outlines a linear regression activity that engages learners, uses technology, and fosters cooperation. Students generated least-squares linear regression equations using TI-83 Plus graphing calculators, Microsoft Excel, and paper-and-pencil calculations using derived normal equations to predict the 2004 presidential election.…
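A minimal version of the paper-and-pencil calculation (simple linear regression via the derived normal equations, cross-checked against a library fit) might look like this; the vote shares are invented classroom-style data, not the article's:

```python
import numpy as np

# Hypothetical classroom data: a state's vote share for one party in two
# successive elections; fit y = m*x + b as students would by hand.
x = np.array([45.0, 52.0, 38.0, 60.0, 49.0])   # previous election (%)
y = np.array([47.0, 50.0, 40.0, 58.0, 51.0])   # following election (%)

n = len(x)
# Normal equations for simple linear regression:
#   m = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  b = (Sy - m*Sx) / n
Sx, Sy, Sxy, Sxx = x.sum(), y.sum(), (x * y).sum(), (x * x).sum()
m = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
b = (Sy - m * Sx) / n

print(f"y = {m:.3f}x + {b:.3f}")
print("predicted next share at x = 50:", round(m * 50 + b, 1))

# Cross-check against numpy's least-squares fit (as a calculator would give).
assert np.allclose([m, b], np.polyfit(x, y, 1))
```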
Akita, Hidetaka; Kudo, Asako; Minoura, Arisa; Yamaguti, Masaya; Khalil, Ikramy A; Moriguchi, Rumiko; Masuda, Tomoya; Danev, Radostin; Nagayama, Kuniaki; Kogure, Kentaro; Harashima, Hideyoshi
2009-05-01
Efficient targeting of DNA to the nucleus is a prerequisite for effective gene therapy. The gene-delivery vehicle must penetrate through the plasma membrane and the DNA-impermeable double-membraned nuclear envelope, and deposit its DNA cargo in a form ready for transcription. Here we introduce a concept for overcoming intracellular membrane barriers that involves step-wise membrane fusion. To achieve this, a nanotechnology was developed that creates a multi-layered nanoparticle, which we refer to as a Tetra-lamellar Multi-functional Envelope-type Nano Device (T-MEND). The critical structural elements of the T-MEND are a DNA-polycation condensed core coated with two nuclear-membrane-fusogenic inner envelopes and two endosome-fusogenic outer envelopes, which are shed in a stepwise fashion. A double-lamellar structure is required for nuclear delivery via stepwise fusion through the double-layered nuclear membrane. Intracellular membrane fusion to endosomes and nuclear membranes was verified by spectral imaging of fluorescence resonance energy transfer (FRET) between donor and acceptor fluorophores that had been dually labeled on the liposome surface. Coating the core with the minimum number of nucleus-fusogenic lipid envelopes (i.e., two) is essential to facilitate transcription. As a result, the T-MEND achieves dramatic levels of transgene expression in non-dividing cells.
Francis, S T; Bowtell, R; Gowland, P A
2008-02-01
This work describes a new compartmental model with step-wise temporal analysis for a Look-Locker (LL)-flow-sensitive alternating inversion-recovery (FAIR) sequence, which combines the FAIR arterial spin labeling (ASL) scheme with a LL echo planar imaging (EPI) measurement, using a multireadout EPI sequence for simultaneous perfusion and T*(2) measurements. The new model highlights the importance of accounting for the transit time of blood through the arteriolar compartment, delta, in the quantification of perfusion. The signal expected is calculated in a step-wise manner to avoid discontinuities between different compartments. The optimal LL-FAIR pulse sequence timings for the measurement of perfusion with high signal-to-noise ratio (SNR), and high temporal resolution at 1.5, 3, and 7T are presented. LL-FAIR is shown to provide better SNR per unit time compared to standard FAIR. The sequence has been used experimentally for simultaneous monitoring of perfusion, transit time, and T*(2) changes in response to a visual stimulus in four subjects. It was found that perfusion increased by 83 +/- 4% on brain activation from a resting state value of 94 +/- 13 ml/100 g/min, while T*(2) increased by 3.5 +/- 0.5%. (c) 2008 Wiley-Liss, Inc.
Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M
2018-04-22
We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.
Goldstein, Naomi E. S.; Kemp, Kathleen A.; Leff, Stephen S.; Lochman, John E.
2014-01-01
The use of manual-based interventions tends to improve client outcomes and promote replicability. With an increasingly strong link between funding and the use of empirically supported prevention and intervention programs, manual development and adaptation have become research priorities. As a result, researchers and scholars have generated guidelines for developing manuals from scratch, but there are no extant guidelines for adapting empirically supported, manualized prevention and intervention programs for use with new populations. Thus, this article proposes step-by-step guidelines for the manual adaptation process. It also describes two adaptations of an extensively researched anger management intervention to exemplify how an empirically supported program was systematically and efficiently adapted to achieve similar outcomes with vastly different populations in unique settings. PMID:25110403
Fries, M; Montavon, S; Spadavecchia, C; Levionnois, O L
2017-03-01
Methods of evaluating locomotor activity can be useful in efforts to quantify behavioural activity in horses objectively. To evaluate whether an accelerometric device would be adequate to quantify locomotor activity and step frequency in horses, and to distinguish between different levels of activity and different gaits. Observational study in an experimental setting. Dual-mode (activity and step count) piezo-electric accelerometric devices were placed at each of 4 locations (head, withers, forelimb and hindlimb) in each of 6 horses performing different controlled activities including grazing, walking at different speeds, trotting and cantering. Both the activity count and step count were recorded and compared across the various activities. Statistical analyses included analysis of variance for repeated measures, receiver operating characteristic curves, Bland-Altman analysis and linear regression. The accelerometric device was able to quantify locomotor activity at each of the 4 locations investigated and to distinguish between gaits and speeds. The activity count recorded by the accelerometer placed on the hindlimb was the most accurate, displaying a clear discrimination between the different levels of activity and a linear correlation to speed. The accelerometer placed on the head was the only one to distinguish specifically grazing behaviour from standing. The accelerometer placed on the withers was unable to differentiate different gaits and activity levels. The step count function measured at the hindlimb was reliable but the count was doubled at the walk. The dual-mode accelerometric device was sufficiently accurate to quantify and compare locomotor activity in horses moving at different speeds and gaits. Positioning the device on the hindlimb allowed for the most accurate results. The step count function can be useful but must be manually corrected, especially at the walk. © 2016 EVJ Ltd.
Nonlinear laminate analysis for metal matrix fiber composites
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A nonlinear laminate analysis is described for predicting the mechanical behavior (stress-strain relationships) of angleplied laminates in which the matrix is strained nonlinearly by both the residual stress and the mechanical load and in which additional nonlinearities are induced due to progressive fiber fractures and ply relative rotations. The nonlinear laminate analysis (NLA) is based on linear composite mechanics and a piecewise-linear laminate analysis to handle the nonlinear responses. Results obtained by using this nonlinear analysis on boron fiber/aluminum matrix angleplied laminates agree well with experimental data. The results shown illustrate the in situ ply stress-strain behavior and synergistic strength enhancement.
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
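The outputs listed for the GLMR program are the standard ingredients of ordinary least squares. A minimal sketch of that computation (invented data, not the program's code) using plain NumPy:

```python
import numpy as np

# OLS on synthetic data: coefficient estimates, residuals for plotting,
# R^2, and an ANOVA-style F statistic, as a GLM-regression program reports.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])   # intercept + one predictor
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.5, size=50)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # coefficient estimates
resid = y - X @ beta                           # residuals for plotting
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
df_model, df_resid = X.shape[1] - 1, len(y) - X.shape[1]
f_stat = ((ss_tot - ss_res) / df_model) / (ss_res / df_resid)  # ANOVA F
```

Confidence intervals for the coefficients and for predicted values follow from the same quantities plus the t distribution.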
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M
2016-05-01
Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. Review of a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimate, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.
Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math.
Raizada, Rajeev D S; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D; Ansari, Daniel; Kuhl, Patricia K
2010-05-15
A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain-behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps such as feature selection of voxels, pre-selecting activated regions, or principal components analysis. We show here that strong brain-behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain-behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. Copyright (c) 2010 Elsevier Inc. All rights reserved.
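The core idea, plain linear regression used as a whole-brain classifier, can be sketched as follows. This is an illustration on simulated patterns, not the authors' pipeline: each row of X stands for one flattened brain volume, Y holds one-hot condition labels, and a new volume is classified by the argmax of its regression output.

```python
import numpy as np

# Two simulated "conditions", each with its own mean multi-voxel pattern.
rng = np.random.default_rng(1)
n_voxels, n_per_class = 200, 30
mu_a, mu_b = rng.normal(size=n_voxels), rng.normal(size=n_voxels)
X = np.vstack([mu_a + 0.5 * rng.normal(size=(n_per_class, n_voxels)),
               mu_b + 0.5 * rng.normal(size=(n_per_class, n_voxels))])
Y = np.repeat(np.eye(2), n_per_class, axis=0)   # one-hot condition labels

# Least-squares fit of labels on whole-brain patterns; no voxel selection.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = (X @ W).argmax(axis=1)
accuracy = (pred == Y.argmax(axis=1)).mean()
```

In practice accuracy would of course be assessed on held-out volumes rather than the training set shown here.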
Jesus, Gilmar Mercês de; Assis, Maria Alice Altenburg de; Kupek, Emil; Dias, Lizziane Andrade
2017-01-01
The quality control of data entry in computerized questionnaires is an important step in the validation of new instruments. The study assessed the consistency of recorded weight and height on the Food Intake and Physical Activity of School Children (Web-CAAFE) between repeated measures and against directly measured data. Students from the 2nd to the 5th grade (n = 390) had their weight and height directly measured and then filled out the Web-CAAFE. A subsample (n = 92) filled out the Web-CAAFE twice, three hours apart. The analysis included a hierarchical mixed linear regression model to evaluate bias, and the intraclass correlation coefficient (ICC) to assess consistency. Univariate linear regression assessed the effect of gender, reading/writing performance, and computer/internet use and possession on residuals of fixed and random effects. The Web-CAAFE showed high values of ICC between repeated measures (body weight = 0.996, height = 0.937, body mass index - BMI = 0.972), and against the directly measured values (body weight = 0.962, height = 0.882, BMI = 0.828). The difference between means of body weight, height, and BMI directly measured and recorded was 208 g, -2 mm, and 0.238 kg/m², respectively, indicating slight BMI underestimation due to underestimation of weight and overestimation of height. This trend was related to body weight and age. Height and weight data entered in the Web-CAAFE by children were highly correlated with direct measurements and with the repeated entry. The bias found was similar to validation studies of self-reported weight and height in comparison to direct measurements.
NASA Astrophysics Data System (ADS)
Harpold, A. A.; Walter, M. T.
2009-12-01
The Neversink River Watershed (NRW) originates at the highest point in the Catskill Mountains and is sensitive to changing patterns in acidic deposition, precipitation, and air temperature. Despite reductions in fossil fuel emission since the Clean Air Act, past acidic deposition has accelerated the leaching of cations from the soil and reduced the stores of base cations necessary for buffering stream acidity. The goal of this study was to investigate connections between different watershed ‘features’ and the apparently complex spatial patterns of stream buffering chemistry (specifically, acid neutralizing capacity ANC and Ca concentrations) and aquatic biota (macroinvertebrate and fish populations). The ten nested NRW watersheds (2.0 km² to 176.0 km²) have relatively homogeneous bedrock geology, forested cover, and soil series; therefore, we hypothesized that differing distributions of hydrological flowpaths between the watersheds control the variability in stream buffering chemistry and aquatic biota. However, because the flowpath distributions are not directly measurable, this study used step-wise linear regression to develop relationships between watershed ‘features’ and buffering chemistry. The regression results showed that the mean ratio of precipitation to stream runoff (or runoff ratio) from twenty non-winter storm events explained more than 81% of the variability in mean summer ANC and Ca concentrations. The results also suggested that steeper (higher mean slope) more channelized watersheds (larger drainage density) are more susceptible to stream acidity and negative impacts on biota. A simple linear relationship (using no discharge or water chemistry measurements) was able to explain buffering chemistry and aquatic biota populations in 17 additional NRW watersheds (0.3 km² to 160.0 km²), including 60-80% of the variability in macroinvertebrate populations (EPT richness and BAP) and 50-60% of the variability in fish density and species richness.
These results have several important implications for understanding the effects of climate change on buffering chemistry and aquatic biota in this well-studied watershed. First, the results demonstrate that geomorphological and hydrogeological ‘features’ control the spatial variability of stream buffering chemistry, suggesting that acidification ‘hot-spots’ could be predicted a priori. Second, the connection between event-scale processes (runoff ratio) and average stream chemistry implies that changing precipitation patterns in the Catskills may have uneven effects on long-term buffering chemistry between ‘flashy’ and ‘damped’ watersheds. Specifically, an increasing trend in precipitation in the last 25 years in the Catskill Mountains makes it difficult to compare base cation recovery across NRW streams, even if the concentrations are normalized by discharge. The results of this study could improve the modeling of base cation recovery in surface waters in other mountainous Northeastern U.S. watersheds with future reductions in acidic deposition and differing climate scenarios.
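The step-wise selection used here can be sketched in a few lines. The following is a minimal forward-selection illustration on invented data; the feature names are hypothetical stand-ins for the watershed ‘features’, and the greedy R²-improvement rule is the simplest variant of step-wise regression:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
features = {"runoff_ratio": rng.normal(size=n),
            "mean_slope": rng.normal(size=n),
            "drainage_density": rng.normal(size=n)}
# Synthetic ANC response driven by two of the three candidate features.
anc = 3.0 * features["runoff_ratio"] - 1.0 * features["mean_slope"] \
      + 0.2 * rng.normal(size=n)

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

selected, current = [], 0.0
while True:
    # Try adding each remaining feature; keep the one with the best R^2.
    best = max((r2(np.column_stack([np.ones(n)] +
                   [features[f] for f in selected + [cand]]), anc), cand)
               for cand in features if cand not in selected)
    if best[0] - current < 0.01:     # stop when the improvement is negligible
        break
    current, selected = best[0], selected + [best[1]]
    if len(selected) == len(features):
        break
```

Real step-wise procedures typically use F-tests or information criteria rather than a raw R² threshold, but the greedy structure is the same.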
Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.
2015-01-01
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
Gullo, Charles A
2016-01-01
Biomedical programs have a potential treasure trove of data they can mine to assist admissions committees in identification of students who are likely to do well and help educational committees in the identification of students who are likely to do poorly on standardized national exams and who may need remediation. In this article, we provide a step-by-step approach that schools can utilize to generate data that are useful when predicting the future performance of current students in any given program. We discuss the use of linear regression analysis as the means of generating that data and highlight some of the limitations. Finally, we lament that these institution-specific data sets are not being combined and fully utilized at the national level, where they could greatly assist programs at large.
Watkins, Stephanie; Jonsson-Funk, Michele; Brookhart, M Alan; Rosenberg, Steven A; O'Shea, T Michael; Daniels, Julie
2014-05-01
Children born at very low birth weight (VLBW) are at an increased risk of delayed development of motor skills. Physical and occupational therapy services may reduce this risk. Among VLBW children, we evaluated whether receipt of physical or occupational therapy services between 9 months and 2 years of age is associated with improved preschool age motor ability. Using data from the Early Childhood Longitudinal Study Birth Cohort, we estimated the association between receipt of therapy and the following preschool motor milestones: skipping eight consecutive steps, hopping five times, standing on one leg for 10 seconds, walking backwards six steps on a line, and jumping distance. We used propensity score methods to adjust for differences in baseline characteristics between children who did and did not receive physical or occupational therapy, since children receiving therapy may be at higher risk of impairment. We applied propensity score weights and modeled the estimated effect of therapy on the distance that the child jumped using linear regression. We modeled all other end points using logistic regression. Treated VLBW children were 1.70 times as likely to skip eight steps (RR 1.70, 95% CI 0.84, 3.44) compared to the untreated group and 30% more likely to walk six steps backwards (RR 1.30, 95% CI 0.63, 2.71), although these differences were not statistically significant. We found little effect of therapy on other endpoints. Providing therapy to VLBW children during early childhood may improve select preschool motor skills involving complex motor planning.
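The design here, inverse-probability-of-treatment weighting followed by a weighted regression for a continuous end point, can be sketched on simulated data. This is an illustration, not the study's code: the "jump distance" outcome, the confounding "risk" variable, and all effect sizes are invented, and the true propensity score is used in place of an estimated one for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
risk = rng.normal(size=n)                         # baseline impairment risk
p_treat = 1.0 / (1.0 + np.exp(-(risk - 0.5)))     # higher risk -> more therapy
treat = (rng.uniform(size=n) < p_treat).astype(float)
jump = 50.0 - 5.0 * risk + 3.0 * treat + rng.normal(scale=2.0, size=n)

# Naive comparison is confounded: treated children are higher-risk.
naive_diff = jump[treat == 1].mean() - jump[treat == 0].mean()

# IPT weights; in practice the propensity would be estimated from covariates.
ps = p_treat
w = treat / ps + (1.0 - treat) / (1.0 - ps)

# Weighted linear regression of outcome on treatment (rows scaled by sqrt(w)).
X = np.column_stack([np.ones(n), treat])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], jump * sw, rcond=None)
ate = beta[1]    # approximately recovers the simulated effect of 3
```

The weighted estimate is close to the simulated treatment effect, while the naive group difference is pulled far off by confounding.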
Sex differences in knee joint loading: Cross-sectional study in geriatric population.
Ro, Du Hyun; Lee, Dong Yeon; Moon, Giho; Lee, Sahnghoon; Seo, Sang Gyo; Kim, Seong Hwan; Park, In Woong; Lee, Myung Chul
2017-06-01
This study investigated sex differences in knee biomechanics and their determinants in a geriatric population. Age-matched healthy volunteers (42 males and 42 females, average age 65 years) without knee OA were included in the study. Subjects underwent physical examination on their knee and standing full-limb radiography for anthropometric measurements. Linear, kinetic, and kinematic parameters were compared using a three-dimensional, 12-camera motion capture system. Gait parameters were evaluated and determinants for sex difference were evaluated with multiple regression analysis. Females had a higher peak knee adduction moment (KAM) during gait (p = 0.004). Females had relatively wider pelvis and narrower step width (both p < 0.001). However, coronal knee alignment was not significantly different between the sexes. Multiple regression analysis revealed that coronal alignment (b = 0.014, p < 0.001), step width (b = -0.010, p = 0.011), and pelvic width/height ratio (b = 1.703, p = 0.046) were significant determinants of peak KAM. Because coronal alignment was not different between the sexes, narrow step width and high pelvic width/height ratio of female were the main contributors to higher peak KAM in females. Sex differences in knee biomechanics were present in the geriatric population. Increased mechanical loading on the female knee, which was associated with narrow step width and wide pelvis, may play an important role in future development and progression of OA. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:1283-1289, 2017.
Evaluation of trends in wheat yield models
NASA Technical Reports Server (NTRS)
Ferguson, M. C.
1982-01-01
Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piecewise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
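Ordinary ridge regression, the simplest member of the generalized ridge family used above, stabilizes the normal equations under exactly the kind of multicollinearity the study describes. A minimal sketch on invented, nearly collinear predictors (not the study's yield data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)          # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)

def ridge(X, y, k):
    """Solve (X'X + kI) beta = X'y; k > 0 introduces bias but adds stability."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)     # unpenalized: unstable under collinearity
beta_ridge = ridge(X, y, 1.0)   # biased but stable, coefficients near (1, 1)
```

Generalized ridge replaces the single constant k with a separate penalty per principal direction; the idea of trading a little bias for stability is the same.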
Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.
Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef
2017-01-01
This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank 1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results that show that the rank shrinkage step included will reduce the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes (searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding); and (2) that the type of graph and user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.
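The model's central prediction, response time linear in the number of processing steps, amounts to a one-predictor regression. A sketch with invented response times (the step counts and times below are hypothetical, not the experiments' data):

```python
import numpy as np

# Hypothetical step counts per graph/task pair and corresponding RTs (seconds).
steps = np.array([2, 3, 3, 4, 5, 6, 7, 8])
rt = np.array([1.1, 1.6, 1.5, 2.1, 2.4, 3.0, 3.4, 3.9])

# One-parameter MA-P style fit: RT = b0 + w * steps, equal weight per step.
A = np.column_stack([np.ones_like(steps, dtype=float), steps])
(b0, w), *_ = np.linalg.lstsq(A, rt, rcond=None)
pred = A @ np.array([b0, w])
r2 = 1.0 - ((rt - pred) ** 2).sum() / ((rt - rt.mean()) ** 2).sum()
```

A two-parameter version would simply split `steps` into arithmetic and nonarithmetic counts and fit a weight for each.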
Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H
2017-05-10
We described the time trend of acute myocardial infarction (AMI) incidence in Tianjin from 1999 to 2013 using the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
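The CAT test referenced above can be computed directly from period counts. A minimal sketch with invented yearly case counts (not the Tianjin data), using the standard score-based form of the statistic:

```python
import numpy as np

years = np.arange(5)                       # ordinal scores for 5 periods
cases = np.array([20, 25, 32, 38, 45])     # hypothetical AMI cases per period
pop = np.full(5, 10_000)                   # population at risk per period

N, R = pop.sum(), cases.sum()
pbar = R / N                               # pooled incidence proportion
t = years - (years * pop).sum() / N        # centre the scores
T = (t * cases).sum()                      # trend statistic
var_T = pbar * (1 - pbar) * (t**2 * pop).sum()
z = T / np.sqrt(var_T)                     # ~ N(0, 1) under no trend
```

Unlike linear regression on crude rates, this works directly on the binomial counts, which is one source of its extra power.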
Deygout, François; Auburtin, Guy
2015-03-01
Variability in occupational exposure levels to bitumen emissions has been observed during road paving operations. This is due to recurrent field factors impacting the level of exposure experienced by workers during paving. The present study was undertaken in order to quantify the impact of such factors. Pre-identified variables currently encountered in the field were monitored and recorded during paving surveys, which were conducted randomly and covered current applications performed by road crews. Multivariate variance analysis and regressions were then used on computerized field data. The statistical investigations were limited due to the relatively small size of the study (36 observations). Nevertheless, the particular use of the step-wise regression tool enabled the quantification of the impact of several predictors despite the existing collinearity between variables. The two bitumen organic fractions (particulates and volatiles) are associated with different field factors. The process conditions (machinery used and delivery temperature) have a significant impact on the production of airborne particulates and explain up to 44% of variability. This confirms the outcomes described by previous studies. The influence of the production factors is nevertheless limited, and should be complemented by studying factors involving the worker such as work style and the mix of tasks. The residual volatile compounds, being part of the bituminous binder and released during paving operations, control the volatile emissions; 73% of the encountered field variability is explained by the composition of the bitumen batch. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Yang, Ruiqi; Wang, Fei; Zhang, Jialing; Zhu, Chonglei; Fan, Limei
2015-05-19
To establish the reference values of thalamus, caudate nucleus and lenticular nucleus diameters through the fetal thalamic transverse section. A total of 265 fetuses at our hospital were randomly selected from November 2012 to August 2014, and the transverse and length diameters of the thalamus, caudate nucleus and lenticular nucleus were measured. SPSS 19.0 statistical software was used to calculate regression curves relating the fetal diameters to gestational week. P < 0.05 was considered statistically significant. The linear regression equation of fetal thalamic length diameter and gestational week was: Y = 0.051X + 0.201, R = 0.876; of thalamic transverse diameter and gestational week: Y = 0.031X + 0.229, R = 0.817; of caudate nucleus head length diameter and gestational week: Y = 0.033X + 0.101, R = 0.722; of caudate nucleus head transverse diameter and gestational week: Y = 0.025X - 0.046, R = 0.711; of lentiform nucleus length diameter and gestational week: Y = 0.046X + 0.229, R = 0.765; and of lentiform nucleus transverse diameter and gestational week: Y = 0.025X - 0.05, R = 0.772. Ultrasonic measurement of the diameters of the fetal thalamus, caudate nucleus, and lenticular nucleus through the thalamic transverse section is simple and convenient. The measurements increase with gestational week, with a linear regression relationship between them.
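Each reported equation is of the form Y = aX + b with X in gestational weeks. As a quick worked example using the first (intact) equation, Y = 0.051X + 0.201 for thalamic length diameter (output units assumed to be cm, which the abstract does not state):

```python
# Predicted thalamic length diameter from the reported regression equation.
# Units of the output are an assumption; only the equation is from the source.
def thalamic_length(weeks):
    return 0.051 * weeks + 0.201

at_30 = thalamic_length(30)   # prediction at 30 gestational weeks
```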
Local Linear Regression for Data with AR Errors.
Li, Runze; Li, Yan
2009-07-01
In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into the local linear regression. Under the assumption that the error process is an auto-regressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and the profile least squares techniques. We further propose the SCAD penalized profile least squares method to determine the order of the auto-regressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology with an analysis of a real data set.
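The "naive" baseline the paper improves on, local linear regression with a working-independence error structure, fits a kernel-weighted line around each target point. A minimal sketch on simulated data (Gaussian kernel and bandwidth chosen arbitrarily; the paper's AR-error profile least squares step is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.1 * rng.normal(size=200)

def loclin(x0, x, y, h=0.4):
    """Local linear fit at x0: weighted least squares with kernel weights."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)    # (X'WX) beta = X'Wy
    return beta[0]                                # local intercept = fit at x0

fit_at_pi_half = loclin(np.pi / 2, x, y)          # true value sin(pi/2) = 1
```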
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
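Because the major axis minimizes squared orthogonal distances, its direction is the leading eigenvector of the data covariance matrix. A short sketch on simulated data (not from the article):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=300)
y = 2.0 * x + 0.3 * rng.normal(size=300)

# Centre the data, then take the leading eigenvector of the covariance matrix.
data = np.column_stack([x - x.mean(), y - y.mean()])
cov = data.T @ data / (len(x) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
major = eigvecs[:, np.argmax(eigvals)]    # direction of the major axis
slope_ma = major[1] / major[0]            # major-axis slope estimate
```

Note that, unlike OLS, the major-axis slope treats errors in both variables symmetrically, so it differs slightly from the OLS slope whenever both coordinates carry noise.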
Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari
2017-09-01
Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate the simple linear regression. The first one is based on the famous Galton's data set on heredity. We use the lm R command and get coefficient estimates, the residual standard error, R², residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate on multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
Morse Code, Scrabble, and the Alphabet
ERIC Educational Resources Information Center
Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss
2004-01-01
In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…
Likhvantseva, V G; Sokolov, V A; Levanova, O N; Kovelenova, I V
2018-01-01
Prediction of the clinical course of primary open-angle glaucoma (POAG) is one of the main directions in solving the problem of vision loss prevention and stabilization of the pathological process. Simple statistical methods of correlation analysis show the extent of each risk factor's impact, but do not indicate the total impact of these factors in personalized combinations. The relationships between the risk factors is subject to correlation and regression analysis. The regression equation represents the dependence of the mathematical expectation of the resulting sign on the combination of factor signs. To develop a technique for predicting the probability of development and progression of primary open-angle glaucoma based on a personalized combination of risk factors by linear multivariate regression analysis. The study included 66 patients (23 female and 43 male; 132 eyes) with newly diagnosed primary open-angle glaucoma. The control group consisted of 14 patients (8 male and 6 female). Standard ophthalmic examination was supplemented with biochemical study of lacrimal fluid. Concentration of matrix metalloproteinase MMP-2 and MMP-9 in tear fluid in both eyes was determined using 'sandwich' enzyme-linked immunosorbent assay (ELISA) method. The study resulted in the development of regression equations and step-by-step multivariate logistic models that can help calculate the risk of development and progression of POAG. Those models are based on expert evaluation of clinical and instrumental indicators of hydrodynamic disturbances (coefficient of outflow ease - C, volume of intraocular fluid secretion - F, fluctuation of intraocular pressure), as well as personalized morphometric parameters of the retina (central retinal thickness in the macular area) and concentration of MMP-2 and MMP-9 in the tear film. 
The newly developed regression equations are highly informative and can be a reliable tool for studying the influence vector and assessing the pathogenic potential of the independent risk factors in specific personalized combinations.
Cognitive Changes in Presymptomatic Parkinson’s Disease
2006-09-01
Education and the CVLT 5-trial total T-score (r = 0.23, p = 0.048) achieved significance. With the exception of the correlation with Serial Order...International Neuropsychological Society 1996; 2: 383-91. Sprengelmeyer R, Young AW, Mahn K, Schroeder U, Woitalla D, Büttner T, Kuhn W, Przuntek H...R., Grande, L., Womack, K., Riestra, A., Okun, M.S., Bowers, D., & Heilman, K.M. Memory in Parkinson's Disease: A step-wise analysis. Poster
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
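A minimal sketch of multiple linear regression in Python with NumPy, using two invented predictors; the variance inflation factor (VIF) computed at the end is one standard diagnostic for the multicollinearity the article discusses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 80, n)                 # hypothetical predictor 1
bmi = rng.normal(25, 4, n)                   # hypothetical predictor 2
y = 1.0 + 0.5 * age + 2.0 * bmi + rng.normal(0, 3, n)  # outcome

# design matrix with an intercept column; OLS has a unique closed-form solution
X = np.column_stack([np.ones(n), age, bmi])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def vif(X, j):
    """Variance inflation factor: regress column j on the remaining columns.
    A VIF near 1 means little multicollinearity; large values signal trouble."""
    others = np.delete(X, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ coef
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)
```

Because the two predictors here are generated independently, their VIFs sit near 1; correlated predictors would inflate them, and the fitted coefficients would then depend strongly on which covariables are included, exactly as the article stresses.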
NASA Astrophysics Data System (ADS)
Kang, Pilsang; Koo, Changhoi; Roh, Hokyu
2017-11-01
Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
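The distinction between classical and inverse calibration can be sketched on synthetic data as follows; the "reversed inverse regression" the paper proposes is its own contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)                    # known calibration standards
y = 2.0 + 1.5 * x + rng.normal(0, 0.2, 50)    # instrument response with error

# classical calibration: fit y = a + b*x on the standards, then invert
b, a = np.polyfit(x, y, 1)                    # polyfit returns [slope, intercept]
y0 = 2.0 + 1.5 * 4.0                          # idealized new response (true x0 = 4)
x0_classical = (y0 - a) / b

# inverse calibration: regress x directly on the observed responses y
d, c = np.polyfit(y, x, 1)
x0_inverse = c + d * y0
```

Both estimators recover the unknown here because the calibration noise is small; the paper's point is that their statistical properties diverge once the measurement errors on the inputs are properly accounted for.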
Li, He; Lv, Chenlong; Zhang, Ting; Chen, Kewei; Chen, Chuansheng; Gai, Guozhong; Hu, Liangping; Wang, Yongyan; Zhang, Zhanjun
2014-01-01
With a longer life expectancy and an increased prevalence of neurodegenerative diseases, investigations on trajectories of cognitive aging have become exciting and promising. This study aimed to estimate the patterns of age-related cognitive decline and the potential associated factors of cognitive function in community-dwelling residents of Beijing, China. In this study, 1248 older adults aged 52-88 years [including 175 mild cognitive impairment (MCI) subjects] completed a battery of neuropsychological scales. Personal information, including demographic information, medical history, eating habits, lifestyle regularity and leisure activities, was also collected. All cognitive functions exhibited an age-related decline in normal volunteers. Piece-wise linear fitting results suggested that performance on the Auditory Verbal Learning Test remained stable until 58 years of age and continued to decline thereafter. The decline in processing speed and executive function began during the early 50's. Scores on visual-spatial and language tests declined after 66 years of age. The decline stage of the general mental status ranged from 63 to 70 years of age. However, the MCI group did not exhibit an obvious age-related decline in most cognitive tests. Multivariate linear regression analyses indicated that education, gender, leisure activities, diabetes and eating habits were associated with cognitive abilities. These results indicated various trajectories of age-related decline across multiple cognitive domains. We also found different patterns of age-related cognitive decline between MCI and normal elderly participants. These findings could help improve the guidance of cognitive intervention programs and have implications for public policy issues.
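Piece-wise linear fitting of the kind used here can be sketched as a simple breakpoint search; the ages, scores, and change point below are synthetic illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
age = np.linspace(52, 88, 120)
# synthetic score: flat until a change point at 66, declining afterwards
score = np.where(age < 66, 50.0, 50.0 - 0.8 * (age - 66)) + rng.normal(0, 1, 120)

def piecewise_sse(bp):
    """Fit a flat-then-linear model with breakpoint bp; return the SSE."""
    basis = np.column_stack([np.ones_like(age), np.maximum(age - bp, 0)])
    coef, *_ = np.linalg.lstsq(basis, score, rcond=None)
    resid = score - basis @ coef
    return np.sum(resid ** 2)

# grid-search the breakpoint that minimizes the residual sum of squares
candidates = np.arange(56, 82, 0.5)
best_bp = candidates[np.argmin([piecewise_sse(b) for b in candidates])]
```

The estimated breakpoint lands near the true change point, which is the kind of "stable until age X, declining thereafter" summary the abstract reports for each cognitive domain.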
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
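One simple eigenvalue-based estimator of the effective number of tests works from the correlation matrix of the gene-level test statistics. The Galwey-style variant below is shown as an illustrative assumption, not the paper's own estimator.

```python
import numpy as np

def effective_tests(corr):
    """Galwey-style effective number of tests from a correlation matrix:
    Meff = (sum sqrt(lambda))^2 / sum(lambda) over its eigenvalues."""
    lam = np.linalg.eigvalsh(corr)
    lam = np.clip(lam, 0, None)          # guard tiny negative eigenvalues
    return np.sum(np.sqrt(lam)) ** 2 / np.sum(lam)

# independent tests: Meff equals the nominal count
m_indep = effective_tests(np.eye(5))
# fully correlated tests (e.g. complete genic overlap): Meff collapses to 1
m_corr = effective_tests(np.ones((5, 5)))
```

Between these extremes, partial genic overlap among gene set definitions yields an Meff between 1 and the nominal count, which is what makes a Bonferroni correction at the nominal count overly conservative.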
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers, and we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
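The overdispersion that motivates beta-binomial models can be demonstrated with a short simulation (synthetic parameters, not the paper's scenarios): per-subject success probabilities drawn from a beta distribution inflate the variance well beyond what a plain binomial model assumes.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_trials = 500, 20
a_, b_ = 2.0, 2.0                        # beta parameters -> mean p = 0.5
p = rng.beta(a_, b_, n_subjects)         # per-subject success probability
successes = rng.binomial(n_trials, p)    # clustered binomial counts

binom_var = n_trials * 0.5 * 0.5         # variance if a plain binomial held
observed_var = successes.var()           # empirical variance across subjects
```

A model that ignores the clustering treats `binom_var` as the truth and so understates the spread of the data, which is exactly the route to the anticonservative inference the abstract reports for misspecified models.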
Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi
2012-01-01
The objective of the present study was to assess the comparable applicability of orthogonal projections to latent structures (OPLS) statistical modeling vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were first analyzed using simple linear regression and then considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement, as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
Quality of life in breast cancer patients--a quantile regression analysis.
Pourhoseingholi, Mohamad Amin; Safaee, Azadeh; Moghimi-Dehkordi, Bijan; Zeighami, Bahram; Faghihzadeh, Soghrat; Tabatabaee, Hamid Reza; Pourhoseingholi, Asma
2008-01-01
Quality of life study has an important role in health care, especially in chronic diseases, in clinical judgment, and in the allocation of medical resources. Statistical tools like linear regression are widely used to assess the predictors of quality of life, but when the response is not normally distributed the results are misleading. The aim of this study was to determine the predictors of quality of life in breast cancer patients using a quantile regression model and to compare the results to linear regression. A cross-sectional study was conducted on 119 breast cancer patients who were admitted and treated in the chemotherapy ward of Namazi hospital in Shiraz. We used the QLQ-C30 questionnaire to assess quality of life in these patients. A quantile regression was employed to assess the associated factors and the results were compared to linear regression. All analyses were carried out using SAS. The mean score for global health status for breast cancer patients was 64.92 ± 11.42. Linear regression showed that only grade of tumor, occupational status, menopausal status, financial difficulties and dyspnea were statistically significant. In contrast to linear regression, financial difficulties were not significant in the quantile regression analysis, and dyspnea was significant only for the first quartile. Also, emotional functioning and duration of disease statistically predicted the QOL score in the third quartile. The results demonstrated that using quantile regression leads to better interpretation and richer inference about predictors of breast cancer patient quality of life.
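Quantile regression minimizes the pinball (check) loss instead of squared error, so different quantiles of the outcome can have different predictors. A minimal median-regression sketch on synthetic QOL-style scores (not the study's data), using SciPy for the optimization:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 300)                      # hypothetical predictor
y = 10.0 + 2.0 * x + rng.normal(0, 1.5, 300)     # synthetic QOL-style scores

def pinball(params, tau):
    """Pinball loss for the line y = a + b*x at quantile tau."""
    a, b = params
    r = y - (a + b * x)
    return np.sum(np.where(r >= 0, tau * r, (tau - 1.0) * r))

slope, intercept = np.polyfit(x, y, 1)           # least-squares warm start
fit = minimize(pinball, x0=[intercept, slope], args=(0.5,), method="Nelder-Mead")
a_med, b_med = fit.x                             # median (tau = 0.5) regression line
```

Refitting with tau = 0.25 or 0.75 gives the first- and third-quartile lines, which is how a predictor like dyspnea can matter at one quartile of the QOL distribution but not at another.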
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
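The coefficient interpretations the article walks through can be made concrete in a few lines; the coefficient values below are invented for illustration and do not come from the cited study.

```python
import math

# linear regression: a coefficient is the expected change in the outcome
# per one-unit change in the predictor, other predictors held fixed
beta_linear = -0.12                  # hypothetical: FEV1 change (L) per pack-year
effect_10_units = 10 * beta_linear   # expected change over 10 pack-years

# logistic regression: exponentiating a coefficient gives an odds ratio
beta_logistic = 0.05                 # hypothetical log-odds change per pack-year
odds_ratio_10 = math.exp(10 * beta_logistic)  # odds ratio for a 10-unit change
```

The same pattern extends to ordinal logistic regression, where the exponentiated coefficient is interpreted as a proportional odds ratio across the outcome categories.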
Collins, Sarah; Byrne, Michael; Hawe, James; O'Reilly, Gary
2018-06-01
To investigate the acceptability and utility of a newly developed computerized cognitive behavioural therapy (cCBT) programme, MindWise (2.0), for adults attending Irish primary care psychology services. Adult primary care psychology service users across four rural locations in Ireland were invited to participate in this study. A total of 60 service users participated in the MindWise (2.0) treatment group and were compared to 22 people in a waiting-list control group. Participants completed pre- and post-intervention outcome measures of anxiety, depression, and work/social functioning. At post-intervention, 25 of 60 people in the MindWise (2.0) condition had fully completed the programme and 19 of 22 people in the waiting-list condition provided time 2 data. Relative to those in the control group, the MindWise (2.0) participants reported significantly reduced symptoms of anxiety and no change in depression or work/social functioning. The newly developed cCBT programme, MindWise (2.0), resulted in significant improvements on a measure of anxiety and may address some barriers to accessing more traditional face-to-face mental health services for adults in a primary care setting. Further programme development and related research appear both warranted and needed to lower programme drop-out, establish whether gains in anxiety management are maintained over time, and support people in a primary care context with depression. There is a growing evidence base that computerized self-help programmes can assist in a stepped-care approach to adult mental health service provision. These programmes require further development to address issues such as high dropout, the development of equally effective transdiagnostic content, and greater effectiveness in the country of origin.
This study evaluated the acceptability and utility of a brief online CBT programme for adults referred due to anxiety or low mood to primary care psychology services in the national health service in Ireland. Results indicate that 42% of people completed the programme and experienced a significant reduction in anxiety but not depression and no improvement in work or social adjustment compared to similar adults on a waiting list for services. This study suggests the programme warrants further development and research and may in time become a useful and suitable intervention within the national health service in Ireland. © 2017 The British Psychological Society.
Threlkeld, Zachary D.; Jicha, Greg A.; Smith, Charles D.; Gold, Brian T.
2012-01-01
Reduced task deactivation within regions of the default mode network (DMN) has been frequently reported in Alzheimer’s disease (AD) and amnestic mild cognitive impairment (aMCI). As task deactivation reductions become increasingly used in the study of early AD states, it is important to understand their relationship to atrophy. To address this issue, the present study compared task deactivation reductions during a lexical decision task and atrophy in aMCI, using a series of parallel voxel-wise and region-wise analyses of fMRI and structural data. Our results identified multiple regions within parietal cortex as convergence areas of task deactivation and atrophy in aMCI. Relationships between parietal regions showing overlapping task deactivation reductions and atrophy in aMCI were then explored. Regression analyses demonstrated minimal correlation between task deactivation reductions and either local or global atrophy in aMCI. In addition, a logistic regression model which combined task deactivation reductions and atrophy in parietal DMN regions showed higher classificatory accuracy of aMCI than separate task deactivation or atrophy models. Results suggest that task deactivation reductions and atrophy in parietal regions provide complementary rather than redundant information in aMCI. Future longitudinal studies will be required to assess the utility of combining task deactivation reductions and atrophy in the detection of early AD. PMID:21860094
Keihaninejad, Shiva; Ryan, Natalie S; Malone, Ian B; Modat, Marc; Cash, David; Ridgway, Gerard R; Zhang, Hui; Fox, Nick C; Ourselin, Sebastien
2012-01-01
Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of a FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on: (a) specificity (b) sensitivity in a simulation study and (c) a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate changes of shape and WM integrity similar to what would be seen in AD patients, in order to provide a "ground truth" for evaluating the various methods of TBSS reference. Using a group-wise average atlas as the reference outperformed other references in the TBSS pipeline in all evaluations.
Voxel-wise grey matter asymmetry analysis in left- and right-handers.
Ocklenburg, Sebastian; Friedrich, Patrick; Güntürkün, Onur; Genç, Erhan
2016-10-28
Handedness is thought to originate in the brain, but identifying its structural correlates in the cortex has yielded surprisingly incoherent results. One idea proclaimed by several authors is that structural grey matter asymmetries might underlie handedness. While some authors have found significant associations with handedness in different brain areas (e.g. in the central sulcus and precentral sulcus), others have failed to identify such associations. One method used by many researchers to determine structural grey matter asymmetries is voxel based morphometry (VBM). However, it has recently been suggested that the standard VBM protocol might not be ideal to assess structural grey matter asymmetries, as it establishes accurate voxel-wise correspondence across individuals but not across both hemispheres. This could potentially lead to biased and incoherent results. Recently, a new toolbox specifically geared at assessing structural asymmetries and involving accurate voxel-wise correspondence across hemispheres has been published [F. Kurth, C. Gaser, E. Luders. A 12-step user guide for analyzing voxel-wise gray matter asymmetries in statistical parametric mapping (SPM), Nat Protoc 10 (2015), 293-304]. Here, we used this new toolbox to re-assess grey matter asymmetry differences in left- vs. right-handers and linked them to quantitative measures of hand preference and hand skill. While we identified several significant left-right asymmetries in the overall sample, no difference between left- and right-handers reached significance after correction for multiple comparisons. These findings indicate that the structural brain correlates of handedness are unlikely to be rooted in macroscopic grey matter area differences that can be assessed with VBM. Future studies should focus on other potential structural correlates of handedness, e.g. structural white matter asymmetries. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Strategic Global Climate Command?
NASA Astrophysics Data System (ADS)
Long, J. C. S.
2016-12-01
Researchers have been exploring geoengineering because anthropogenic GHG emissions could drive the globe towards uninhabitability for people, wildlife and vegetation. Potential global deployment of these technologies is inherently strategic. For example, solar radiation management to reflect more sunlight might be strategically useful during a period of time in which the population completes an effort to cease emissions, and carbon removal technologies might then be strategically deployed to move the atmospheric concentrations back to a safer level. Consequently, deployment of these global technologies requires the ability to think and act strategically on the part of the planet's governments. Such capacity most definitely does not exist today, but it behooves scientists and engineers to be involved in thinking through how global command might develop, because the way they do the research could support the development of a capacity to deploy intervention rationally -- or irrationally. Internationalizing research would get countries used to working together. It would also help to organize the research in a step-wise manner in which, at each step, scientists become skilled at explaining what they have learned, the quality of the information they have, what they don't know, and what more they can do to reduce or handle uncertainty. Such a process can increase societal confidence in being able to make wise decisions about deployment. Global capacity will also be enhanced if the scientific establishment reinvents mission-driven research so that programs identify the systemic issues involved in any proposed technology and systematically address them with research while still encouraging individual creativity. Geoengineering will diverge from climate science in that geoengineering research needs to design interventions for some publicly desirable goal and investigate whether a proposed intervention will achieve the desired outcomes.
The effort must be a systems-engineering design problem with public engagement about the goals of intervention. The research enterprise alone cannot ensure wise global governance of climate strategy, but making the science highly transparent and coherent in a way that ensures public interest will improve the chances for effective global climate action.
Use of probabilistic weights to enhance linear regression myoelectric control
NASA Astrophysics Data System (ADS)
Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.
2015-12-01
Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
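The core idea, scaling each DOF's regression output by the probability that movement is intended there, can be sketched for a single DOF. The Gaussian parameters and the regression output below are invented for illustration; the study used multi-channel EMG features with equal-covariance class models.

```python
import numpy as np

# hypothetical 1-D EMG feature for one DOF: class 0 = no movement,
# class 1 = movement; Gaussian class models with shared sigma
mu = {0: 0.0, 1: 3.0}
sigma = 1.0

def gauss(f, m):
    """Unnormalized Gaussian likelihood of feature f under class mean m."""
    return np.exp(-0.5 * ((f - m) / sigma) ** 2)

def weighted_output(feature, regression_out):
    """Scale the linear-regression velocity command by the probability
    that the user intends movement at this DOF (equal priors assumed)."""
    p_move = gauss(feature, mu[1]) / (gauss(feature, mu[0]) + gauss(feature, mu[1]))
    return p_move * regression_out

quiet = weighted_output(0.0, 0.8)    # feature near the 'no movement' class
active = weighted_output(3.0, 0.8)   # feature near the 'movement' class
```

When the feature sits near the no-movement class, the weight suppresses the regression output almost entirely, which is how extraneous movement at unintended DOFs is prevented without giving up simultaneous control.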
NASA Astrophysics Data System (ADS)
Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.
2017-03-01
Wireless Sensor Networks (WSNs) are widely deployed to monitor physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy efficient smart grids and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, a WSN processes a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. With respect to battery life constraints, image compression combined with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmission of data must be secured through a process known as encryption. As a result, there has been an intense demand for decades for schemes that are energy efficient as well as highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-Box is reported. The plaintext image is compressed via the discrete wavelet transform and then shuffled column-wise and row-wise via the Piece-wise Linear Chaotic Map (PWLCM) and the Nonlinear Chaotic Algorithm, respectively. To achieve higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from the Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
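The PWLCM itself is easy to state. The permutation-building helper below is a hypothetical illustration of how a chaotic trajectory can drive a shuffle order; it is not the paper's exact keying scheme, in which the initial condition would come from the hash function.

```python
def pwlcm(x, p):
    """Piece-wise Linear Chaotic Map on [0, 1) with control parameter p in (0, 0.5)."""
    if x >= 0.5:
        x = 1.0 - x              # the map is symmetric about x = 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def permutation(n, seed, p=0.3):
    """Hypothetical column-shuffle order: iterate the map n times and use the
    rank order of the trajectory values as the permutation."""
    xs, x = [], seed
    for _ in range(n):
        x = pwlcm(x, p)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

order = permutation(8, seed=0.123456)
```

Because the map is sensitive to its initial condition, tying `seed` to a hash of the plaintext (as the paper does) makes the shuffle order change completely for any one-bit change in the image.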
Increasing Running Step Rate Reduces Patellofemoral Joint Forces
Lenhart, Rachel L.; Thelen, Darryl G.; Wille, Christa M.; Chumanov, Elizabeth S.; Heiderscheit, Bryan C.
2013-01-01
Purpose Increasing step rate has been shown to elicit changes in joint kinematics and kinetics during running, and has been suggested as a possible rehabilitation strategy for runners with patellofemoral pain. The purpose of this study was to determine how altering step rate affects internal muscle forces and patellofemoral joint loads, and then to determine what kinematic and kinetic factors best predict changes in joint loading. Methods We recorded whole body kinematics of 30 healthy adults running on an instrumented treadmill at three step rate conditions (90%, 100%, and 110% of preferred step rate). We then used a 3D lower extremity musculoskeletal model to estimate muscle, patellar tendon, and patellofemoral joint forces throughout the running gait cycles. Additionally, linear regression analysis allowed us to ascertain the relative influence of limb posture and external loads on patellofemoral joint force. Results Increasing step rate to 110% of preferred reduced peak patellofemoral joint force by 14%. Peak muscle forces were also altered as a result of the increased step rate with hip, knee and ankle extensor forces, and hip abductor forces all reduced in mid-stance. Compared to the 90% step rate condition, there was a concomitant increase in peak rectus femoris and hamstring loads during early and late swing, respectively, at higher step rates. Peak stance phase knee flexion decreased with increasing step rate, and was found to be the most important predictor of the reduction in patellofemoral joint loading. Conclusion Increasing step rate is an effective strategy to reduce patellofemoral joint forces and could be effective in modulating biomechanical factors that can contribute to patellofemoral pain. PMID:23917470
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We did simple linear regression with intercept analysis and simple linear regression through the origin and used the confidence interval for ß in the linear model y = αx + ß, Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant ( P > 0.05). The other four models with intercept and the six models thorough origin were all significant ( P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for ß and the null hypothesis that ß = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. 
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate them and data to test for a non-linear relationship between track indices and true density at low densities.
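The with-intercept versus through-origin comparison above can be sketched with ordinary least squares. This is a toy illustration on synthetic data: only the 3.26 slope is taken from the abstract; the sample size, noise level, and AIC bookkeeping are assumptions.

```python
import numpy as np

def fit_models(x, y):
    """Fit y = a*x + b (with intercept) and y = a*x (through the origin);
    return (slope, intercept, AIC) and (slope, AIC) respectively."""
    n = len(x)
    A = np.column_stack([x, np.ones(n)])
    a1, b1 = np.linalg.lstsq(A, y, rcond=None)[0]
    rss1 = np.sum((y - (a1 * x + b1)) ** 2)
    a0 = np.sum(x * y) / np.sum(x * x)        # closed form, no intercept
    rss0 = np.sum((y - a0 * x) ** 2)
    # Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k
    aic1 = n * np.log(rss1 / n) + 2 * 2
    aic0 = n * np.log(rss0 / n) + 2 * 1
    return (a1, b1, aic1), (a0, aic0)

rng = np.random.default_rng(0)
x = rng.uniform(0.3, 3.0, 40)                # hypothetical carnivore densities
y = 3.26 * x + rng.normal(0.0, 0.4, 40)      # track density, zero at zero density
with_intercept, through_origin = fit_models(x, y)
```

On data generated with a true zero intercept, the through-origin model recovers a slope near 3.26 and, having one fewer parameter, typically earns the lower AIC, mirroring the abstract's conclusion.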
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we tend to think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
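The quantities defined above (intercept a, slope b, and R²) can be computed directly from data; a minimal sketch in Python with made-up numbers:

```python
import numpy as np

def simple_linear_regression(x, y):
    """Return intercept a, slope b, and R² for the line Y = a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # slope from covariance/variance
    a = y.mean() - b * x.mean()                     # intercept: Y at X = 0
    y_hat = a + b * x
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return a, b, r2

# toy data: y increases by roughly 2 units per unit of x
a, b, r2 = simple_linear_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

Here b estimates the change in Y per one-unit change in x, and R² close to 1 indicates the predictor explains most of the outcome's variance.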
A practical data processing workflow for multi-OMICS projects.
Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin
2014-01-01
Multi-OMICS approaches aim at integrating quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur in this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, software called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and annotation with existing knowledge, using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient, and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable to other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high-throughput techniques.
This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.
Effect of body mass index on hemiparetic gait.
Sheffler, Lynne R; Bailey, Stephanie Nogan; Gunzler, Douglas; Chae, John
2014-10-01
To evaluate the relationship between body mass index (BMI) and spatiotemporal, kinematic, and kinetic gait parameters in chronic hemiparetic stroke survivors. Secondary analysis of data collected in a randomized controlled trial comparing two 12-week ambulation training treatments. Academic medical center. Chronic hemiparetic stroke survivors (N = 108, >3 months poststroke). Linear regression analyses were performed of BMI and selected pretreatment gait parameters recorded using quantitative gait analysis. Spatiotemporal, kinematic, and kinetic gait parameters. A series of linear regression models that controlled for age, gender, stroke type (ischemic versus hemorrhagic), interval poststroke, level of motor impairment (Fugl-Meyer score), and walking speed found BMI to be positively associated with step width (m) (β = 0.364, P < .001), positively associated with peak hip abduction angle of the nonparetic limb during stance (deg) (β = 0.177, P = .040), negatively associated with ankle dorsiflexion angle at initial contact of the paretic limb (deg) (β = -0.222, P = .023), and negatively associated with peak ankle power at push-off of the paretic limb (W/kg) (β = -0.142, P = .026). When walking at a similar speed, chronic hemiparetic stroke subjects with a higher BMI demonstrated greater step width, greater hip hiking of the paretic lower limb, less paretic limb dorsiflexion at initial contact, and less paretic ankle power at push-off as compared to stroke subjects with a lower BMI and a similar level of motor impairment. Further studies are necessary to determine the clinical relevance of these findings with respect to rehabilitation strategies for gait dysfunction in hemiparetic patients with higher BMIs. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Hierarchical Matching and Regression with Application to Photometric Redshift Estimation
NASA Astrophysics Data System (ADS)
Murtagh, Fionn
2017-06-01
This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data is to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that takes account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest neighbour regression. Both to exemplify this analytics approach, and to demonstrate computational benefits, we address the well-known photometric redshift or `photo-z' problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.
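The nearest-neighbour regression that the cluster-wise approach above generalizes can be sketched as a plain k-NN regressor; this is a toy illustration in which the two "colour" features and the redshift relation are invented, not SDSS data:

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, k=3):
    """Predict y for each query point as the mean y of its k nearest
    training points (Euclidean distance in feature space)."""
    preds = []
    for q in np.atleast_2d(X_query):
        d = np.linalg.norm(X_train - q, axis=1)   # distances to all training points
        idx = np.argsort(d)[:k]                   # indices of the k nearest
        preds.append(y_train[idx].mean())
    return np.array(preds)

# toy "photometric colours -> redshift" data (purely illustrative)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 2))
z = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.01, 200)
z_hat = knn_regress(X, z, X[:5], k=5)
```

A hierarchical or cluster-wise scheme would replace the global distance search with lookups restricted to a branch of a tree, trading a little accuracy for large computational savings.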
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Haochun; Ishikawa, Kyohei; Ide, Keisuke
2015-11-28
We investigated the effects of residual hydrogen in the sputtering atmosphere on subgap states and carrier transport in amorphous In-Ga-Zn-O (a-IGZO) using two sputtering systems with different base pressures of ~10⁻⁴ and 10⁻⁷ Pa (standard (STD) and ultrahigh vacuum (UHV) sputtering, respectively), which produce a-IGZO films with impurity hydrogen contents on the orders of 10²⁰ and 10¹⁹ cm⁻³, respectively. Several subgap states were observed by hard X-ray photoemission spectroscopy, i.e., peak-shape near-valence band maximum (near-VBM) states, shoulder-shape near-VBM states, peak-shape near-conduction band minimum (near-CBM) states, and step-wise near-CBM states. It was confirmed that the formation of these subgap states was affected strongly by the residual hydrogen (possibly H₂O). The step-wise near-CBM states were observed only in the STD films deposited without O₂ gas flow and were attributed to metallic In. Such step-wise near-CBM states were not detected in the other films, including the UHV films even when deposited without O₂ flow, substantiating that the metallic In is segregated by the strong reduction effect of the hydrogen/H₂O. Similarly, the density of the near-VBM states was very high for the STD films deposited without O₂. These films had low film density and are consistent with a model in which voids in the amorphous structure form a part of the near-VBM states. On the other hand, the UHV films had high film densities and far fewer near-VBM states, keeping open the possibility that some of the near-VBM states, in particular the peak-shape ones, originate from –OH and weakly bonded oxygen. These results indicate that 2% of excess O₂ flow is required for STD sputtering to compensate for the effects of the residual hydrogen/H₂O. The high-density near-VBM states and the metallic In segregation deteriorated the electron mobility to 0.4 cm²/(V s).
Hemmila, April; McGill, Jim; Ritter, David
2008-03-01
To determine whether changes in fingerprint infrared spectra that are linear with age can be found, a partial least squares (PLS1) regression of 155 fingerprint infrared spectra against the person's age was constructed. The regression produced a linear model of age as a function of spectrum with a root mean square error of calibration of less than 4 years, showing an inflection at about 25 years of age. The spectral ranges emphasized by the regression do not correspond to the highest-concentration constituents of the fingerprints. Separate linear regression models for old and young people can be constructed with even more statistical rigor. The success of the regression demonstrates that a combination of constituents can be found that changes linearly with age, with a significant shift around puberty.
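A PLS1 regression of spectra against a scalar response, as used above, can be sketched with the classical NIPALS algorithm. This is a generic illustration on synthetic data, not the fingerprint spectra themselves; the component count and noise level are assumptions.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal PLS1 (NIPALS) sketch: returns regression coefficients so
    that y_hat ≈ (X - x_mean) @ coef + y_mean."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                  # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                     # score vector
        tt = t @ t
        p = Xc.T @ t / tt              # X loading
        qk = yc @ t / tt               # y loading
        Xc = Xc - np.outer(t, p)       # deflate X
        yc = yc - qk * t               # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)   # B = W (PᵀW)⁻¹ q
    return coef, x_mean, y_mean

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))                             # toy "spectra"
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(0, 0.05, 60)
coef, xm, ym = pls1_fit(X, y, n_components=3)
y_hat = (X - xm) @ coef + ym
```

Unlike ordinary least squares, PLS builds components from the covariance between predictors and response, which is why it tolerates the many collinear channels of a spectrum.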
Gimelfarb, A.; Willis, J. H.
1994-01-01
An experiment was conducted to investigate the offspring-parent regression for three quantitative traits (weight, abdominal bristles and wing length) in Drosophila melanogaster. Linear and polynomial models were fitted for the regressions of a character in offspring on both parents. It is demonstrated that responses by the characters to selection predicted by the nonlinear regressions may differ substantially from those predicted by the linear regressions. This is true even, and especially, if selection is weak. The realized heritability for a character under selection is shown to be determined not only by the offspring-parent regression but also by the distribution of the character and by the form and strength of selection. PMID:7828818
NASA Technical Reports Server (NTRS)
Rogers, R. H.; Smith, V. E.; Scherz, J. P.; Woelkerling, W. J.; Adams, M. S.; Gannon, J. E. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A step-by-step procedure for establishing and monitoring the trophic status of inland lakes with the use of LANDSAT data, surface sampling, laboratory analysis, and aerial observations was demonstrated. The biomass was related to chlorophyll-a concentrations, water clarity, and trophic state. A procedure was developed for using surface sampling, LANDSAT data, and linear regression equations to produce a color-coded image of large lakes showing the distribution and concentrations of water quality parameters causing eutrophication, as well as parameters which indicate its effects. Cover categories readily derived from LANDSAT were those for which loading rates were available and were known to have major effects on the quality and quantity of runoff and lake eutrophication. Urban, barren land, cropland, grassland, forest, wetlands, and water were included.
Gullo, Charles A.
2016-01-01
Biomedical programs have a potential treasure trove of data they can mine to assist admissions committees in the identification of students who are likely to do well, and to help educational committees identify students who are likely to do poorly on standardized national exams and who may need remediation. In this article, we provide a step-by-step approach that schools can utilize to generate data that are useful when predicting the future performance of current students in any given program. We discuss the use of linear regression analysis as the means of generating those data and highlight some of the limitations. Finally, we lament that the combination of these institution-specific data sets is not being fully utilized at the national level, where these data could greatly assist programs at large. PMID:27374246
Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.
Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C
2014-03-01
In recent years the number of active controllable joints in electrically powered hand-prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate, as they require separate and sequential control of each degree-of-freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for independent, simultaneous and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), the multilayer perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve performance similar to KRR at much lower computational cost. In particular ME, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
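Kernel ridge regression, the best performer in the comparison above, has a compact closed form; a minimal numpy sketch follows. The RBF kernel, regularization value, and toy "EMG" data are assumptions for illustration, not the study's pipeline.

```python
import numpy as np

def kernel_ridge(X, y, lam=1e-3, gamma=1.0):
    """Closed-form kernel ridge regression with an RBF kernel:
    solve (K + lam*I) alpha = y, then predict with k(x_query, X) @ alpha."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-gamma * d2)
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf(np.atleast_2d(Xq), X) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, (80, 2))           # two toy "EMG channels"
y = np.tanh(2 * X[:, 0]) + 0.5 * X[:, 1]      # nonlinear target (toy wrist angle)
predict = kernel_ridge(X, y, lam=1e-4, gamma=2.0)
```

The cost of the kernel solve grows with the number of training samples, which is why the abstract notes that linearizing feature transformations can match KRR at much lower computational cost.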
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
An Expert System for the Evaluation of Cost Models
1990-09-01
contrast to the condition of equal error variance, called homoscedasticity. (Reference: Applied Linear Regression Models by John Neter, page 423) ... normal. (Reference: Applied Linear Regression Models by John Neter, page 125) ... Error terms correlated over time are said to be autocorrelated or serially correlated. (Reference: Applied Linear Regression Models by John Neter)
Kokubun, Hideya; Fukawa, Misako; Matoba, Motohiro; Hoka, Sumio; Yamada, Yasuhiko; Yago, Kazuo
2007-11-01
Compound injections of oxycodone and hydrocotarnine are currently used as one of the treatment options for some cases of cancer pain. However, there have been no reports examining the factors that influence oxycodone and hydrocotarnine clearance, so detailed examination is necessary. As for hydrocotarnine, there have been no reports examining its pharmacokinetics. Therefore, in this study, we determined the pharmacokinetics of oxycodone and hydrocotarnine in patients with cancer pain. The study was conducted on 19 patients in whom pain control was attempted using compound injections of oxycodone and hydrocotarnine. We used an HPLC-electrochemical detector (ECD) to determine oxycodone and hydrocotarnine serum concentrations, and used the nonlinear least-squares method (MULTI) to calculate the pharmacokinetic parameters. Furthermore, we examined the factors that influence the clearance of oxycodone and hydrocotarnine by multiple regression analysis (step-wise method). The pharmacokinetic parameters were as follows: oxycodone, V(d)=226.7+/-105.5 l (mean+/-S.D.), CL=37.9+/-25.1 l/h, t(1/2)=4.1+/-1.9 h; hydrocotarnine, V(d)=276.8+/-237.2 l, CL=95.1+/-64.3 l/h, t(1/2)=2.0+/-0.7 h. The regression formula showed that oxycodone clearance was significantly correlated with the patients' age and with the presence or absence of death within 7 days, liver metastasis, or heart failure. The regression formula for hydrocotarnine clearance showed significant correlation with the presence or absence of death within 7 days, liver metastasis, or heart failure. The clearance results also indicated that oxycodone concentration in the blood was likely to be higher in patients having these factors. Oxycodone/hydrocotarnine compound injections should be used with caution, and dose reduction may be necessary in such populations.
Teaching ethics to engineers: ethical decision making parallels the engineering design process.
Bero, Bridget; Kuhlman, Alana
2011-09-01
In order to fulfill ABET requirements, Northern Arizona University's Civil and Environmental engineering programs incorporate professional ethics in several of its engineering courses. This paper discusses an ethics module in a 3rd year engineering design course that focuses on the design process and technical writing. Engineering students early in their student careers generally possess good black/white critical thinking skills on technical issues. Engineering design is the first time students are exposed to "grey" or multiple possible solution technical problems. To identify and solve these problems, the engineering design process is used. Ethical problems are also "grey" problems and present similar challenges to students. Students need a practical tool for solving these ethical problems. The step-wise engineering design process was used as a model to demonstrate a similar process for ethical situations. The ethical decision making process of Martin and Schinzinger was adapted for parallelism to the design process and presented to students as a step-wise technique for identification of the pertinent ethical issues, relevant moral theories, possible outcomes and a final decision. Students had greatest difficulty identifying the broader, global issues presented in an ethical situation, but by the end of the module, were better able to not only identify the broader issues, but also to more comprehensively assess specific issues, generate solutions and a desired response to the issue.
El Yakoubi, Warif; Buffin, Eulalie; Cladière, Damien; Gryaznova, Yulia; Berenguer, Inés; Touati, Sandra A; Gómez, Rocío; Suja, José A; van Deursen, Jan M; Wassmann, Katja
2017-09-25
A key feature of meiosis is the step-wise removal of cohesin, the protein complex holding sister chromatids together, first from the arms in meiosis I and then from the centromere region in meiosis II. Centromeric cohesin is protected by Sgo2 from Separase-mediated cleavage, in order to keep sister chromatids together until their separation in meiosis II. Failures in step-wise cohesin removal result in aneuploid gametes, preventing the generation of healthy embryos. Here, we report that the kinase activities of Bub1 and Mps1 are required for Sgo2 localisation to the centromere region. Mps1 inhibitor-treated oocytes are defective in centromeric cohesin protection, whereas oocytes devoid of Bub1 kinase activity, which cannot phosphorylate H2A at T121, are not perturbed in cohesin protection as long as Mps1 is functional. Mps1 and Bub1 kinase activities localise Sgo2 in meiosis I preferentially to the centromere and pericentromere, respectively, indicating that Sgo2 at the centromere is required for protection. In meiosis I, centromeric cohesin is protected by Sgo2 from Separase-mediated cleavage, ensuring that sister chromatids are kept together until their separation in meiosis II. Here the authors demonstrate that Bub1 and Mps1 kinase activities are required for Sgo2 localisation to the centromere region.
Reconstruction of missing daily streamflow data using dynamic regression models
NASA Astrophysics Data System (ADS)
Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault
2015-12-01
River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short gaps in these data can lead to markedly different analysis outputs. Reconstructing the missing parts of incomplete data sets is therefore an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship with neighbouring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series from the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
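The dynamic-regression idea above (a linear fit on a correlated neighbour station, then a time-series model on the residuals) can be sketched in miniature with an AR(1) residual model standing in for the full ARIMA structure; all data here are synthetic and the station relationship is invented.

```python
import numpy as np

def dynamic_regression(x_neighbor, y_target):
    """Sketch of a dynamic regression: OLS fit on a neighbour station,
    then an AR(1) coefficient estimated from the residuals."""
    A = np.column_stack([x_neighbor, np.ones_like(x_neighbor)])
    b, a = np.linalg.lstsq(A, y_target, rcond=None)[0]
    resid = y_target - (a + b * x_neighbor)
    phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])  # AR(1) coefficient
    return a, b, phi

def reconstruct(a, b, phi, x_miss, last_resid):
    """One-step fill-in: regression part plus AR(1) residual carry-over."""
    return a + b * x_miss + phi * last_resid

rng = np.random.default_rng(4)
n = 300
x = rng.gamma(2.0, 5.0, n)            # neighbour station discharge (toy)
eps = np.zeros(n)
for t in range(1, n):                 # autocorrelated AR(1) noise
    eps[t] = 0.7 * eps[t - 1] + rng.normal(0.0, 1.0)
y = 10.0 + 1.5 * x + eps
a, b, phi = dynamic_regression(x, y)
filled = reconstruct(a, b, phi, x[100], y[99] - (a + b * x[99]))
```

Carrying the autocorrelated residual forward is what distinguishes this from a plain regression fill-in, which would ignore the serial structure of the gap's neighbourhood.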
Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework
NASA Astrophysics Data System (ADS)
Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.
2018-01-01
Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how can spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for a non-Gaussian, flexible tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. a Gaussian copula).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jantzen, Carol M.; Trivelpiece, Cory L.; Crawford, Charles L.
Glass corrosion data from the ALTGLASS™ database were used to determine if gel compositions, which evolve as glass systems corrode, are correlated with the generation of zeolites and a subsequent increase in the glass dissolution rate at long times. The gel compositions were estimated based on the difference between the elemental glass starting compositions and the measured elemental leachate concentrations from the long-term product consistency tests (ASTM C1285) at various stages of dissolution, i.e., reaction progress. A well-characterized subset of high level waste glasses from the database was selected: these glasses had been leached for 15-20 years at reaction progresses up to ~80%. The gel composition data, at various reaction progresses, were subjected to a step-wise regression, which demonstrated that hydrogel compositions with Si*/Al* ratios of <1.0 did not generate zeolites and maintained low dissolution rates for the duration of the experiments. Glasses that formed hydrogel compositions with Si*/Al* ratios ≥1 generated zeolites, accompanied by a resumption of the glass dissolution rate. Finally, the role of the gel Si/Al ratio, and the interactions with the leachate, provides the fundamental understanding needed to predict if and when the glass dissolution rate will increase due to zeolitization.
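The step-wise regression applied to the gel compositions can be illustrated with a generic forward-selection sketch; the predictors here are synthetic, and nothing below reproduces the ALTGLASS™ analysis itself.

```python
import numpy as np

def forward_stepwise(X, y, max_vars=3):
    """Greedy forward selection: at each step, add the column that most
    reduces the residual sum of squares of an OLS fit (with intercept)."""
    n, p = X.shape
    selected, remaining = [], list(range(p))

    def rss(cols):
        A = np.column_stack([X[:, cols], np.ones(n)])
        beta = np.linalg.lstsq(A, y, rcond=None)[0]
        r = y - A @ beta
        return r @ r

    for _ in range(max_vars):
        best = min(remaining, key=lambda j: rss(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 6))                       # six candidate covariables
y = 2.0 * X[:, 1] - 1.0 * X[:, 4] + rng.normal(0.0, 0.1, 100)
order = forward_stepwise(X, y, max_vars=2)          # should find columns 1 and 4
```

A production step-wise procedure would also apply an entry/exit criterion (an F-test or AIC) rather than a fixed variable count; the fixed count here keeps the sketch short.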
International study of student career choice in psychiatry (ISoSCCiP): results from Modena, Italy.
Ferrari, Silvia; Reggianini, Corinna; Mattei, Giorgio; Rigatelli, Marco; Pingani, Luca; Bhugra, Dinesh
2013-08-01
Italy was one of the 16 countries to take part in the International Study of Student Career Choice in Psychiatry (ISoSCCiP). This paper reports and comments on the ISoSCCiP data on Italian medical students. Italian final-year medical students from the University of Modena and Reggio Emilia were asked to fill in an on-line questionnaire during the first semester of two consecutive academic years (2009-2010, 2010-2011). Step-wise logistic regressions were performed. Of the 231 students invited, 106 returned completed questionnaires (response rate = 46.7%). Women constituted 66%, and the mean age was 25.14 (SD = 1.15). Psychiatry was the second most common choice of possible career by students (5.7%, n = 6). Choosing psychiatry was predicted by having volunteered for further clinical/research activities in psychiatry (p = 0.01), believing that 'the problems presented by psychiatric patients are often particularly interesting and challenging' (p < 0.01), and by accounts of personal/family experience with physical illness (p < 0.01). Both personal factors and factors related to training may be involved in the choice of psychiatry among Italian medical students. Cultural and organizational specificities of Italian mental healthcare may be involved, particularly the strong tradition of social psychiatry.
Jantzen, Carol M.; Trivelpiece, Cory L.; Crawford, Charles L.; ...
2017-02-18
Glass corrosion data from the ALTGLASS™ database were used to determine if gel compositions, which evolve as glass systems corrode, are correlated with the generation of zeolites and a subsequent increase in the glass dissolution rate at long times. The gel compositions were estimated based on the difference between the elemental glass starting compositions and the measured elemental leachate concentrations from the long-term product consistency tests (ASTM C1285) at various stages of dissolution, i.e., reaction progress. A well-characterized subset of high level waste glasses from the database was selected: these glasses had been leached for 15-20 years at reaction progresses up to ~80%. The gel composition data, at various reaction progresses, were subjected to a step-wise regression, which demonstrated that hydrogel compositions with Si*/Al* ratios of <1.0 did not generate zeolites and maintained low dissolution rates for the duration of the experiments. Glasses that formed hydrogel compositions with Si*/Al* ratios ≥1 generated zeolites, accompanied by a resumption of the glass dissolution rate. Finally, the role of the gel Si/Al ratio, and the interactions with the leachate, provides the fundamental understanding needed to predict if and when the glass dissolution rate will increase due to zeolitization.
Suzuki, Ikuko; Yanagi, Hisako; Tomura, Shigeo
2007-02-01
We conducted a longitudinal study using the Functional Independence Measure to clarify factors related to independence in activities of daily living of elderly people receiving in-home services under the long-term care insurance system. Fifty-four elderly users of the in-home services of Ibaraki Prefecture consented to participate in this study and were analyzed. A researcher conducted surveys at baseline and at follow-up by visiting the elderly at each home. The evaluation standards used were the Japanese version of the Functional Independence Measure (FIM), the Mini-Mental State Examination (MMSE), and the Geriatric Depression Scale (GDS-15). The FIM score (mean+/-SD) decreased from 83.6+/-36.4 to 81.7+/-37.4 during the 112+/-22.2 day follow-up period. Thirty-nine elderly participants demonstrated improvement or no change in FIM and 15 had declining scores. To clarify independent factors related to FIM change, we conducted a step-wise multifactor logistic regression analysis, and the results suggested the importance of "in-home service availability" and a "home care period of less than one year". Our study suggests that, to maintain or improve ADL in home-care elderly, it is important to provide sufficient home care services from the beginning under the long-term care insurance system.
Skeletal Maturation and Mineralisation of Children with Moderate to Severe Spastic Quadriplegia.
Sharawat, Indar Kumar; Sitaraman, Sadasivan
2016-06-01
Diminished bone mineral density and delayed skeletal maturation are common in children with spastic quadriplegia. The purpose of our study was to evaluate the Bone Mineral Density (BMD) of children with moderate to severe spastic quadriplegia and its relationship with other variables such as nutrition and growth. This was a hospital-based, cross-sectional, case-control study. Forty-two (28 males, 14 females) children with spastic quadriplegia and 42 (24 males, 18 females) healthy children were included in the study. BMD of cases and controls was measured by Dual Energy X-ray Absorptiometry (DEXA). Radiographs of the left hand and wrist of cases and controls were taken and bone age was determined. BMD values of the upper extremity, lower extremity, thoraco-lumbar spine and pelvis in cases were lower than those of controls (p <0.0001). In children with non-severe malnutrition, 75% of the cases had lower bone age than chronological age, whereas all cases with severe malnutrition had lower bone age than chronological age. Step-wise regression analysis showed that nutritional status independently contributed to lower BMD values, but the BMD values did not correlate significantly with the use of anticonvulsant drugs or the presence of physical therapy. Decreased BMD and delayed bone age are prevalent in children with spastic quadriplegia, and nutritional status is an important contributing factor.
Family functioning mediates adaptation in caregivers of individuals with Rett syndrome.
Lamb, Amanda E; Biesecker, Barbara B; Umstead, Kendall L; Muratori, Michelle; Biesecker, Leslie G; Erby, Lori H
2016-11-01
The objective of this study was to investigate factors related to family functioning and adaptation in caregivers of individuals with Rett syndrome (RS). A cross-sectional quantitative survey explored the relationships between demographics, parental self-efficacy, coping methods, family functioning and adaptation. A forward-backward, step-wise model selection procedure was used to evaluate variables associated with both family functioning and adaptation. Analyses also explored family functioning as a mediator of the relationship between other variables and adaptation. Bivariate analyses (N=400) revealed that greater parental self-efficacy, a greater proportion of problem-focused coping, and a lesser proportion of emotion-focused coping were associated with more effective family functioning. In addition, these key variables were significantly associated with greater adaptation, as was family functioning, while controlling for confounders. Finally, regression analyses suggest family functioning as a mediator of the relationships between three variables (parental self-efficacy, problem-focused coping, and emotion-focused coping) with adaptation. This study demonstrates the potentially predictive roles of expectations and coping methods and the mediator role of family functioning in adaptation among caregivers of individuals with RS, a chronic developmental disorder. A potential target for intervention is strengthening of caregiver competence in the parenting role to enhance caregiver adaptation. Published by Elsevier Ireland Ltd.
McDougall, Matthew A; Walsh, Michael; Wattier, Kristina; Knigge, Ryan; Miller, Lindsey; Stevermer, Michalene; Fogas, Bruce S
2016-12-30
This study examined whether Social Networking Sites (SNSs) have a negative moderator effect on the established relationship between perceived social support and depression in psychiatric inpatients. Survey instruments assessing depression, perceived social support, and SNS use were completed by 301 psychiatric inpatients. Additional data on age, gender, and primary psychiatric diagnosis were collected. A step-wise multiple regression analysis was performed to determine significant interactions. There was no significant interaction of SNS use on the relationship between perceived social support and depression when measured by the Social Media Use Integration Scale or by hours of SNS use per day. There was a significant negative relationship between perceived social support and depression, and a significant positive relationship between hours of SNS use per day and depression, measured by the Beck Depression Inventory-II. Limitations include a gender discrepancy among participants, limited generalizability, recall bias, and the SNS measurement. This is the first study to look at SNS use and depression in psychiatric inpatients. SNS use did not affect perceived social support or the protective relationship between perceived social support and depression. Hours of SNS use per day were correlated with depression scores. Future studies of SNS use and depression should quantify daily SNS use. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Assimilation of GOES-Derived Cloud Fields Into MM5
NASA Astrophysics Data System (ADS)
Biazar, A. P.; Doty, K. G.; McNider, R.
2007-12-01
The assimilation of GOES-derived cloud data into an atmospheric model (the Fifth-Generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model, or MM5) was performed in two steps. In the first step, multiple linear regression equations were developed from a control MM5 simulation to establish relationships for several dependent variables in model columns that had one or more layers of clouds. In the second step, the regression equations were applied during an MM5 simulation with assimilation in which the hourly GOES satellite data were used to determine the cloud locations and some of the cloud properties, but with all the other variables being determined by the model data. The satellite-derived fields used were shortwave cloud albedo and cloud top pressure. Ten multiple linear regression equations were developed for the following dependent variables: total cloud depth, number of cloud layers, depth of the layer that contains the maximum vertical velocity, the maximum vertical velocity, the height of the maximum vertical velocity, the estimated 1-h stable (i.e., grid scale) precipitation rate, the estimated 1-h convective precipitation rate, the height of the level with the maximum positive diabatic heating, the magnitude of the maximum positive diabatic heating, and the largest continuous layer of upward motion. The horizontal components of the divergent wind were adjusted to be consistent with the regression estimate of the maximum vertical velocity. The new total horizontal wind field with these new divergent components was then used to nudge an ongoing MM5 model simulation towards the target vertical velocity. Other adjustments included diabatic heating and moistening at specified levels. Where the model simulation had clouds when the satellite data indicated clear conditions, procedures were taken to remove or diminish the errant clouds.
The results for the period of 0000 UTC 28 June - 0000 UTC 16 July 1999 for both a continental 32-km grid and an 8-km grid over the Southeastern United States indicate a significant improvement in the cloud bias statistics. The main improvement was the reduction of high bias values that indicated times and locations in the control run when there were model clouds but when the satellite indicated clear conditions. The importance of this technique is that it has been able to assimilate the observed clouds in the model in a dynamically sustainable manner. Acknowledgments. This work was partially funded by the following grants: a GEWEX grant from NASA, the Cooperative Agreement between the University of Alabama in Huntsville and the Minerals Management Service on Gulf of Mexico Issues, a NASA applications grant, and an NSF grant.
NASA Astrophysics Data System (ADS)
Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.
2015-12-01
Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
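The early-detection step above can be illustrated with a bare local-level dynamic linear model: a Kalman filter produces one-step-ahead prediction envelopes, and weeks whose observed counts exceed the upper bound are flagged. This is a minimal sketch only; the trend, seasonal, and environmental regression components of the actual model are omitted, and the weekly case counts are invented:

```python
# Hedged sketch: outbreak flagging with a local-level dynamic linear model.
# State: an unobserved level following a random walk (variance q);
# observation: level plus noise (variance r). Parameters are assumptions.

def kalman_local_level(ys, q=1.0, r=4.0, m0=0.0, p0=100.0):
    """One-step-ahead predictive means and variances for each observation."""
    m, p = m0, p0
    preds = []
    for y in ys:
        p_pred = p + q                  # predict: state variance grows by q
        preds.append((m, p_pred + r))   # forecast variance adds obs noise
        k = p_pred / (p_pred + r)       # Kalman gain
        m = m + k * (y - m)             # update mean with new observation
        p = (1 - k) * p_pred            # update state variance
    return preds

def flag_outbreaks(ys, z=2.0, **kw):
    """Flag weeks where the count exceeds the upper prediction bound."""
    preds = kalman_local_level(ys, **kw)
    return [y > m + z * v ** 0.5 for y, (m, v) in zip(ys, preds)]
```

A stable series around 10 cases per week followed by a spike to 60 flags only the spike; the wide initial variance (p0) keeps early weeks from being flagged, which reflects the robustness to short training periods noted above.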
Microsatellite markers associated with resistance to Marek's disease in commercial layer chickens.
McElroy, J P; Dekkers, J C M; Fulton, J E; O'Sullivan, N P; Soller, M; Lipkin, E; Zhang, W; Koehler, K J; Lamont, S J; Cheng, H H
2005-11-01
The objective of the current study was to identify QTL conferring resistance to Marek's disease (MD) in commercial layer chickens. To generate the resource population, 2 partially inbred lines that differed in MD-caused mortality were intermated to produce 5 backcross families. Vaccinated chicks were challenged with very virulent plus (vv+) MD virus strain 648A at 6 d and monitored for MD symptoms. A recent field isolate of the MD virus was used because the lines were resistant to commonly used older laboratory strains. Selective genotyping was employed using 81 microsatellites selected based on prior results with selective DNA pooling. Linear regression and Cox proportional hazard models were used to detect associations between marker genotypes and survival. Significance thresholds were validated by simulation. Seven and 6 markers were significant based on proportion of false positive and false discovery rate thresholds less than 0.2, respectively. Seventeen markers were associated with MD survival considering a comparison-wise error rate of 0.10, which is about twice the number expected by chance, indicating that at least some of the associations represent true effects. Thus, the present study shows that loci affecting MD resistance can be mapped in commercial layer lines. More comprehensive studies are under way to confirm and extend these results.
NASA Astrophysics Data System (ADS)
Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim
2018-02-01
In this paper, a Curvelet-based local attribute, the Curvelet-Local configuration pattern (C-LCP), is introduced for the characterization of mammographic masses as benign or malignant. Among the different anomalies, such as microcalcification, bilateral asymmetry, architectural distortion, and masses, the reason for targeting the mass lesions is their variation in shape, size, and margin, which makes the diagnosis a challenging task. The multi-resolution property of the Curvelet transform, being efficient for classification, is exploited, and local information is extracted from the coefficients of each subband using the Local configuration pattern (LCP). The microscopic measures in concatenation with the local textural information provide more discriminating capability than either individually. The measures embody the magnitude information along with the pixel-wise relationships among the neighboring pixels. The performance analysis is conducted with 200 mammograms of the DDSM database containing 100 benign and 100 malignant mass cases. The optimal set of features is acquired via a stepwise logistic regression method and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some of the state-of-the-art competing methods.
An exploratory analysis of Indiana and Illinois biotic ...
EPA recognizes the importance of nutrient criteria in protecting designated uses from eutrophication effects associated with elevated phosphorus and nitrogen in streams and has worked with states over the past 12 years to assist them in developing nutrient criteria. Towards that end, EPA has provided states and tribes with technical guidance to assess nutrient impacts and to develop criteria. EPA published recommendations in 2000 on scientifically defensible empirical approaches for setting numeric criteria. EPA also published eco-regional criteria recommendations in 2000-2001 based on a frequency distribution approach meant to approximate reference condition concentrations. In 2010, EPA elaborated on one of these empirical approaches (i.e., stressor-response relationships) for developing nutrient criteria. The purpose of this report was to conduct exploratory analyses of state datasets from Illinois and Indiana to determine threshold values for nutrients and chlorophyll a that could guide Indiana and Illinois criteria development. Box and whisker plots were used to compare nutrient and chlorophyll a concentrations between Illinois and Indiana. Stressor response analyses, using piece-wise linear regression and change-point analysis (Illinois only) were conducted to determine thresholds of change in relationships between nutrients and biotic assemblages.
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
Control Variate Selection for Multiresponse Simulation.
1987-05-01
Neter, J., W. Wasserman, and M. H. Kutner, Applied Linear Regression Models, Richard D. Irwin, Inc., Homewood, Illinois, 1983. Neuts, Marcel F., Probability, Allyn and Bacon, 1982. Aspects of Multivariate Statistical Theory, John Wiley and Sons, New York, New York, 1982.
ERIC Educational Resources Information Center
Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael
2011-01-01
This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.
Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D
2018-05-30
NeuroQuant® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
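The traditional (non-Bayesian) FS-to-NQ transformation described above amounts to a simple least-squares calibration: regress NQ volumes on FS volumes, then map new FS values through the fitted line. A minimal sketch, with invented volume values (the Bayesian variant is not shown):

```python
# Hedged sketch: cross-method linear calibration by ordinary least squares.
# Volumes below are hypothetical, not from the study.

def linear_calibration(fs, nq):
    """Fit nq ~ a + b * fs and return (a, b)."""
    n = len(fs)
    mx, my = sum(fs) / n, sum(nq) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(fs, nq))
    sxx = sum((x - mx) ** 2 for x in fs)
    b = sxy / sxx
    return my - b * mx, b

def transform(fs_values, a, b):
    """Map FreeSurfer-scale volumes onto the NeuroQuant scale."""
    return [a + b * x for x in fs_values]
```

After calibration, intermethod differences (and hence effect sizes) shrink to whatever scatter remains around the fitted line, which is the mechanism the study exploits.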
Quantile Regression in the Study of Developmental Sciences
Petscher, Yaacov; Logan, Jessica A. R.
2014-01-01
Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of the outcome’s distribution. Using data from the High School and Beyond and U.S. Sustained Effects Study databases, quantile regression is demonstrated and contrasted with linear regression when considering models with: (a) one continuous predictor, (b) one dichotomous predictor, (c) a continuous and a dichotomous predictor, and (d) a longitudinal application. Results from each example exhibited the differential inferences which may be drawn using linear or quantile regression. PMID:24329596
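For case (b) above, a single dichotomous predictor, the quantile regression fit has a closed form: the intercept is the tau-th sample quantile of the reference group, and the slope is the difference between group quantiles. A sketch with invented scores (the general continuous-predictor case requires minimizing the pinball loss and is not shown):

```python
# Hedged sketch: quantile regression with one 0/1 predictor, which reduces
# to within-group sample quantiles. Scores below are hypothetical.

def quantile(xs, tau):
    """Linear-interpolation sample quantile."""
    s = sorted(xs)
    h = (len(s) - 1) * tau
    lo = int(h)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (h - lo) * (s[hi] - s[lo])

def quantile_regression_binary(groups, ys, tau):
    """Return (intercept, slope) of the tau-th conditional quantile."""
    y0 = [y for g, y in zip(groups, ys) if g == 0]
    y1 = [y for g, y in zip(groups, ys) if g == 1]
    q0 = quantile(y0, tau)
    return q0, quantile(y1, tau) - q0
```

Fitting at several values of tau shows how the group effect can differ across the outcome distribution, which is exactly the inference linear regression (a single mean effect) cannot provide.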
Definition of molecular determinants of prostate cancer cell bone extravasation.
Barthel, Steven R; Hays, Danielle L; Yazawa, Erika M; Opperman, Matthew; Walley, Kempland C; Nimrichter, Leonardo; Burdick, Monica M; Gillard, Bryan M; Moser, Michael T; Pantel, Klaus; Foster, Barbara A; Pienta, Kenneth J; Dimitroff, Charles J
2013-01-15
Advanced prostate cancer commonly metastasizes to bone, but transit of malignant cells across the bone marrow endothelium (BMEC) remains a poorly understood step in metastasis. Prostate cancer cells roll on E-selectin(+) BMEC through E-selectin ligand-binding interactions under shear flow, and prostate cancer cells exhibit firm adhesion to BMEC via β1, β4, and αVβ3 integrins in static assays. However, whether these discrete prostate cancer cell-BMEC adhesive contacts culminate in cooperative, step-wise transendothelial migration into bone is not known. Here, we describe how metastatic prostate cancer cells breach BMEC monolayers in a step-wise fashion under physiologic hemodynamic flow. Prostate cancer cells tethered and rolled on BMEC and then firmly adhered to and traversed BMEC via sequential dependence on E-selectin ligands and β1 and αVβ3 integrins. Expression analysis in human metastatic prostate cancer tissue revealed that β1 was markedly upregulated compared with expression of other β subunits. Prostate cancer cell breaching was regulated by Rac1 and Rap1 GTPases and, notably, did not require exogenous chemokines as β1, αVβ3, Rac1, and Rap1 were constitutively active. In homing studies, prostate cancer cell trafficking to murine femurs was dependent on E-selectin ligand, β1 integrin, and Rac1. Moreover, eliminating E-selectin ligand-synthesizing α1,3 fucosyltransferases in transgenic adenoma of mouse prostate mice dramatically reduced prostate cancer incidence. These results unify the requirement for E-selectin ligands, α1,3 fucosyltransferases, β1 and αVβ3 integrins, and Rac/Rap1 GTPases in mediating prostate cancer cell homing and entry into bone and offer new insight into the role of α1,3 fucosylation in prostate cancer development.
The interval testing procedure: A general framework for inference in functional data analysis.
Pini, Alessia; Vantini, Simone
2016-09-01
We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data. We show that ITP is provided with such a control. A simulation study comparing ITP with other testing procedures is reported. ITP is then applied to the analysis of hemodynamic features involved in cerebral aneurysm pathology. ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
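The Bland-Altman computation discussed above is short enough to sketch directly: the bias is the mean of the paired differences and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations. Paired assay values below are invented:

```python
# Hedged sketch: Bland-Altman bias and 95% limits of agreement for paired
# quantitative results from two assays. Data are hypothetical.

def bland_altman(a, b):
    """Return (bias, lower_loa, upper_loa) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A nonzero bias with narrow limits indicates a constant error between the assays; limits that widen with the measurement magnitude suggest a proportional error, which is where the regression-based checks in the record above complement this plot.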
Erdogan, Saffet
2009-10-01
The aim of the study is to describe the inter-province differences in traffic accidents and mortality on roads of Turkey. Two different risk indicators were used to evaluate the road safety performance of the provinces in Turkey. These indicators are the ratios between the number of persons killed in road traffic accidents (1) or the number of accidents (2) (numerators) and their exposure to traffic risk (denominator). Population and the number of registered motor vehicles in the provinces were used as denominators individually. Spatial analyses were performed on the mean annual death rate and the number of fatal accidents calculated for the period 2001-2006. Empirical Bayes smoothing was used to remove background noise from the raw death and accident rates, because some provinces are sparsely populated and have small numbers of accidents and deaths. Global and local spatial autocorrelation analyses were performed to show whether the provinces with high rates of deaths and accidents cluster or are located close together by chance. The spatial distribution of provinces with high rates of deaths and accidents was nonrandom; spatial autocorrelation analyses detected significant clustering (P<0.05). Regions with a high concentration of fatal accidents and deaths were located in the provinces containing the roads connecting the Istanbul, Ankara, and Antalya provinces. Accident and death rates were also modeled with independent variables such as the number of motor vehicles, length of roads, and so forth, using geographically weighted regression analysis with forward step-wise elimination. The level of statistical significance was taken as P<0.05. Large differences were found between provinces in the rates of deaths and accidents, depending on the denominator used.
The geographically weighted regression analyses gave significantly better predictions for both accident rates and death rates than did ordinary least squares regression, as indicated by adjusted R² values. Geographically weighted regression provided adjusted R² values of 0.89-0.99 for death and accident rates, compared with 0.88-0.95, respectively, for ordinary least squares regression. Geographically weighted regression has the potential to reveal local patterns in the spatial distribution of rates, which would be ignored by the ordinary least squares approach. The application of spatial analysis and modeling of accident statistics and death rates at the provincial level in Turkey will help to identify provinces with outstandingly high accident and death rates. This could support more efficient road safety management in Turkey.
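Geographically weighted regression, as used in the record above, fits a separate weighted regression at each location, with observations downweighted by distance from the target. A one-predictor sketch at a single target location, assuming a Gaussian distance kernel and invented coordinates and rates (the real study used several covariates and step-wise elimination):

```python
# Hedged sketch: one local fit of a geographically weighted regression.
# Weighted least squares of y on x, with weights decaying with distance
# from a target location. Bandwidth and data are hypothetical.
import math

def gwr_local_fit(coords, x, y, target, bandwidth):
    """Return the local (intercept, slope) at the target location."""
    w = [math.exp(-((cx - target[0]) ** 2 + (cy - target[1]) ** 2)
                  / (2 * bandwidth ** 2)) for cx, cy in coords]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    b = sxy / sxx
    return my - b * mx, b
```

Because distant observations receive near-zero weight, the fitted slope reflects only the local relationship, which is how GWR can reveal spatial variation that a single global OLS fit averages away.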
Hino, Kimihiro; Lee, Jung Su; Asami, Yasushi
2017-12-01
People's year-round interpersonal step count variations according to meteorological conditions are not fully understood, because complete year-round data from a sufficient sample of the general population are difficult to acquire. This study examined the associations between meteorological conditions and objectively measured step counts using year-round data collected from a large cohort (N = 24,625) in Yokohama, Japan from April 2015 to March 2016. Two-piece linear regression analysis was used to examine the associations between the monthly median daily step count and three meteorological indices (mean values of temperature, temperature-humidity index (THI), and net effective temperature (NET)). The number of steps per day peaked at temperatures between 19.4 and 20.7 °C. At lower temperatures, the increase in steps per day was between 46.4 and 52.5 steps per 1 °C increase. At temperatures higher than those at which step counts peaked, the decrease in steps per day was between 98.0 and 187.9 per 1 °C increase. Furthermore, these effects were more obvious in elderly than non-elderly persons in both sexes. A similar tendency was seen when using THI and NET instead of temperature. Among the three meteorological indices, the highest R² value with step counts was observed with THI in all four groups. Both high and low meteorological indices discourage people from walking, and higher values of the indices adversely affect step count more than lower values, particularly among the elderly. Among the three indices assessed, THI best explains the seasonal fluctuations in step counts.
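Two-piece linear regression, as applied above to step counts versus temperature, fits separate lines below and above a breakpoint chosen to minimize the total residual error. A minimal sketch with invented data; for brevity the two segments are fitted independently, without enforcing continuity at the break:

```python
# Hedged sketch: two-piece linear regression with a searched breakpoint.
# For each candidate split of the sorted data, fit a line on each side
# and keep the split minimizing the combined sum of squared errors.

def ols(x, y):
    """Simple least squares; returns (intercept, slope, sse)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    sse = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    return a, b, sse

def two_piece_fit(xs, ys):
    """Return (breakpoint_x, (a_left, b_left), (a_right, b_right))."""
    pts = sorted(zip(xs, ys))
    best = None
    for i in range(2, len(pts) - 1):  # at least two points on each side
        lx, ly = zip(*pts[:i])
        rx, ry = zip(*pts[i:])
        la, lb, lsse = ols(lx, ly)
        ra, rb, rsse = ols(rx, ry)
        if best is None or lsse + rsse < best[0]:
            best = (lsse + rsse, pts[i][0], (la, lb), (ra, rb))
    return best[1], best[2], best[3]
```

With a rising segment followed by a steeper falling one, the fit recovers both slopes and the peak, mirroring the study's finding that the decline above the optimum is steeper than the rise below it.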
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
NASA Astrophysics Data System (ADS)
Izat Rashed, Ghamgeen
2018-03-01
This paper presented a way of obtaining operating rules, defined per time step, for the management of a large reservoir with an associated peaking hydropower plant. The rules take the form of non-linear regression equations that link a decision variable (here, the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considered the Dokan hydroelectric development, KR-Iraq, for which operation data are available. It was shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model that minimizes the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The resulting operating rules allow efficient and safe management of the operation, provided the forecast of the inflow and of the energy demand for the next time step is known reasonably accurately.
Decision Making Processes and Outcomes
Hicks Patrick, Julie; Steele, Jenessa C.; Spencer, S. Melinda
2013-01-01
The primary aim of this study was to examine the contributions of individual characteristics and strategic processing to the prediction of decision quality. Data were provided by 176 adults, ages 18 to 93 years, who completed computerized decision-making vignettes and a battery of demographic and cognitive measures. We examined the relations among age, domain-specific experience, working memory, and three measures of strategic information search to the prediction of solution quality using a 4-step hierarchical linear regression analysis. Working memory and two measures of strategic processing uniquely contributed to the variance explained. Results are discussed in terms of potential advances to both theory and intervention efforts. PMID:24282638
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R²indiv or the Kendall's τ at the individual level, and the R²trial at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, the R²trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R²trial. The most common data simulation models described in the literature are based on: copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optionally adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model.
We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Modeling non-linear growth responses to temperature and hydrology in wetland trees
NASA Astrophysics Data System (ADS)
Keim, R.; Allen, S. T.
2016-12-01
Growth responses of wetland trees to flooding and climate variations are difficult to model because they depend on multiple, apparently interacting factors, but they are a critical link in hydrological control of wetland carbon budgets. To more generally understand tree growth responses to hydrological forcing, we modeled non-linear responses of tree-ring growth to flooding and climate at sub-annual time steps, using Vaganov-Shashkin response functions. We calibrated the model to six baldcypress tree-ring chronologies from two hydrologically distinct sites in southern Louisiana, and tested several hypotheses about plasticity in wetland tree responses to interacting environmental variables. The model outperformed traditional multiple linear regression. More importantly, optimized response parameters were generally similar among sites with varying hydrological conditions, suggesting generality to the functions. Model forms that included interacting responses to multiple forcing factors were more effective than single response functions, indicating that the principle of a single limiting factor does not hold in wetlands and that both climatic and hydrological variables must be considered in predicting responses to hydrological or climate change.
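The contrast the abstract draws, a single-limiting-factor form versus interacting responses, can be sketched with simple piecewise-linear response functions. The shapes and parameter values below are hypothetical illustrations, not the calibrated Vaganov-Shashkin model, which is considerably more elaborate.

```python
import numpy as np

def ramp(x, lo, opt1, opt2, hi):
    """Piecewise-linear response in [0, 1]: zero below lo and above hi,
    rising to 1 between opt1 and opt2 (Vaganov-Shashkin-style shape)."""
    x = np.asarray(x, dtype=float)
    r = np.zeros_like(x)
    up = (x > lo) & (x < opt1)
    r[up] = (x[up] - lo) / (opt1 - lo)
    r[(x >= opt1) & (x <= opt2)] = 1.0
    down = (x > opt2) & (x < hi)
    r[down] = (hi - x[down]) / (hi - opt2)
    return r

# Hypothetical forcing series and parameter values, for illustration only
temp = np.array([5.0, 15.0, 25.0, 35.0])   # deg C
water = np.array([0.1, 0.3, 0.6, 0.9])     # relative soil moisture

g_t = ramp(temp, 0.0, 10.0, 30.0, 40.0)
g_w = ramp(water, 0.05, 0.2, 0.8, 1.0)

# Single-limiting-factor form (minimum) vs an interacting (product) form
growth_min = np.minimum(g_t, g_w)
growth_prod = g_t * g_w
print("min:    ", growth_min)
print("product:", growth_prod)
```

Under the minimum rule only the most limiting factor matters at any time step; the multiplicative form lets a second stressor further reduce growth, which is the kind of interacting response the study found more effective.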
Linear Regression Links Transcriptomic Data and Cellular Raman Spectra.
Kobayashi-Kirschvink, Koseki J; Nakaoka, Hidenori; Oda, Arisa; Kamei, Ken-Ichiro F; Nosho, Kazuki; Fukushima, Hiroko; Kanesaki, Yu; Yajima, Shunsuke; Masaki, Haruhiko; Ohta, Kunihiro; Wakamoto, Yuichi
2018-06-08
Raman microscopy is an imaging technique that has been applied to assess molecular compositions of living cells to characterize cell types and states. However, owing to the diverse molecular species in cells and challenges of assigning peaks to specific molecules, it has not been clear how to interpret cellular Raman spectra. Here, we provide firm evidence that cellular Raman spectra and transcriptomic profiles of Schizosaccharomyces pombe and Escherichia coli can be computationally connected and thus interpreted. We find that the dimensions of high-dimensional Raman spectra and transcriptomes measured by RNA sequencing can be reduced and connected linearly through a shared low-dimensional subspace. Accordingly, we were able to predict global gene expression profiles by applying the calculated transformation matrix to Raman spectra, and vice versa. Highly expressed non-coding RNAs contributed to the Raman-transcriptome linear correspondence more significantly than mRNAs in S. pombe. This demonstration of correspondence between cellular Raman spectra and transcriptomes is a promising step toward establishing spectroscopic live-cell omics studies. Copyright © 2018 Elsevier Inc. All rights reserved.
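The idea of connecting two high-dimensional measurements through a shared low-dimensional subspace can be sketched as follows. The synthetic data, the planted latent state, the PCA reduction, and the plain least-squares link below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_wavenumbers, n_genes, k = 30, 200, 500, 5

# Synthetic data sharing a k-dimensional latent cell state (illustrative)
latent = rng.normal(size=(n_cells, k))
raman = latent @ rng.normal(size=(k, n_wavenumbers)) \
        + 0.1 * rng.normal(size=(n_cells, n_wavenumbers))
expr = latent @ rng.normal(size=(k, n_genes)) \
       + 0.1 * rng.normal(size=(n_cells, n_genes))

def pca_reduce(X, k):
    """Project centered rows of X onto the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

R_low, R_basis, R_mean = pca_reduce(raman, k)
E_low, E_basis, E_mean = pca_reduce(expr, k)

# Linear regression linking the two low-dimensional representations
B, *_ = np.linalg.lstsq(R_low, E_low, rcond=None)

# Predict expression profiles from Raman spectra via the shared subspace
expr_pred = (R_low @ B) @ E_basis + E_mean
corr = np.corrcoef(expr_pred.ravel(), expr.ravel())[0, 1]
print(f"prediction correlation: {corr:.2f}")
```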
Forecasting volatility with neural regression: a contribution to model adequacy.
Refenes, A N; Holt, W T
2001-01-01
Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
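The classical Durbin-Watson statistic that the paper generalizes can be computed directly from model residuals. This is a minimal sketch on simulated residuals (the neural-regression generalization and its influence-matrix-based distribution are not reproduced here); the AR(1) coefficient is an arbitrary illustrative choice.

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: near 2 for uncorrelated residuals,
    below 2 for positive serial correlation, above 2 for negative."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(3)
white = rng.normal(size=1000)  # serially uncorrelated residuals

# AR(1) residuals with positive serial correlation (rho = 0.7)
ar = np.empty(1000)
ar[0] = white[0]
for t in range(1, 1000):
    ar[t] = 0.7 * ar[t - 1] + white[t]

print(f"white-noise DW: {durbin_watson(white):.2f}")  # near 2
print(f"AR(1) DW:       {durbin_watson(ar):.2f}")     # well below 2
```

For an AR(1) process with coefficient rho, the statistic is approximately 2(1 - rho), which is why misspecification that leaves structure in the residuals pulls it away from 2.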
Kumar, K Vasanth; Sivanesan, S
2006-08-25
Pseudo-second-order kinetic expressions of Ho, Sobkowski and Czerwinski, Blanchard et al., and Ritchie were fitted to experimental kinetic data for the sorption of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho expressed similar ideas about the pseudo-second-order model, but with different assumptions. The best fit of the experimental data by Ho's expression under both linear and non-linear regression showed that it was a better kinetic expression than the other pseudo-second-order expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from Ho's expression and fitted to the Langmuir, Freundlich, and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo-isotherms. The best-fitting pseudo-isotherms were found to be the Langmuir and Redlich-Peterson isotherms; the Redlich-Peterson isotherm reduces to the Langmuir isotherm when its constant g equals unity.
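The linear-versus-non-linear fitting the abstract compares can be sketched for Ho's pseudo-second-order model, q_t = k·qe²·t / (1 + k·qe·t), whose linearized form is t/q_t = 1/(k·qe²) + t/qe. The kinetic data below are synthetic, generated from hypothetical parameters (qe_true, k_true) purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def ho_pso(t, qe, k):
    """Ho's pseudo-second-order model: q_t = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1 + k * qe * t)

# Hypothetical kinetic data (t in min, q in mg/g), with 1% noise
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)
qe_true, k_true = 50.0, 0.002
noise = 1 + 0.01 * np.random.default_rng(4).normal(size=t.size)
q = ho_pso(t, qe_true, k_true) * noise

# Linear method: fit the linearized form t/q = 1/(k*qe^2) + t/qe,
# so slope = 1/qe and intercept = 1/(k*qe^2)
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin = 1 / slope
k_lin = slope**2 / intercept

# Non-linear method: fit the rate equation directly
(qe_nl, k_nl), _ = curve_fit(ho_pso, t, q, p0=[q.max(), 1e-3])

print(f"linear:     qe = {qe_lin:.1f}, k = {k_lin:.5f}")
print(f"non-linear: qe = {qe_nl:.1f}, k = {k_nl:.5f}")
```

Because linearization transforms the error structure, the two methods generally return different parameter estimates from the same data, which is the comparison the study carries out across the various pseudo-second-order forms.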
Cross-correlation of WISE galaxies with the cosmic microwave background
NASA Astrophysics Data System (ADS)
Goto, Tomotsugu; Szapudi, István; Granett, Benjamin R.
2012-05-01
We estimated the cross-power spectra of a galaxy sample from the Wide-field Infrared Survey Explorer (WISE) survey with the 7-year Wilkinson Microwave Anisotropy Probe (WMAP) temperature anisotropy maps. A conservatively selected galaxy sample covers ~13 000 deg² with a median redshift of z = 0.15. Cross-power spectra show correlations between the two data sets with no discernible dependence on the WMAP Q, V, and W frequency bands. We interpret these results in terms of the integrated Sachs-Wolfe (ISW) effect: for the |b| > 20° sample at ℓ = 6-87, we measure the amplitude of the signal (normalized to 1 for the vanilla Λ cold dark matter expectation) to be 3.4 ± 1.1, a 3.1σ detection. We discuss other possibilities, but at face value the detection of the linear ISW effect in a flat universe is caused by large-scale decaying potentials, a sign of accelerated expansion driven by dark energy.