Sample records for standard multiple linear

  1. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    NASA Astrophysics Data System (ADS)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

In this paper, we propose a hybrid model which combines the multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 top-soil variates analyzed prior to planting of paddy at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using the multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
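
The two-step recipe in this abstract, cluster the yields with fuzzy c-means and then fit a regression per cluster, can be sketched in a few lines. The code below is a toy illustration under stated assumptions, not the paper's method: synthetic yields stand in for the MARDI data, one made-up soil covariate replaces the 20 variates, and a hard cluster assignment replaces fuzzy-weighted fitting.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means for 1-D data: returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # membership-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                  # standard FCM membership update
    return centers, u

# Two well-separated synthetic yield clusters around 3.0 and 7.0 (stand-in data).
rng = np.random.default_rng(1)
yield_t = np.concatenate([3.0 + 0.3 * rng.standard_normal(50),
                          7.0 + 0.3 * rng.standard_normal(50)])
soil = 0.5 * yield_t + 0.1 * rng.standard_normal(100)   # one toy soil covariate

centers, u = fuzzy_c_means(yield_t)
labels = u.argmax(axis=0)                  # hard assignment for simplicity

# Fit a separate linear regression within each cluster.
coefs = []
for k in range(2):
    X = np.column_stack([np.ones(labels.size), soil])[labels == k]
    b, *_ = np.linalg.lstsq(X, yield_t[labels == k], rcond=None)
    coefs.append(b)
print(np.sort(centers))
```

The cluster centers converge near the two yield modes; in the paper's setting, a per-cluster regression of this kind is what lowered the mean square error relative to a single global fit.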

  2. Optimal space communications techniques. [using digital and phase locked systems for signal processing

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires N² complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization-noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase-locked loop (PLL) system, consisting of a phase detector, a voltage-controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.

  3. A Constrained Linear Estimator for Multiple Regression

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.

    2010-01-01

    "Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…

  4. An improved null model for assessing the net effects of multiple stressors on communities.

    PubMed

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. 
Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
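
The difference between the community-property and compositional null models can be illustrated with a deliberately small numeric sketch (hypothetical abundances; species richness as the community property):

```python
import numpy as np

def richness(ab):
    """Community property: number of species with positive abundance."""
    return int((ab > 0).sum())

# Species-level abundances under control conditions (hypothetical numbers).
ctrl = np.array([10.0, 5.0, 2.0])
d_a = np.array([-6.0, -3.0, 0.0])   # per-species effect of stressor A
d_b = np.array([-6.0, -3.0, 0.0])   # per-species effect of stressor B

# "True" combined community: stressors act purely additively at the species
# level, with abundances floored at zero (a species cannot go negative).
both = np.clip(ctrl + d_a + d_b, 0, None)

# Community-property additive null: add richness *changes* at the community level.
r_ctrl = richness(ctrl)
r_a = richness(np.clip(ctrl + d_a, 0, None))
r_b = richness(np.clip(ctrl + d_b, 0, None))
community_null = r_a + r_b - r_ctrl

# Compositional null: predict each species' abundance additively, then aggregate.
compositional_null = richness(np.clip(ctrl + d_a + d_b, 0, None))

print(richness(both), community_null, compositional_null)  # 1 3 1
```

Neither single stressor removes a species, so the community-property null predicts no richness change (3 species), while the purely additive species-level outcome drops to 1 species; only the compositional null recovers it, so the community-property model would misread this additive outcome as a synergistic surprise.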

  5. The Geometry of Enhancement in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.

    2011-01-01

In linear multiple regression, "enhancement" is said to occur when R² = b′r > r′r, where b is a p × 1 vector of standardized regression coefficients and r is a p × 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1 then b ≅ r and…

  6. An Introduction to Multilinear Formula Score Theory. Measurement Series 84-4.

    ERIC Educational Resources Information Center

    Levine, Michael V.

    Formula score theory (FST) associates each multiple choice test with a linear operator and expresses all of the real functions of item response theory as linear combinations of the operator's eigenfunctions. Hard measurement problems can then often be reformulated as easier, standard mathematical problems. For example, the problem of estimating…

  7. Development of a technique for estimating noise covariances using multiple observers

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1988-01-01

    Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.

  8. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
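
The downward bias in naive standard errors with clustered data is easy to reproduce. The sketch below is numpy-only and uses animal-level means as a crude stand-in for a full mixed effects model (all numbers are simulated, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_neurons = 10, 20
sex = np.repeat([0.0, 1.0], n_animals // 2)        # animal-level predictor
animal_fx = 2.0 * rng.standard_normal(n_animals)   # strong intra-class correlation

# One Sholl-style outcome per neuron, clustered within animals.
y = np.repeat(sex + animal_fx, n_neurons) + rng.standard_normal(n_animals * n_neurons)

def slope_se(x, y):
    """OLS slope and its standard error for a single predictor."""
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - 2)
    return b[1], np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

# Simple linear model: treats all 200 neurons as if independent.
b_naive, se_naive = slope_se(np.repeat(sex, n_neurons), y)
# Cluster-aware stand-in: analyze one mean per animal.
b_agg, se_agg = slope_se(sex, y.reshape(n_animals, n_neurons).mean(axis=1))

print(se_naive < se_agg)   # the naive standard error is biased downward
```

With a balanced design the two slope estimates coincide, but the neuron-level standard error is several times too small, which is exactly the erroneous-rejection mechanism the abstract describes.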

  9. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
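
Blocked sparse storage of the kind described here is available off the shelf in SciPy's BSR format. The following sketch (with an arbitrary 4x4 block size, not the paper's optimal 40-100 basis functions) checks a blocked sparse product against the dense result:

```python
import numpy as np
from scipy.sparse import bsr_matrix

rng = np.random.default_rng(0)
n, bs = 64, 4
dense = np.zeros((n, n))
for _ in range(40):                        # scatter a few dense "multiatom" blocks
    i, j = rng.integers(0, n // bs, 2) * bs
    dense[i:i+bs, j:j+bs] = rng.standard_normal((bs, bs))

A = bsr_matrix(dense, blocksize=(bs, bs))  # blocked storage -> BLAS-friendly matmul
C = (A @ A).toarray()

print(np.allclose(C, dense @ dense))       # True
```

As in the paper, the trade-off is that any stored block is treated as dense, so some sparsity inside blocks is sacrificed in exchange for multiplying contiguous blocks with optimized BLAS kernels.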

  10. Analysis of Slope Limiters on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Berger, Marsha; Aftosmis, Michael J.

    2005-01-01

This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which is linearity preserving for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
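
The linearity-preservation issue can already be seen in one dimension: a minmod limiter written in its uniform-grid (undivided-difference) form clips the slope of an exactly linear solution on a stretched mesh, while the divided-difference form recovers it. A small sketch, with a hypothetical 1.5 stretching ratio:

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: smaller-magnitude argument if signs agree, else 0."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

# Stretched 1-D mesh (growth ratio 1.5) and exactly linear data u = 2x + 1.
x = np.cumsum(np.array([0.0, 1.0, 1.5, 2.25, 3.375]))
u = 2.0 * x + 1.0
i = 2                                      # an interior cell

# Uniform-grid form: minmod of differences over one nominal spacing h.
h = (x[i+1] - x[i-1]) / 2.0
slope_uniform_form = minmod((u[i] - u[i-1]) / h, (u[i+1] - u[i]) / h)

# Linearity-preserving form: minmod of one-sided *divided* differences.
slope_divided = minmod((u[i] - u[i-1]) / (x[i] - x[i-1]),
                       (u[i+1] - u[i]) / (x[i+1] - x[i]))

print(float(slope_uniform_form), float(slope_divided))  # 1.6 2.0
```

On the stretched mesh the uniform-grid form underestimates the exact slope of 2.0, which is the accuracy loss the paper attributes to limiters in standard use on irregular grids.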

  11. Adaptive receiver structures for asynchronous CDMA systems

    NASA Astrophysics Data System (ADS)

    Rapajic, Predrag B.; Vucetic, Branka S.

    1994-05-01

Adaptive linear and decision feedback receiver structures for coherent demodulation in asynchronous code division multiple access (CDMA) systems are considered. It is assumed that the adaptive receiver has no knowledge of the signature waveforms and timing of other users. The receiver is trained by a known training sequence prior to data transmission and continuously adjusted by an adaptive algorithm during data transmission. The proposed linear receiver is as simple as a standard single-user detector receiver consisting of a matched filter with constant coefficients, but achieves essential advantages with respect to timing recovery, multiple access interference elimination, near/far effect, narrowband and frequency-selective fading interference suppression, and user privacy. An adaptive centralized decision feedback receiver has the same advantages as the linear receiver but, in addition, achieves a further improvement in multiple access interference cancellation at the expense of higher complexity. The proposed receiver structures are tested by simulation over a channel with multipath propagation, multiple access interference, narrowband interference, and additive white Gaussian noise.

  12. Multiple imputation of covariates by fully conditional specification: Accommodating the substantive model

    PubMed Central

    Seaman, Shaun R; White, Ian R; Carpenter, James R

    2015-01-01

    Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. PMID:24525487

  13. Impact of Texas high school science teacher credentials on student performance in high school science

    NASA Astrophysics Data System (ADS)

    George, Anna Ray Bayless

A study was conducted to determine the relationship between the credentials held by science teachers who taught at a school that administered the Science Texas Assessment of Knowledge and Skills (Science TAKS), the state standardized exam in science, at grade 11, and student performance on that exam. Years of teaching experience, teacher certification type(s), highest degree level held, teacher and school demographic information, and the percentage of students who met the passing standard on the Science TAKS were obtained through a public records request to the Texas Education Agency (TEA) and the State Board for Educator Certification (SBEC). Analysis was performed through the use of canonical correlation analysis and multiple linear regression analysis. The results of the multiple linear regression analysis indicate that a larger percentage of students met the passing standard on the Science TAKS at schools in which a large portion of the high school science teachers held post-baccalaureate degrees, elementary and physical science certifications, and had 11-20 years of teaching experience.

  14. Nonparametric Bayesian Multiple Imputation for Incomplete Categorical Variables in Large-Scale Assessment Surveys

    ERIC Educational Resources Information Center

    Si, Yajuan; Reiter, Jerome P.

    2013-01-01

    In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…

  15. Investigating Mathematics Students' Use of Multiple Representations when Solving Linear Equations with One Unknown

    ERIC Educational Resources Information Center

    Beyranevand, Matthew L.

    2010-01-01

    Although it is difficult to find any current literature that does not encourage use of multiple representations in mathematics classrooms, there has been very limited research that compared such practice to student achievement level on standardized tests. This study examined the associations between students' achievement levels and their (a)…

  16. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
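
The three steps listed here can be sketched without SPSS. The snippet below is a rough numpy analogue that uses Pratt's measure (beta_j times r_j) as the relative-importance index; that choice is an assumption for illustration, not a statement about FIRE's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.standard_normal(n)  # 2 real predictors

# (1) Split the data for cross-validation.
idx = rng.permutation(n)
train, test = idx[:150], idx[150:]

# (2) Rank predictors by Pratt's importance measure beta_j * r_j (standardized).
Z = (X[train] - X[train].mean(0)) / X[train].std(0)
zy = (y[train] - y[train].mean()) / y[train].std()
beta, *_ = np.linalg.lstsq(Z, zy, rcond=None)
r = Z.T @ zy / len(train)
importance = beta * r                      # contributions sum to roughly R^2
keep = np.argsort(importance)[::-1][:2]

# (3) Assess the chosen model on held-out data.
Xk = np.column_stack([np.ones(len(train)), X[train][:, keep]])
b, *_ = np.linalg.lstsq(Xk, y[train], rcond=None)
pred = np.column_stack([np.ones(len(test)), X[test][:, keep]]) @ b
r2 = 1 - ((y[test] - pred) ** 2).sum() / ((y[test] - y[test].mean()) ** 2).sum()
print(sorted(keep.tolist()), round(r2, 2))
```

On this synthetic example the procedure retains the two genuinely predictive variables and validates well out of sample.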

  17. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

Two exercises are proposed to illustrate simple linear regression. The first is based on Galton's famous data set on heredity. We use the lm R command and obtain coefficient estimates, the residual standard error, R², residuals… In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
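
The same exercise translates directly to Python. The sketch below fits a Galton-style simple regression in closed form on synthetic regression-to-the-mean data (the real Galton data set is not reproduced here):

```python
import numpy as np

# Synthetic Galton-style data: child height regresses toward the mean.
rng = np.random.default_rng(0)
parent = 68 + 2.0 * rng.standard_normal(300)
child = 68 + 0.65 * (parent - 68) + 1.5 * rng.standard_normal(300)

# Closed-form least-squares estimates (what R's `lm(child ~ parent)` returns).
slope = np.cov(parent, child)[0, 1] / np.var(parent, ddof=1)
intercept = child.mean() - slope * parent.mean()
resid = child - (intercept + slope * parent)
s = np.sqrt((resid ** 2).sum() / (len(child) - 2))   # residual standard error
r2 = 1 - (resid ** 2).sum() / ((child - child.mean()) ** 2).sum()
print(round(slope, 2), round(r2, 2))
```

The fitted slope lands near the generating value of 0.65, i.e. below 1, which is the regression-to-the-mean effect Galton's data are famous for.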

  18. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
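
The model-selection logic described here (keep the streamflow term only if it improves on the turbidity-only model) can be sketched as follows. The data are synthetic and log-linear, and a plain residual mean square stands in for the report's MSPE criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
turb = 10 ** rng.uniform(1, 3, n)       # turbidity (hypothetical units)
flow = 10 ** rng.uniform(1, 2.5, n)     # streamflow (hypothetical units)
# Synthetic suspended-sediment concentration depends on both predictors.
log_ssc = 0.9 * np.log10(turb) + 0.3 * np.log10(flow) + 0.05 * rng.standard_normal(n)

def fit(X, y):
    """OLS fit returning coefficients and residual mean square."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ b
    return b, resid @ resid / (len(y) - X1.shape[1])

_, mse_simple = fit(np.log10(turb)[:, None], log_ssc)   # turbidity only
_, mse_multi = fit(np.column_stack([np.log10(turb), np.log10(flow)]),
                   log_ssc)                             # turbidity + streamflow

# Retain the streamflow term only if it clearly reduces model error.
print(mse_multi < mse_simple)
```

Because the synthetic data genuinely depend on streamflow, the multiple regression wins here; with streamflow-independent data the comparison would favor the simple model, mirroring the guideline's decision rule.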

  19. Linearity-Preserving Limiters on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Berger, Marsha; Aftosmis, Michael; Murman, Scott

    2004-01-01

This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. We note that on non-uniform grids the scalar formulation in standard use today sacrifices k-exactness, even for linear solutions, impacting both accuracy and convergence. We rewrite some well-known limiters in a way that highlights their underlying symmetry, and use this to examine both traditional and novel limiter formulations. A consistent method of handling stretched meshes is developed, as is a new directional formulation in multiple dimensions for irregular grids. Results are presented demonstrating improved accuracy and convergence using a combination of model problems and complex three-dimensional examples.

  20. Spacecraft nonlinear control

    NASA Technical Reports Server (NTRS)

    Sheen, Jyh-Jong; Bishop, Robert H.

    1992-01-01

    The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.

  1. Linearized inversion of multiple scattering seismic energy

    NASA Astrophysics Data System (ADS)

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad

    2014-05-01

Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. Thus, imaging seismic data under the single-scattering assumption does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resultant image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model and is heavily masked by migration artifacts and the source-wavelet fingerprint that needs to be properly deconvolved. Hence, we proposed a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-square image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. Then, we posed the problem of imaging double-scattering energy as a least-square minimization problem that requires solving the normal equation of the following form: GᵀGv = Gᵀd, (1) where G is a linearized forward modeling operator that predicts double-scattered seismic data, and Gᵀ is the linearized adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system; hence, we used a quasi-Newton optimization technique to find the least-square minimizer.
In this approach, an estimate of the Hessian matrix that contains curvature information is modified at every iteration by a low-rank update based on gradient changes at every step. At each iteration, the data residual is imaged using Gᵀ to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology in properly delineating the vertical fault plane and giving better amplitude information than the standard migrated image obtained using the adjoint operator that takes internal multiples into account. Thus, least-square imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by the superposition of the least-square solution based on the single-scattering assumption and the least-square solution based on the double-scattering assumption.
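
As a minimal stand-in for the optimization step, the sketch below solves the normal equations with conjugate gradients rather than the quasi-Newton scheme the authors used, and a small random matrix replaces the Born modeling operator G:

```python
import numpy as np

def cg_normal(G, d, iters=100, tol=1e-12):
    """Solve the normal equations G^T G v = G^T d with conjugate gradients."""
    v = np.zeros(G.shape[1])
    r = G.T @ d                 # residual of the normal equations at v = 0
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        Gp = G.T @ (G @ p)      # apply G^T G without forming it explicitly
        alpha = rr / (p @ Gp)
        v += alpha * p
        r -= alpha * Gp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return v

rng = np.random.default_rng(0)
G = rng.standard_normal((60, 20))   # toy stand-in for the modeling operator
v_true = rng.standard_normal(20)
d = G @ v_true + 0.01 * rng.standard_normal(60)

v = cg_normal(G, d)
print(np.allclose(v, np.linalg.lstsq(G, d, rcond=None)[0], atol=1e-6))  # True
```

Applying GᵀG only through matrix-vector products is what makes schemes like this usable at seismic scale, where G is never formed as an explicit matrix.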

  2. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    NASA Astrophysics Data System (ADS)

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, the accuracy of the pre-concentration step and the contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. The empirical models proved to efficiently reduce the interferences occurring in the analysis of real samples, yielding better accuracy than the other calibration methods.

  3. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...

  4. Suppression Situations in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  5. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
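
The closed-form variance formula alluded to here, Var(b_x) = sigma^2 / (n * var(x) * (1 - R2_x)), where R2_x measures how well the other covariates predict x, turns directly into a back-of-envelope sample size helper (a hypothetical planning function, not from the article):

```python
import numpy as np

def n_needed(sigma, sd_x, r2_x, target_se):
    """Smallest n giving SE(b_x) <= target_se under the closed-form
    variance formula Var(b_x) = sigma^2 / (n * var(x) * (1 - R2_x))."""
    return int(np.ceil((sigma / (target_se * sd_x)) ** 2 / (1 - r2_x)))

# Uncorrelated exposure: residual sd 10, sd(x) = 2, target SE of 1.
print(n_needed(sigma=10.0, sd_x=2.0, r2_x=0.0, target_se=1.0))  # 25
# Same target when confounders explain half of x's variance.
print(n_needed(sigma=10.0, sd_x=2.0, r2_x=0.5, target_se=1.0))  # 50
```

The doubling from 25 to 50 shows why sample size depends on the correlation structure of the predictors, not just on a fixed subjects-per-variable ratio.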

  6. Multiple internal standard normalization for improving HS-SPME-GC-MS quantitation in virgin olive oil volatile organic compounds (VOO-VOCs) profile.

    PubMed

    Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca

    2017-04-01

The commercial value of virgin olive oils (VOOs) strongly depends on their classification, which is also based on the aroma of the oils, usually evaluated by a panel test. Nowadays, a reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes; and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based on the external standard method, or on a single internal standard (ISTD) for data normalization, may be troublesome. In this work a multiple internal standard normalization is proposed to overcome these problems and improve the quantitation of VOO-VOCs. As many as 11 ISTDs were used for the quantitation of 71 VOCs. For each VOC the most suitable ISTD was selected, and good linearity over a wide calibration range was obtained. For all compounds except E-2-hexenal, the linear calibration range was narrower without an ISTD, or with an unsuitable one, than with a suitable ISTD, confirming the usefulness of multiple internal standard normalization for the correct quantitation of the VOC profile in VOOs. The method was validated for 71 VOCs, and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis in the evaluation of positive and negative VOO attributes. Copyright © 2017 Elsevier B.V. All rights reserved.
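
The core idea, scoring each candidate ISTD by the linearity of the analyte-to-ISTD area-ratio calibration and keeping the best, can be sketched with made-up detector areas (all names and numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])    # spiked levels

# Hypothetical peak areas: the analyte drifts run-to-run together with ISTD "B".
drift = 1 + 0.3 * rng.standard_normal(conc.size)
analyte = 100 * conc * drift
istd_a = 50 * (1 + 0.3 * rng.standard_normal(conc.size))   # drifts independently
istd_b = 80 * drift                                        # shares the drift

def linearity(ratio, conc):
    """R^2 of the (area ratio vs concentration) calibration line."""
    r = np.corrcoef(ratio, conc)[0, 1]
    return r * r

scores = {name: linearity(analyte / istd, conc)
          for name, istd in [("A", istd_a), ("B", istd_b)]}
best = max(scores, key=scores.get)
print(best)   # the ISTD sharing the analyte's drift gives the better calibration
```

Normalizing by the ISTD that co-varies with the analyte cancels the run-to-run drift, which is why per-analyte ISTD selection widens the usable linear range.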

  7. Estimation of standard liver volume in Chinese adult living donors.

    PubMed

    Fu-Gui, L; Lu-Nan, Y; Bo, L; Yong, Z; Tian-Fu, W; Ming-Qing, X; Wen-Tao, W; Zhe-Yu, C

    2009-12-01

To determine a formula predicting the standard liver volume based on body surface area (BSA) or body weight in Chinese adults. A total of 115 consecutive right-lobe living donors not including the middle hepatic vein underwent right hemi-hepatectomy. No organs were used from prisoners, and no subjects were prisoners. Donor anthropometric data including age, gender, body weight, and body height were recorded prospectively. The weights and volumes of the right-lobe liver grafts were measured at the back table. Liver weights and volumes were calculated from the right-lobe graft weight and volume obtained at the back table, divided by the proportion of the right lobe on computed tomography. By simple linear regression analysis and stepwise multiple linear regression analysis, we correlated calculated liver volume and body height, body weight, or body surface area. The subjects had a mean age of 35.97 ± 9.6 years, and a female-to-male ratio of 60:55. The mean volume of the right lobe was 727.47 ± 136.17 mL, occupying 55.59% ± 6.70% of the whole liver by computed tomography. The volume of the right lobe was 581.73 ± 96.137 mL, and the estimated liver volume was 1053.08 ± 167.56 mL. Females of the same body weight showed a slightly lower liver weight. By simple linear regression analysis and stepwise multiple linear regression analysis, a formula was derived based on body weight. All formulae except the Hong Kong formula overestimated liver volume compared to this formula. The formula of standard liver volume, SLV (mL) = 11.508 × body weight (kg) + 334.024, may be applied to estimate liver volumes in Chinese adults.
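
The proposed formula is a one-liner; a worked example for a hypothetical 60 kg donor:

```python
# Reported formula: SLV (mL) = 11.508 * body weight (kg) + 334.024
def standard_liver_volume(weight_kg):
    return 11.508 * weight_kg + 334.024

print(round(standard_liver_volume(60), 1))  # 1024.5
```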

  8. Development of quantitative screen for 1550 chemicals with GC-MS.

    PubMed

    Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A

    2018-05-01

    With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.

  9. Multiple regression for physiological data analysis: the problem of multicollinearity.

    PubMed

    Slinker, B K; Glantz, S A

    1985-07-01

    Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
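    The failure mode described here is easy to check for. A minimal pure-Python sketch using the two-predictor variance inflation factor, VIF = 1/(1 - r²), where r is the correlation between the predictors (data below are illustrative, not from the paper):

```python
# Sketch of detecting multicollinearity between two predictors via the
# variance inflation factor; VIF = 1 / (1 - r^2) in the two-predictor case.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(x1, x2):
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)

# Two predictors that change almost in parallel (highly correlated):
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [1.1, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 7.8]
print(vif_two_predictors(x1, x2) > 10)  # True: exceeds a common rule of thumb
```

    A VIF far above 10 signals the redundancy of information the abstract describes: parameter estimates become unstable, with inflated standard errors.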

  10. Revisiting Mr. Tall and Mr. Short

    ERIC Educational Resources Information Center

    Riehl, Suzanne M.; Steinthorsdottir, Olof Bjorg

    2014-01-01

    Ratio, rate, and proportion are central ideas in the Common Core State Standards (CCSS) for middle-grades mathematics (CCSSI 2010). These ideas closely connect to themes in earlier grades (pattern building, multiplicative reasoning, rational number concepts) and are the foundation for understanding linear functions as well as many high school…

  11. A comparison of multiple imputation methods for handling missing values in longitudinal data in the presence of a time-varying covariate with a non-linear association with time: a simulation study.

    PubMed

    De Silva, Anurika Priyanjali; Moreno-Betancur, Margarita; De Livera, Alysha Madhu; Lee, Katherine Jane; Simpson, Julie Anne

    2017-07-25

    Missing data is a common problem in epidemiological studies, and is particularly prominent in longitudinal data, which involve multiple waves of data collection. Traditional multiple imputation (MI) methods (fully conditional specification (FCS) and multivariate normal imputation (MVNI)) treat repeated measurements of the same time-dependent variable as just another 'distinct' variable for imputation and therefore do not make the most of the longitudinal structure of the data. Only a few studies have explored extensions to the standard approaches to account for the temporal structure of longitudinal data. One suggestion is the two-fold fully conditional specification (two-fold FCS) algorithm, which restricts the imputation of a time-dependent variable to time blocks where the imputation model includes measurements taken at the specified and adjacent times. To date, no study has investigated the performance of two-fold FCS and standard MI methods for handling missing data in a time-varying covariate with a non-linear trajectory over time - a commonly encountered scenario in epidemiological studies. We simulated 1000 datasets of 5000 individuals based on the Longitudinal Study of Australian Children (LSAC). Three missing data mechanisms - missing completely at random (MCAR) and weak and strong missing at random (MAR) scenarios - were used to impose missingness on body mass index (BMI)-for-age z-scores, a continuous time-varying exposure variable with a non-linear trajectory over time. We evaluated the performance of FCS, MVNI, and two-fold FCS for handling up to 50% of missing data when assessing the association between childhood obesity and sleep problems. The standard two-fold FCS produced slightly more biased and less precise estimates than FCS and MVNI. We observed slight improvements in bias and precision when using a time window width of two for the two-fold FCS algorithm compared to the standard width of one. 
We recommend the use of FCS or MVNI in a similar longitudinal setting, and when encountering convergence issues due to a large number of time points or variables with missing values, the two-fold FCS with exploration of a suitable time window.
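    The time-block restriction of the two-fold FCS algorithm can be sketched as a simple window rule (illustrative only; real implementations fit a full imputation model at each block):

```python
# Sketch of the two-fold FCS idea: when imputing a time-varying variable at
# time t, only measurements within +/- `window` time blocks enter the
# imputation model (window = 1 is the standard two-fold FCS width).
def imputation_times(t, n_times, window=1):
    """Time points whose measurements inform the imputation at time t."""
    return [s for s in range(n_times) if abs(s - t) <= window]

print(imputation_times(3, 7, window=1))  # [2, 3, 4]
print(imputation_times(3, 7, window=2))  # [1, 2, 3, 4, 5]
```

    Widening the window (as the authors explored with width two) admits more adjacent measurements into each block's model, trading convergence ease for fidelity to the longitudinal structure.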

  12. Analysis of Binary Adherence Data in the Setting of Polypharmacy: A Comparison of Different Approaches

    PubMed Central

    Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.

    2009-01-01

    Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
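    One reason naive logistic regression struggles with such data is overdispersion from within-person clustering of adherence. A hedged sketch of a quick check, with hypothetical counts (not data from the study):

```python
# Quick overdispersion check (one reason naive logistic regression fails here):
# compare the observed variance of per-person adherent-medication counts with
# the variance a single pooled binomial would imply. Counts are hypothetical.
def overdispersion_ratio(successes, trials):
    n = len(successes)
    p = sum(successes) / sum(trials)            # pooled adherence proportion
    obs_var = sum((s - p * t) ** 2 for s, t in zip(successes, trials)) / n
    bin_var = sum(p * (1 - p) * t for t in trials) / n  # binomial expectation
    return obs_var / bin_var

adherent = [8, 1, 7, 0, 6, 2, 8, 1]   # adherent meds per person
meds =     [8, 8, 8, 8, 8, 8, 8, 8]   # meds taken per person
print(overdispersion_ratio(adherent, meds) > 1)  # True: clustered data
```

    A ratio well above 1 indicates the within-person correlation that beta-binomial regression and GEE account for and that a naive logistic model ignores.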

  13. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043

  14. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M; Abazeed, M; Woody, N

    Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009–2013 we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3 ± 10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R²=0.60) from GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R²=0.60) from the histogram-based features were statistically significantly correlated with the time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.

  15. Overcoming Matrix Effects in a Complex Sample: Analysis of Multiple Elements in Multivitamins by Atomic Absorption Spectroscopy

    ERIC Educational Resources Information Center

    Arnold, Randy J.; Arndt, Brett; Blaser, Emilia; Blosser, Chris; Caulton, Dana; Chung, Won Sog; Fiorenza, Garrett; Heath, Wyatt; Jacobs, Alex; Kahng, Eunice; Koh, Eun; Le, Thao; Mandla, Kyle; McCory, Chelsey; Newman, Laura; Pithadia, Amit; Reckelhoff, Anna; Rheinhardt, Joseph; Skljarevski, Sonja; Stuart, Jordyn; Taylor, Cassie; Thomas, Scott; Tse, Kyle; Wall, Rachel; Warkentien, Chad

    2011-01-01

    A multivitamin tablet and liquid are analyzed for the elements calcium, magnesium, iron, zinc, copper, and manganese using atomic absorption spectrometry. Linear calibration and standard addition are used for all elements except calcium, allowing for an estimate of the matrix effects encountered for this complex sample. Sample preparation using…

  16. Comparison of two-concentration with multi-concentration linear regressions: Retrospective data analysis of multiple regulated LC-MS bioanalytical projects.

    PubMed

    Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi

    2013-09-01

    Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). 
    Furthermore, examples are given of how to evaluate the linearity over the entire concentration range when only two concentration levels are used for linear regression. To conclude, two-concentration linear regression is accurate and robust enough for routine use in regulated LC-MS bioanalysis and it significantly saves time and cost as well. Copyright © 2013 Elsevier B.V. All rights reserved.
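    The core comparison can be sketched in a few lines: a two-point fit through the lowest and highest standards versus ordinary least squares through all levels (illustrative data, perfectly linear by construction, so the two fits coincide):

```python
# Sketch comparing two-concentration linear calibration (lowest and highest
# standards only) with ordinary least squares through all standard levels.
def two_point_fit(conc, resp):
    slope = (resp[-1] - resp[0]) / (conc[-1] - conc[0])
    return slope, resp[0] - slope * conc[0]

def ols_fit(conc, resp):
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, resp)) / \
            sum((x - mx) ** 2 for x in conc)
    return slope, my - slope * mx

# Hypothetical 8-level calibration with a perfectly linear response:
conc = [1, 2, 5, 10, 20, 50, 100, 200]
resp = [2.0 * c + 0.5 for c in conc]
print(two_point_fit(conc, resp))  # (2.0, 0.5)
print(ols_fit(conc, resp))        # (2.0, 0.5)
```

    With real (noisy) data the two fits differ slightly; the paper's retrospective analysis quantifies how small those differences are in practice.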

  17. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  18. Cooling in the single-photon strong-coupling regime of cavity optomechanics

    NASA Astrophysics Data System (ADS)

    Nunnenkamp, A.; Børkje, K.; Girvin, S. M.

    2012-05-01

    In this Rapid Communication we discuss how red-sideband cooling is modified in the single-photon strong-coupling regime of cavity optomechanics where the radiation pressure of a single photon displaces the mechanical oscillator by more than its zero-point uncertainty. Using Fermi's golden rule we calculate the transition rates induced by the optical drive without linearizing the optomechanical interaction. In the resolved-sideband limit we find multiple-phonon cooling resonances for strong single-photon coupling that lead to nonthermal steady states including the possibility of phonon antibunching. Our study generalizes the standard linear cooling theory.

  19. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses.

    PubMed

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to be able to clearly identify the samples based on the difference in the quantitative content of all the chemical components. In addition, the fingerprint analysis was supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation detected immediately and the composition similarities have been calculated, LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard.
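    A hedged sketch of the qualitative/quantitative distinction the abstract draws: a cosine-type similarity is blind to the overall amount of material, while scaling by relative total content is not. This illustrates the idea only; it is not the paper's exact LQLS/LQTS formulas:

```python
# Illustrative (not the paper's) similarity measures for a chromatographic
# fingerprint vector against a reference fingerprint.
def qualitative_similarity(sample, reference):
    """Cosine similarity: sensitive to profile shape, blind to total amount."""
    dot = sum(a * b for a, b in zip(sample, reference))
    na = sum(a * a for a in sample) ** 0.5
    nb = sum(b * b for b in reference) ** 0.5
    return dot / (na * nb)

def quantitative_similarity(sample, reference):
    """Shape similarity penalized by the relative total content."""
    ratio = sum(sample) / sum(reference)
    return qualitative_similarity(sample, reference) * min(ratio, 1 / ratio)

ref = [10.0, 5.0, 2.0, 1.0]
half = [5.0, 2.5, 1.0, 0.5]      # same profile, half the content
print(round(qualitative_similarity(half, ref), 6))   # 1.0 -- cannot tell apart
print(round(quantitative_similarity(half, ref), 6))  # 0.5 -- flags the gap
```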

  20. Large Spatial and Temporal Separations of Cause and Effect in Policy Making - Dealing with Non-linear Effects

    NASA Astrophysics Data System (ADS)

    McCaskill, John

    There can be large spatial and temporal separation of cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to see and plan for the probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that only seem improbable due to the issue of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects in complex systems. This chapter highlights those problem issues and offers insights to the usefulness of ABM in dealing with non-linear effects in complex policy making environments.

  1. Determination of water depth with high-resolution satellite imagery over variable bottom types

    USGS Publications Warehouse

    Stumpf, Richard P.; Holderied, Kristine; Sinclair, Mark

    2003-01-01

    A standard algorithm for determining depth in clear water from passive sensors exists; but it requires tuning of five parameters and does not retrieve depths where the bottom has an extremely low albedo. To address these issues, we developed an empirical solution using a ratio of reflectances that has only two tunable parameters and can be applied to low-albedo features. The two algorithms--the standard linear transform and the new ratio transform--were compared through analysis of IKONOS satellite imagery against lidar bathymetry. The coefficients for the ratio algorithm were tuned manually to a few depths from a nautical chart, yet performed as well as the linear algorithm tuned using multiple linear regression against the lidar. Both algorithms compensate for variable bottom type and albedo (sand, pavement, algae, coral) and retrieve bathymetry in water depths of less than 10-15 m. However, the linear transform does not distinguish depths >15 m and is more subject to variability across the studied atolls. The ratio transform can, in clear water, retrieve depths in >25 m of water and shows greater stability between different areas. It also performs slightly better in scattering turbidity than the linear transform. The ratio algorithm is somewhat noisier and cannot always adequately resolve fine morphology (structures smaller than 4-5 pixels) in water depths >15-20 m. In general, the ratio transform is more robust than the linear transform.
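    A minimal sketch of the two-parameter ratio transform (depth proportional to the ratio of log-scaled reflectances in two bands). The coefficients below are illustrative placeholders, not the tuned values from the study:

```python
import math

# Sketch of the ratio transform: depth varies with the ratio of log-scaled
# water reflectances in two bands. m1 and m0 are the two tunable parameters;
# n is a fixed scaling constant keeping both logarithms positive.
# All numeric values here are illustrative assumptions.
def ratio_depth(r_blue, r_green, m1=60.0, m0=55.0, n=1000.0):
    return m1 * math.log(n * r_blue) / math.log(n * r_green) - m0

# Green light attenuates faster with depth than blue, so deeper water
# raises the blue/green log ratio and hence the retrieved depth:
shallow = ratio_depth(0.12, 0.10)
deeper = ratio_depth(0.12, 0.05)
print(deeper > shallow)  # True
```

    Because only m1 and m0 must be tuned (versus five parameters for the standard linear transform), a few charted depths suffice for calibration, as the abstract notes.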

  2. Analysis of Sequence Data Under Multivariate Trait-Dependent Sampling.

    PubMed

    Tao, Ran; Zeng, Donglin; Franceschini, Nora; North, Kari E; Boerwinkle, Eric; Lin, Dan-Yu

    2015-06-01

    High-throughput DNA sequencing allows for the genotyping of common and rare variants for genetic association studies. At the present time and for the foreseeable future, it is not economically feasible to sequence all individuals in a large cohort. A cost-effective strategy is to sequence those individuals with extreme values of a quantitative trait. We consider the design under which the sampling depends on multiple quantitative traits. Under such trait-dependent sampling, standard linear regression analysis can result in bias of parameter estimation, inflation of type I error, and loss of power. We construct a likelihood function that properly reflects the sampling mechanism and utilizes all available data. We implement a computationally efficient EM algorithm and establish the theoretical properties of the resulting maximum likelihood estimators. Our methods can be used to perform separate inference on each trait or simultaneous inference on multiple traits. We pay special attention to gene-level association tests for rare variants. We demonstrate the superiority of the proposed methods over standard linear regression through extensive simulation studies. We provide applications to the Cohorts for Heart and Aging Research in Genomic Epidemiology Targeted Sequencing Study and the National Heart, Lung, and Blood Institute Exome Sequencing Project.

  3. Toward customer-centric organizational science: A common language effect size indicator for multiple linear regressions and regressions with higher-order terms.

    PubMed

    Krasikova, Dina V; Le, Huy; Bachura, Eric

    2018-06-01

    To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
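    For context, the classic common language effect size of McGraw and Wong, which the proposed CLβ index generalizes to regression settings, can be computed directly. This is a sketch of the classic index, not the paper's CLβ formula:

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Classic common language effect size: the probability that a random draw
# from one normal population exceeds a random draw from another.
def common_language_es(mean1, mean2, sd1, sd2):
    return normal_cdf((mean1 - mean2) / math.hypot(sd1, sd2))

# Equal means -> 0.5 (a coin flip); a one-SD mean difference -> about 0.76.
print(common_language_es(0.0, 0.0, 1.0, 1.0))            # 0.5
print(round(common_language_es(1.0, 0.0, 1.0, 1.0), 2))  # 0.76
```

    The appeal for practitioners is that "a 76% chance" communicates magnitude far more intuitively than a standardized regression coefficient.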

  4. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
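    The two solver families can be contrasted on a toy symmetric positive-definite system. This is a pure-Python sketch of the direct-versus-iterative distinction, not the Trilinos API:

```python
# Direct solve (explicit formula for a 2x2 system) versus an iterative
# conjugate-gradient solve, on a small symmetric positive-definite system.
def solve_direct_2x2(a, b):
    # Cramer's rule: exact answer in a fixed number of operations.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def solve_cg(a, b, iters=50):
    # Conjugate gradients: successively improves an initial guess.
    x = [0.0, 0.0]
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    for _ in range(iters):
        rr = r[0] * r[0] + r[1] * r[1]
        if rr < 1e-30:
            break                 # converged
        ap = [a[i][0] * p[0] + a[i][1] * p[1] for i in range(2)]
        alpha = rr / (p[0] * ap[0] + p[1] * ap[1])
        x = [x[0] + alpha * p[0], x[1] + alpha * p[1]]
        r = [r[0] - alpha * ap[0], r[1] - alpha * ap[1]]
        beta = (r[0] * r[0] + r[1] * r[1]) / rr
        p = [r[0] + beta * p[0], r[1] + beta * p[1]]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(solve_direct_2x2(A, b))   # direct answer, x = [1/11, 7/11]
print(solve_cg(A, b))           # iterative answer, converges to the same x
```

    For large sparse systems the trade-off reverses: factorizations (Amesos2's territory) cost memory and fill-in, while Krylov iterations (Belos's territory) need only matrix-vector products.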

  5. High-speed multiple sequence alignment on a reconfigurable platform.

    PubMed

    Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf

    2006-01-01

    Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.

  6. Multiple Equilibria and Endogenous Cycles in a Non-Linear Harrodian Growth Model

    NASA Astrophysics Data System (ADS)

    Commendatore, Pasquale; Michetti, Elisabetta; Pinto, Antonio

    The standard result of Harrod's growth model is that, because investors react more strongly than savers to a change in income, the long run equilibrium of the economy is unstable. We re-interpret the Harrodian instability puzzle as a local instability problem and integrate his model with a nonlinear investment function. Multiple equilibria and different types of complex behaviour emerge. Moreover, even in the presence of locally unstable equilibria, for a large set of initial conditions the time path of the economy is not diverging, providing a solution to the instability puzzle.

  7. Does transport time help explain the high trauma mortality rates in rural areas? New and traditional predictors assessed by new and traditional statistical methods

    PubMed Central

    Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas

    2015-01-01

    Background Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600

  8. Phytotoxicity and accumulation of chromium in carrot plants and the derivation of soil thresholds for Chinese soils.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang

    2014-10-01

    Soil environmental quality standards in respect of heavy metals for farmlands should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrot grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg(-1)) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH could be well fitted (R² = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil influenced carrot yield before it affected food quality. The major soil factors controlling Cr phytotoxicity were identified, and prediction models were developed, using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived based on a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses

    PubMed Central

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to be able to clearly identify the samples based on the difference in the quantitative content of all the chemical components. In addition, the fingerprint analysis was supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation detected immediately and the composition similarities have been calculated, LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425

  10. Multiple object tracking using the shortest path faster association algorithm.

    PubMed

    Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem of the flow network. Then we relax the integer programming to a standard linear programming problem. Therefore, the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
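    The shortest path faster algorithm (SPFA) at the heart of the association step is the queue-based variant of Bellman-Ford; a minimal sketch on a toy graph (not the paper's flow-network formulation):

```python
from collections import deque

# Sketch of the Shortest Path Faster Algorithm (SPFA): queue-based
# Bellman-Ford. Graph is an adjacency dict: node -> list of
# (neighbor, edge_cost); costs may be negative as long as the graph
# contains no negative cycle.
def spfa(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0.0
    queue = deque([source])
    in_queue = {v: False for v in graph}
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:       # relax edge u -> v
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return dist

g = {"s": [("a", 2.0), ("b", 4.0)],
     "a": [("b", -1.0), ("t", 5.0)],
     "b": [("t", 1.0)],
     "t": []}
print(spfa(g, "s")["t"])  # 2.0, via s -> a -> b -> t
```

    Tolerating negative edge costs is what makes this family of algorithms usable on tracking flow networks, where log-likelihood edge weights are routinely negative.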

  11. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    PubMed Central

    Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so that the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322

  12. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique which show the output formats and typical plots comparing computer results to each set of input data.
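    The two model classes can be sketched with a modern least-squares routine standing in for the original programs. Here "with linear cross products" is read as adding the pairwise products x_i·x_j to the pure-power columns, which is one plausible interpretation of the abstract; the design below is illustrative, not the report's actual formulation.

```python
import itertools
import numpy as np

def design_matrix(X, degree, cross_products=False):
    """Polynomial design matrix for N independent variables:
    an intercept, pure powers x_j**p up to the given degree, and
    (optionally) the pairwise linear cross products x_i * x_j."""
    X = np.asarray(X, float)
    cols = [np.ones(len(X))]
    for p in range(1, degree + 1):
        for j in range(X.shape[1]):
            cols.append(X[:, j] ** p)
    if cross_products:
        for i, j in itertools.combinations(range(X.shape[1]), 2):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1 + 2 * X[:, 0] - 3 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]
A = design_matrix(X, degree=2, cross_products=True)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # noise-free, so the fit is exact
```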

  13. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  14. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
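    The construction is easiest to see in ordinary linear regression, the case the paper generalizes: residualize both the response and the covariate of interest on the remaining covariates, and a through-origin fit of the two residual vectors recovers the full-model coefficient (the Frisch-Waugh-Lovell theorem). A minimal sketch:

```python
import numpy as np

def adjusted_variable(y, X, k):
    """Added-variable (adjusted variable) coordinates for column k:
    residuals of y and of X[:, k] after regression on the other
    columns. The through-origin slope equals the coefficient of
    column k in the full multiple regression."""
    X = np.asarray(X, float)
    others = np.delete(X, k, axis=1)
    coef = np.linalg.lstsq(others, np.column_stack([y, X[:, k]]), rcond=None)[0]
    hat = others @ coef
    ry = y - hat[:, 0]            # y adjusted for the other covariates
    rx = X[:, k] - hat[:, 1]      # x_k adjusted for the other covariates
    return rx, ry, rx @ ry / (rx @ rx)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=40)
rx, ry, slope = adjusted_variable(y, X, 2)
full = np.linalg.lstsq(X, y, rcond=None)[0]   # slope matches full[2]
```

    Plotting ry against rx gives the adjusted variable plot; outliers and influential points show up as departures from the through-origin line.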

  15. Interface Technology for Geometrically Nonlinear Analysis of Multiple Connected Subdomains

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    1997-01-01

    Interface technology for geometrically nonlinear analysis is presented and demonstrated. This technology is based on an interface element which makes use of a hybrid variational formulation to provide for compatibility between independently modeled connected subdomains. The interface element developed herein extends previous work to include geometric nonlinearity and to use standard linear and nonlinear solution procedures. Several benchmark nonlinear applications of the interface technology are presented and aspects of the implementation are discussed.

  16. Limits of linearity and detection for some drugs of abuse.

    PubMed

    Needleman, S B; Romberg, R W

    1990-01-01

    The limits of linearity (LOL) and detection (LOD) are important factors in establishing the reliability of an analytical procedure for accurately assaying drug concentrations in urine specimens. Multiple analyses of analyte over an extended range of concentrations provide a measure of the ability of the analytical procedure to correctly identify known quantities of drug in a biofluid matrix. Each of the seven drugs of abuse gives linear analytical responses from concentrations at or near their LOD to concentrations several-fold higher than those generally encountered in the drug screening laboratory. The upper LOL exceeds the Department of Navy (DON) cutoff values by factors of approximately 2 to 160. The LOD varies from 0.4 to 5.0% of the DON cutoff value for each drug. The limit of quantitation (LOQ) is calculated as the LOD + 7 SD. The range for LOL is greater for drugs analyzed with deuterated internal standards compared with those using conventional internal standards. For THC acid, cocaine, PCP, and morphine, LOLs are 8- to 160-fold greater than the defined cutoff concentrations. For the other drugs, the LOLs are only 2- to 4-fold greater than the defined cutoff concentrations.

  17. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation problems. Global convergence has been proved, and results on sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  18. Dimensional accuracy of pickup implant impression: an in vitro comparison of novel modular versus standard custom trays.

    PubMed

    Simeone, Piero; Valentini, Pier Paolo; Pizzoferrato, Roberto; Scudieri, Folco

    2011-01-01

    The purpose of this in vitro study was to compare the dimensional accuracy of the pickup impression technique using a modular individual tray (MIT) and using a standard individual tray (ST) for multiple internal-connection implants. The roles of both materials and geometric misfits were considered. First, because the MIT relies on the stiffness and elasticity of acrylic resin material, a preliminary investigation of the resin volume contraction during curing and polymerization was done. Then, two sets of specimens were tested to compare the accuracy of the MIT (test group) to that of the ST (control group). The linear and angular displacements of the transfer copings were measured and compared during three different stages of the impression procedure. Experimental measurements were performed with a computerized coordinate measuring machine. The curing dynamic of the acrylic resin was strongly dependent on the physical properties of the acrylic material and the powder/liquid ratio. Specifically, an increase in the powder/liquid ratio accelerated resin polymerization (curing time decreases by 70%) and reduced the final volume contraction by 45%. However, the total shrinkage never exceeded the elastic limits of the material; hence, it did not affect the coping's stability. In the test group, linear errors were reduced by 55% and angular errors were reduced by 65%. Linear and angular displacements of the transfer copings were significantly reduced with the MIT technique, which led to higher dimensional accuracy versus the ST group. The MIT approach, in combination with a thin and uniform amount of acrylic resin in the pickup impression technique, showed no significant permanent distortions in multiple misaligned internal-connection implants compared to the ST technique.

  19. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    NASA Astrophysics Data System (ADS)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array about its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
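    The reconstruction step is the standard filtered backprojection. The sketch below is a generic pure-NumPy implementation on synthetic data, not the authors' processing chain: each rotational position contributes one projection, and backprojecting the ramp-filtered projections recovers the image.

```python
import numpy as np

def fbp(sinogram, thetas):
    """Minimal filtered backprojection (inverse Radon transform).
    sinogram: (n_angles, n_detectors) parallel projections,
    thetas: projection angles in radians."""
    n_ang, n_det = sinogram.shape
    # Ram-Lak (ramp) filter, applied per projection in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Backproject onto an n_det x n_det grid centered on the rotation axis
    c = (n_det - 1) / 2.0
    xs = np.arange(n_det) - c
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th) + c   # detector bin hit by each pixel
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return recon * np.pi / n_ang

# A centered point target projects to a spike in the same detector bin
# at every angle; the reconstruction should peak at the grid center.
n_det, n_ang = 65, 90
thetas = np.linspace(0, np.pi, n_ang, endpoint=False)
sino = np.zeros((n_ang, n_det))
sino[:, n_det // 2] = 1.0
img = fbp(sino, thetas)
```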

  20. Geometry of the scalar sector

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2016-08-17

    The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has an SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.

  1. Linear Time Algorithms to Restrict Insider Access using Multi-Policy Access Control Systems

    PubMed Central

    Mell, Peter; Shook, James; Harang, Richard; Gavrila, Serban

    2017-01-01

    An important way to limit malicious insiders from distributing sensitive information is to limit their access to information as tightly as possible. This has always been the goal of access control mechanisms, but individual approaches have been shown to be inadequate. Ensemble approaches of multiple methods instantiated simultaneously have been shown to restrict access more tightly, but approaches to do so have had limited scalability (resulting in exponential calculations in some cases). In this work, we take the Next Generation Access Control (NGAC) approach standardized by the American National Standards Institute (ANSI) and demonstrate its scalability. The existing publicly available reference implementations all use cubic algorithms, and thus NGAC was widely viewed as not scalable. The primary NGAC reference implementation took, for example, several minutes simply to display the set of files accessible to a user on a moderately sized system. In our approach, we take these cubic algorithms and make them linear. We do this by reformulating the set-theoretic approach of the NGAC standard into a graph-theoretic approach and then applying standard graph algorithms. We can thus answer important access control decision questions (e.g., which files are available to a user and which users can access a file) using linear time graph algorithms. We also provide a default linear time mechanism to visualize and review user access rights for an ensemble of access control mechanisms. Our visualization appears to be a simple file directory hierarchy but in reality is an automatically generated structure abstracted from the underlying access control graph that works with any set of simultaneously instantiated access control policies. It also provides an implicit mechanism for symbolic linking that provides a powerful access capability. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. This may help transition from concept to reality the idea of using ensembles of simultaneously instantiated access control methodologies, thereby limiting insider threat. PMID:28758045
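    The graph-theoretic reformulation can be illustrated with a toy breadth-first search: once the policy is a directed graph from users through attributes to objects, "which files can this user read" becomes a single linear-time traversal. The policy encoding below is hypothetical and far simpler than NGAC's actual model (which distinguishes assignments from associations); it only shows why the graph view yields linear-time privilege review.

```python
from collections import deque

def accessible_objects(graph, user, right):
    """Toy privilege review: BFS from a user over edges labeled with
    the rights they grant; leaf nodes stand in for objects. Runs in
    time linear in the number of edges."""
    found, seen = set(), {user}
    q = deque([user])
    while q:
        node = q.popleft()
        for rights, nxt in graph.get(node, []):
            if right in rights and nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
                if not graph.get(nxt):   # no outgoing edges: treat as an object
                    found.add(nxt)
    return found

# Hypothetical policy: user -> attribute -> objects
policy = {
    "alice": [({"read", "write"}, "engineering")],
    "engineering": [({"read"}, "design.doc"), ({"read", "write"}, "specs.doc")],
}
files = accessible_objects(policy, "alice", "read")
```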

  2. Fast, Low-Power, Hysteretic Level-Detector Circuit

    NASA Technical Reports Server (NTRS)

    Arditti, Mordechai

    1993-01-01

    Circuit for detection of preset levels of voltage or current intended to replace standard fast voltage comparator. Hysteretic analog/digital level detector operates at unusually low power with little sacrifice of speed. Comprises low-power analog circuit and complementary metal oxide/semiconductor (CMOS) digital circuit connected in overall closed feedback loop to decrease rise and fall times, provide hysteresis, and control the trip level. Contains multiple subloops combining linear and digital feedback. Levels of sensed signals and the hysteresis level are easily adjusted by selection of components to suit the specific application.

  3. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation.

    PubMed

    Bolleman, Jerven T; Mungall, Christopher J; Strozzi, Francesco; Baran, Joachim; Dumontier, Michel; Bonnal, Raoul J P; Buels, Robert; Hoehndorf, Robert; Fujisawa, Takatomo; Katayama, Toshiaki; Cock, Peter J A

    2016-06-13

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. When Semantic Web technologies were used to query biological annotations, there was no standard that described this potentially complex location information as subject-predicate-object triples. We have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions independently of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe - and potentially merge - sequence annotations from multiple sources. Data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.

  4. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation

    DOE PAGES

    Bolleman, Jerven T.; Mungall, Christopher J.; Strozzi, Francesco; ...

    2016-06-13

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. When Semantic Web technologies were used to query biological annotations, there was no standard that described this potentially complex location information as subject-predicate-object triples. In this paper, we have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions independently of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe – and potentially merge – sequence annotations from multiple sources. Finally, data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.

  6. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, and those reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative, simple method for predicting tR accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated in two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, but more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories with lower cost of reference substances.
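    The two-point prediction step is a one-line linear map: fit retention time on the working column against standard retention time using the two reference substances, then apply that line to every other compound. A sketch with hypothetical retention times:

```python
def lctrs_predict(t_std, t_obs, t_std_targets):
    """Two-point LCTRS sketch: fit t_obs = a * t_std + b from two
    reference substances measured on the working column, then map the
    standard retention times of the remaining compounds onto it."""
    (s1, s2), (o1, o2) = t_std, t_obs
    a = (o2 - o1) / (s2 - s1)
    b = o1 - a * s1
    return [a * t + b for t in t_std_targets]

# References with standard tR of 5.0 and 20.0 min elute at 5.5 and
# 21.5 min on this column; predict compounds listed at 8.0 and 12.0 min.
pred = lctrs_predict((5.0, 20.0), (5.5, 21.5), [8.0, 12.0])
```

    The validation step of the method would then regress measured against predicted tR over multiple compounds to confirm the linear relationship holds on the column at hand.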

  7. A global lightning parameterization based on statistical relationships among environmental factors, aerosols, and convective clouds in the TRMM climatology

    NASA Astrophysics Data System (ADS)

    Stolz, Douglas C.; Rutledge, Steven A.; Pierce, Jeffrey R.; van den Heever, Susan C.

    2017-07-01

    The objective of this study is to determine the relative contributions of normalized convective available potential energy (NCAPE), cloud condensation nuclei (CCN) concentrations, warm cloud depth (WCD), vertical wind shear (SHEAR), and environmental relative humidity (RH) to the variability of lightning and radar reflectivity within convective features (CFs) observed by the Tropical Rainfall Measuring Mission (TRMM) satellite. Our approach incorporates multidimensional binned representations of observations of CFs and modeled thermodynamics, kinematics, and CCN as inputs to develop approximations for total lightning density (TLD) and the average height of 30 dBZ radar reflectivity (AVGHT30). The results suggest that TLD and AVGHT30 increase with increasing NCAPE, increasing CCN, decreasing WCD, increasing SHEAR, and decreasing RH. Multiple-linear approximations for lightning and radar quantities using the aforementioned predictors account for significant portions of the variance in the binned data set (R2 ≈ 0.69-0.81). The standardized weights attributed to CCN, NCAPE, and WCD are largest, the standardized weight of RH varies relative to other predictors, while the standardized weight for SHEAR is comparatively small. We investigate these statistical relationships for collections of CFs within various geographic areas and compare the aerosol (CCN) and thermodynamic (NCAPE and WCD) contributions to variations in the CF population in a partial sensitivity analysis based on multiple-linear regression approximations computed herein. A global lightning parameterization is developed; the average difference between predicted and observed TLD decreases from +21.6 to +11.6% when using a hybrid approach to combine separate approximations over continents and oceans, thus highlighting the need for regionally targeted investigations in the future.
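    The standardized weights compared above can be sketched by z-scoring the predictors and the response before an ordinary least-squares fit, which makes the coefficient magnitudes directly comparable across predictors with different units (synthetic data; the variable names only echo the abstract):

```python
import numpy as np

def standardized_weights(X, y):
    """Standardized multiple-linear-regression weights: z-score each
    predictor and the response, then fit by OLS. The intercept is zero
    after standardization, so only the slopes are returned."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(Xz)), Xz])
    beta, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return beta[1:]

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))              # stand-ins for e.g. NCAPE, CCN, WCD
y = 3 * X[:, 0] + 1 * X[:, 1] - 2 * X[:, 2]
w = standardized_weights(X, y)             # |w| ranks predictor importance
```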

  8. Executive functions and consumption of fruits/ vegetables and high saturated fat foods in young adults.

    PubMed

    Limbers, Christine A; Young, Danielle

    2015-05-01

    Executive functions play a critical role in regulating eating behaviors and have been shown to be associated with overeating which over time can result in overweight and obesity. There has been a paucity of research examining the associations among healthy dietary behaviors and executive functions utilizing behavioral rating scales of executive functioning. The objective of the present cross-sectional study was to evaluate the associations among fruit and vegetable consumption, intake of foods high in saturated fat, and executive functions using the Behavioral Rating Inventory of Executive Functioning-Adult Version. A total of 240 university students completed the Behavioral Rating Inventory of Executive Functioning-Adult Version, the 26-Item Eating Attitudes Test, and the Diet subscale of the Summary of Diabetes Self-Care Activities Questionnaire. Multiple linear regression analysis was conducted with two separate models in which fruit and vegetable consumption and saturated fat intake were the outcomes. Demographic variables, body mass index, and eating styles were controlled for in the analysis. Better initiation skills were associated with greater intake of fruits and vegetables in the last 7 days (standardized beta = -0.17; p < 0.05). Stronger inhibitory control was associated with less consumption of high fat foods in the last 7 days (standardized beta = 0.20; p < 0.05) in the multiple linear regression analysis. Executive functions that predict fruit and vegetable consumption are distinct from those that predict avoidance of foods high in saturated fat. Future research should investigate whether continued skill enhancement in initiation and inhibition following standard behavioral interventions improves long-term maintenance of weight loss. © The Author(s) 2015.

  9. Saturation current and collection efficiency for ionization chambers in pulsed beams.

    PubMed

    DeBlois, F; Zankowski, C; Podgorsak, E B

    2000-05-01

    Saturation currents and collection efficiencies in ionization chambers exposed to pulsed megavoltage photon and electron beams are determined assuming a linear relationship between 1/I and 1/V in the extreme near-saturation region, with I and V the chamber current and polarizing voltage, respectively. Careful measurements of chamber current against polarizing voltage in the extreme near-saturation region reveal a current rising faster than that predicted by the linear relationship. This excess current combined with the conventional "two-voltage" technique for determination of collection efficiency may result in an up to 0.7% overestimate of the saturation current for standard radiation field sizes of 10 × 10 cm². The measured excess current is attributed to charge multiplication in the chamber air volume and to radiation-induced conductivity in the stem of the chamber (stem effect). These effects may be accounted for by an exponential term used in conjunction with Boag's equation for collection efficiency in pulsed beams. The semiempirical model follows the experimental data well and accounts for both the charge recombination as well as for the charge multiplication effects and the chamber stem effect.
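    The assumed linearity between 1/I and 1/V makes the two-voltage estimate of the saturation current a two-point line fit extrapolated to 1/V → 0. A sketch on a synthetic chamber that obeys the linear relation exactly:

```python
def saturation_current(i1, v1, i2, v2):
    """Two-voltage estimate for pulsed beams, assuming the linear
    near-saturation relation 1/I = 1/I_sat + k/V: solve the two-point
    line for its intercept, which is 1/I_sat."""
    k = (1.0 / i1 - 1.0 / i2) / (1.0 / v1 - 1.0 / v2)   # slope in 1/V
    return 1.0 / (1.0 / i1 - k / v1)                    # intercept -> I_sat

# Chamber obeying 1/I = 1/100 + 2/V exactly (arbitrary units):
current = lambda v: 1.0 / (0.01 + 2.0 / v)
isat = saturation_current(current(300.0), 300.0, current(150.0), 150.0)
```

    The excess current discussed in the abstract would appear as measured currents rising above this line at high V, biasing the extrapolated intercept upward.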

  10. Impact of divorce on the quality of life in school-age children.

    PubMed

    Eymann, Alfredo; Busaniche, Julio; Llera, Julián; De Cunto, Carmen; Wahren, Carlos

    2009-01-01

    To assess psychosocial quality of life in school-age children of divorced parents. A cross-sectional survey was conducted at the pediatric outpatient clinic of a community hospital. Children 5 to 12 years old from married families and divorced families were included. Child quality of life was assessed through maternal reports using a Child Health Questionnaire-Parent Form 50. A multiple linear regression model was constructed including clinically relevant variables significant on univariate analysis (beta coefficient and 95%CI). Three hundred and thirty families were invited to participate and 313 completed the questionnaire. Univariate analysis showed that quality of life was significantly associated with parental separation, child sex, time spent with the father, standard of living, and maternal education. In a multiple linear regression model, quality of life scores decreased in boys -4.5 (-6.8 to -2.3) and increased for time spent with the father 0.09 (0.01 to 0.2). In divorced families, multiple linear regression showed that quality of life scores increased when parents had separated by mutual agreement 6.1 (2.7 to 9.4), when the mother had university level education 5.9 (1.7 to 10.1) and for each year elapsed since separation 0.6 (0.2 to 1.1), whereas scores decreased in boys -5.4 (-9.5 to -1.3) and for each one-year increment of maternal age -0.4 (-0.7 to -0.05). Children's psychosocial quality of life was affected by divorce. The Child Health Questionnaire can be useful to detect a decline in the psychosocial quality of life.

  11. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of one standard deviation, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis dataset. This data handling is regarded as an advance on the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations realizes approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for a mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two perturbation modes, where the number of RCM simulations for the future climate is five. On the other hand, local-scale rainfall needed four modes, where the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
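    The mode-extraction step can be sketched with NumPy's SVD: remove the ensemble mean from the stacked GCM increments and take the leading right singular vectors; the mean plus or minus one standard deviation along each mode gives the perturbed boundary-condition increments. Array shapes and names below are illustrative only:

```python
import numpy as np

# 8 hypothetical GCM increment fields (future minus present), each
# flattened to 500 grid points.
rng = np.random.default_rng(3)
increments = rng.normal(size=(8, 500))
mean = increments.mean(axis=0)
anom = increments - mean                      # deviations from the ensemble mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
n_members = len(increments)
modes = []
for k in range(2):                            # two leading modes
    sigma_k = s[k] / np.sqrt(n_members - 1)   # std dev carried by mode k
    modes.append((mean + sigma_k * Vt[k],     # positive perturbation
                  mean - sigma_k * Vt[k]))    # negative perturbation
```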

  12. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  13. Immobilized Metal Affinity Chromatography Coupled to Multiple Reaction Monitoring Enables Reproducible Quantification of Phospho-signaling*

    PubMed Central

    Kennedy, Jacob J.; Yan, Ping; Zhao, Lei; Ivey, Richard G.; Voytovich, Uliana J.; Moore, Heather D.; Lin, Chenwei; Pogosova-Agadjanyan, Era L.; Stirewalt, Derek L.; Reding, Kerryn W.; Whiteaker, Jeffrey R.; Paulovich, Amanda G.

    2016-01-01

    A major goal in cell signaling research is the quantification of phosphorylation pharmacodynamics following perturbations. Traditional methods of studying cellular phospho-signaling measure one analyte at a time with poor standardization, rendering them inadequate for interrogating network biology and contributing to the irreproducibility of preclinical research. In this study, we test the feasibility of circumventing these issues by coupling immobilized metal affinity chromatography (IMAC)-based enrichment of phosphopeptides with targeted, multiple reaction monitoring (MRM) mass spectrometry to achieve precise, specific, standardized, multiplex quantification of phospho-signaling responses. A multiplex IMAC-MRM assay targeting phospho-analytes responsive to DNA damage was configured, analytically characterized, and deployed to generate phospho-pharmacodynamic curves from primary and immortalized human cells experiencing genotoxic stress. The multiplexed assays demonstrated linear ranges of ≥3 orders of magnitude, a median lower limit of quantification of 0.64 fmol on column, median intra-assay variability of 9.3%, median inter-assay variability of 12.7%, and median total CV of 16.0%. The multiplex IMAC-MRM assay enabled robust quantification of 107 DNA damage-responsive phosphosites from human cells following DNA damage. The assays have been made publicly available as a resource to the community. The approach is generally applicable, enabling wide interrogation of signaling networks. PMID:26621847
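    The intra-assay, inter-assay, and total variability figures quoted above can be computed from replicate measurements roughly as follows. This is a generic sketch: combining the two components in quadrature is one common convention, and the paper's exact estimator may differ.

```python
import math
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation, in percent."""
    return 100.0 * stdev(values) / mean(values)

def assay_variability(runs):
    """runs: replicate measurements of one analyte, grouped by assay run.
    Intra-assay CV: mean of the per-run CVs.
    Inter-assay CV: CV of the run means.
    Total CV: the two components combined in quadrature (one convention).
    """
    intra = mean(cv_percent(r) for r in runs)
    inter = cv_percent([mean(r) for r in runs])
    total = math.sqrt(intra ** 2 + inter ** 2)
    return intra, inter, total
```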

  14. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    PubMed

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparison with plane-wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 carbon (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  15. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Papior, Nick R.; Calogero, Gaetano; Brandbyge, Mads

    2018-06-01

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparison with plane-wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 carbon (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  16. Nonlinear ionic transport through microstructured solid electrolytes: homogenization estimates

    NASA Astrophysics Data System (ADS)

    Curto Sillamoni, Ignacio J.; Idiart, Martín I.

    2016-10-01

    We consider the transport of multiple ionic species by diffusion and migration through microstructured solid electrolytes in the presence of strong electric fields. The assumed constitutive relations for the constituent phases follow from convex energy and dissipation potentials which guarantee thermodynamic consistency. The effective response is heuristically deduced from a multi-scale convergence analysis of the relevant field equations. The resulting homogenized response involves an effective dissipation potential per species. Each potential is mathematically akin to that of a standard nonlinear heterogeneous conductor. A ‘linear-comparison’ homogenization technique is then used to generate estimates for these nonlinear potentials in terms of available estimates for corresponding linear conductors. By way of example, use is made of the Maxwell-Garnett and effective-medium linear approximations to generate estimates for two-phase systems with power-law dissipation. Explicit formulas are given for some limiting cases. In the case of threshold-type behavior, the estimates exhibit non-analytical dilute limits and seem to be consistent with fields localized in low energy paths.
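    The "linear-comparison" step above builds on estimates for linear conductors such as the Maxwell-Garnett approximation, which has a closed form. Below is the classical Maxwell-Garnett formula for a two-phase linear conductor with spherical inclusions, shown as a minimal sketch of the linear ingredient only, not the paper's full nonlinear homogenization.

```python
def maxwell_garnett(sigma_m, sigma_i, c):
    """Effective conductivity of a matrix (conductivity sigma_m) containing
    a volume fraction c of spherical inclusions (conductivity sigma_i),
    from the classical Maxwell-Garnett relation
    (sigma_eff - sigma_m)/(sigma_eff + 2 sigma_m)
        = c (sigma_i - sigma_m)/(sigma_i + 2 sigma_m)."""
    num = sigma_i + 2.0 * sigma_m + 2.0 * c * (sigma_i - sigma_m)
    den = sigma_i + 2.0 * sigma_m - c * (sigma_i - sigma_m)
    return sigma_m * num / den
```

    The two limiting cases c = 0 (pure matrix) and c = 1 (pure inclusion) recover the constituent conductivities, a quick sanity check on the formula.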

  17. European Multicenter Study on Analytical Performance of DxN Veris System HCV Assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Gismondo, Maria Rita; Hofmann, Jörg; Izopet, Jacques; Kühn, Sebastian; Lombardi, Alessandra; Marcos, Maria Angeles; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W

    2017-04-01

    The analytical performance of the Veris HCV Assay for use on the new and fully automated Beckman Coulter DxN Veris Molecular Diagnostics System (DxN Veris System) was evaluated at 10 European virology laboratories. Precision, analytical sensitivity, specificity, performance with negative samples, linearity, and performance with hepatitis C virus (HCV) genotypes were evaluated. Precision across all sites showed a standard deviation (SD) of 0.22 log10 IU/ml or lower for each level tested. Analytical sensitivity determined by probit analysis was between 6.2 and 9.0 IU/ml. Specificity on 94 unique patient samples was 100%, and performance with 1,089 negative samples demonstrated 100% not-detected results. Linearity using patient samples was shown from 1.34 to 6.94 log10 IU/ml. The assay demonstrated linearity upon dilution with all HCV genotypes. The Veris HCV Assay demonstrated analytical performance comparable to that of currently marketed HCV assays when tested across multiple European sites. Copyright © 2017 American Society for Microbiology.
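    Probit-based analytical sensitivity, as used above, models the detection probability as a normal CDF of log concentration and reports the concentration detected with 95% probability. The sketch below fits the two probit parameters by a crude grid-search maximum likelihood; this is a generic illustration, not the vendors' or authors' analysis code, and all names are hypothetical.

```python
import math

def detect_prob(x, mu, sigma):
    """Probit model: P(detected) = Phi((x - mu) / sigma), x in log10 IU/ml."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_probit(data):
    """Crude grid-search maximum likelihood for (mu, sigma).
    data: list of (log10_conc, n_detected, n_total) tuples."""
    best, best_ll = None, -math.inf
    for mu in (i / 100.0 for i in range(0, 201)):       # mu in [0, 2]
        for sigma in (i / 100.0 for i in range(5, 101)):  # sigma in [0.05, 1]
            ll = 0.0
            for x, k, n in data:
                p = min(max(detect_prob(x, mu, sigma), 1e-12), 1.0 - 1e-12)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best

def sensitivity_95(mu, sigma):
    """Concentration (IU/ml) detected with 95% probability."""
    return 10.0 ** (mu + 1.6449 * sigma)
```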

  18. Bayesian Correction for Misclassification in Multilevel Count Data Models.

    PubMed

    Nelson, Tyler; Song, Joon Jin; Chin, Yoo-Mi; Stamey, James D

    2018-01-01

    Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.

  19. A Profilometry-Based Dentifrice Abrasion Method for V8 Brushing Machines Part III: Multi-Laboratory Validation Testing of RDA-PE.

    PubMed

    Schneiderman, Eva; Colón, Ellen L; White, Donald J; Schemehorn, Bruce; Ganovsky, Tara; Haider, Amir; Garcia-Godoy, Franklin; Morrow, Brian R; Srimaneepong, Viritpon; Chumprasert, Sujin

    2017-09-01

    We have previously reported on progress toward the refinement of profilometry-based abrasivity testing of dentifrices using a V8 brushing machine and tactile or optical measurement of dentin wear. The general application of this technique may be advanced by demonstration of successful inter-laboratory confirmation of the method. The objective of this study was to explore the capability of different laboratories in the assessment of dentifrice abrasivity using a profilometry-based evaluation technique developed in our Mason laboratories. In addition, we wanted to assess the interchangeability of human and bovine specimens. Participating laboratories were instructed in methods associated with Radioactive Dentin Abrasivity-Profilometry Equivalent (RDA-PE) evaluation, including site visits to discuss critical elements of specimen preparation, masking, profilometry scanning, and procedures. Laboratories were likewise instructed on the requirement for demonstration of proportional linearity as a key condition for validation of the technique. Laboratories were provided with four test dentifrices, blinded for testing, with a broad range of abrasivity. In each laboratory, a calibration curve was developed for varying V8 brushing strokes (0, 4,000, and 10,000 strokes) with the ISO abrasive standard. Proportional linearity was determined as the ratio of standard abrasion mean depths created with 4,000 and 10,000 strokes (a 2.5-fold difference). Criteria for successful calibration within the method (established in our Mason laboratory) were set at proportional linearity = 2.5 ± 0.3. RDA-PE was compared to Radiotracer RDA for the four test dentifrices, with the latter obtained by averages from three independent Radiotracer RDA sites. Individual laboratories and their results were compared by 1) proportional linearity and 2) acquired RDA-PE values for test pastes. Five sites participated in the study. One site did not pass proportional linearity objectives. 
Data for this site are not reported at the request of the researchers. Three of the remaining four sites reported herein tested human dentin and all three met proportional linearity objectives for human dentin. Three of four sites participated in testing bovine dentin and all three met the proportional linearity objectives for bovine dentin. RDA-PE values for test dentifrices were similar between sites. All four sites that met proportional linearity requirement successfully identified the dentifrice formulated above the industry standard 250 RDA (as RDA-PE). The profilometry method showed at least as good reproducibility and differentiation as Radiotracer assessments. It was demonstrated that human and bovine specimens could be used interchangeably. The standardized RDA-PE method was reproduced in multiple laboratories in this inter-laboratory study. Evidence supports that this method is a suitable technique for ISO method 11609 Annex B.
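    The proportional-linearity calibration criterion described above reduces to a simple ratio check, sketched here with hypothetical function names; the depths passed in would be the measured mean abrasion depths at 4,000 and 10,000 strokes with the ISO abrasive standard.

```python
def proportional_linearity(mean_depth_4000, mean_depth_10000,
                           target=2.5, tolerance=0.3):
    """Ratio of standard-abrasive mean scratch depths at 10,000 vs 4,000
    strokes; per the study's criterion, a site passes calibration when the
    ratio falls within 2.5 +/- 0.3.  Returns (ratio, passed)."""
    ratio = mean_depth_10000 / mean_depth_4000
    return ratio, abs(ratio - target) <= tolerance
```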

  20. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the prior parameter (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
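    The onset-detection problem can be illustrated with a single-changepoint scan: pick the split point that maximizes a two-segment Gaussian likelihood over mean and variance. Note this is a simplified frequentist stand-in for illustration, not the Bayesian changepoint method the study found best.

```python
import math
from statistics import pvariance

def single_changepoint(signal, min_seg=5):
    """Most likely single change in mean/variance, by scanning all split
    points and maximizing the two-segment Gaussian log-likelihood."""
    n = len(signal)
    best_k, best_ll = None, -math.inf
    for k in range(min_seg, n - min_seg):
        ll = 0.0
        for seg in (signal[:k], signal[k:]):
            v = max(pvariance(seg), 1e-12)  # guard against zero variance
            ll += -0.5 * len(seg) * (math.log(2.0 * math.pi * v) + 1.0)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```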

  1. Accurate prediction of cardiorespiratory fitness using cycle ergometry in minimally disabled persons with relapsing-remitting multiple sclerosis.

    PubMed

    Motl, Robert W; Fernhall, Bo

    2012-03-01

    To examine the accuracy of predicting peak oxygen consumption (VO2peak) primarily from peak work rate (WRpeak) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO2peak and WRpeak. There were strong, statistically significant associations between measured and predicted VO2peak in the overall sample (R2=.89, standard error of the estimate=127.4 mL/min) and subsamples with (R2=.89, standard error of the estimate=131.3 mL/min) and without (R2=.85, standard error of the estimate=126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO2peak within 10% of its true value in 95 of every 100 subjects with MS. VO2peak can be accurately predicted in persons with RRMS who have minimal disability, as it is in controls, by using established equations and WRpeak recorded from a maximal, incremental exercise test on a cycle ergometer. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
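    One widely cited "established equation" for predicting oxygen uptake from cycle-ergometer work rate is the ACSM leg-cycling equation, sketched below. It is shown for illustration only; the abstract does not state which equation the study used, so this may differ from the authors' choice.

```python
def predicted_vo2peak(peak_watts, body_mass_kg):
    """ACSM leg-cycle-ergometry prediction, in mL O2 per kg per min:
    VO2 = 10.8 * W / M + 7
    (7 = 3.5 resting + 3.5 unloaded-cycling components).  Illustrative
    only; not necessarily the equation used in the study above."""
    return 10.8 * peak_watts / body_mass_kg + 7.0
```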

  2. Comparative quantification of human intestinal bacteria based on cPCR and LDR/LCR

    PubMed Central

    Tang, Zhou-Rui; Li, Kai; Zhou, Yu-Xun; Xiao, Zhen-Xian; Xiao, Jun-Hua; Huang, Rui; Gu, Guo-Hao

    2012-01-01

    AIM: To establish a multiple detection method based on comparative polymerase chain reaction (cPCR) and ligase detection reaction (LDR)/ligase chain reaction (LCR) to quantify the intestinal bacterial components. METHODS: Comparative quantification of 16S rDNAs from different intestinal bacterial components was used to quantify multiple intestinal bacteria. The 16S rDNAs of different bacteria were amplified simultaneously by cPCR. LDR/LCR was then used to carry out genotyping and quantification. Two beneficial (Bifidobacterium, Lactobacillus) and three conditionally pathogenic bacteria (Enterococcus, Enterobacterium and Eubacterium) were used in this detection. With cloned standard bacterial 16S rDNAs, standard curves were prepared to validate the quantitative relations between the ratio of original concentrations of two templates and the ratio of the fluorescence signals of their final ligation products. The internal controls were added to monitor the whole detection flow. The quantity ratio between two bacteria was tested. RESULTS: cPCR and LDR revealed obvious linear correlations with standard DNAs, but cPCR and LCR did not. In the sample test, the distributions of the quantity ratio between each two bacterial species were obtained. There were significant differences among these distributions in the total samples. But these distributions of quantity ratio of each two bacteria remained stable among groups divided by age or sex. CONCLUSION: The detection method in this study can be used to conduct multiple intestinal bacteria genotyping and quantification, and to monitor the human intestinal health status as well. PMID:22294830
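    The standard-curve step above relates the ratio of template concentrations to the ratio of fluorescence signals. A generic calibration sketch on log-log axes is shown below; the power-law functional form and all names are assumptions for illustration, not the paper's fitted curve.

```python
import math

def fit_standard_curve(conc_ratios, signal_ratios):
    """Least-squares line through (log conc ratio, log signal ratio) pairs,
    mirroring a cloned-standard calibration step."""
    xs = [math.log(c) for c in conc_ratios]
    ys = [math.log(s) for s in signal_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

def quantify(signal_ratio, intercept, slope):
    """Invert the standard curve: estimate the template-concentration ratio
    that produced an observed fluorescence-signal ratio."""
    return math.exp((math.log(signal_ratio) - intercept) / slope)
```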

  3. Comparative quantification of human intestinal bacteria based on cPCR and LDR/LCR.

    PubMed

    Tang, Zhou-Rui; Li, Kai; Zhou, Yu-Xun; Xiao, Zhen-Xian; Xiao, Jun-Hua; Huang, Rui; Gu, Guo-Hao

    2012-01-21

    To establish a multiple detection method based on comparative polymerase chain reaction (cPCR) and ligase detection reaction (LDR)/ligase chain reaction (LCR) to quantify the intestinal bacterial components. Comparative quantification of 16S rDNAs from different intestinal bacterial components was used to quantify multiple intestinal bacteria. The 16S rDNAs of different bacteria were amplified simultaneously by cPCR. LDR/LCR was then used to carry out genotyping and quantification. Two beneficial (Bifidobacterium, Lactobacillus) and three conditionally pathogenic bacteria (Enterococcus, Enterobacterium and Eubacterium) were used in this detection. With cloned standard bacterial 16S rDNAs, standard curves were prepared to validate the quantitative relations between the ratio of original concentrations of two templates and the ratio of the fluorescence signals of their final ligation products. The internal controls were added to monitor the whole detection flow. The quantity ratio between two bacteria was tested. cPCR and LDR revealed obvious linear correlations with standard DNAs, but cPCR and LCR did not. In the sample test, the distributions of the quantity ratio between each two bacterial species were obtained. There were significant differences among these distributions in the total samples. But these distributions of quantity ratio of each two bacteria remained stable among groups divided by age or sex. The detection method in this study can be used to conduct multiple intestinal bacteria genotyping and quantification, and to monitor the human intestinal health status as well.

  4. Combinatorics of transformations from standard to non-standard bases in Brauer algebras

    NASA Astrophysics Data System (ADS)

    Chilla, Vincenzo

    2007-05-01

    Transformation coefficients between standard bases for irreducible representations of the Brauer centralizer algebra B_f(x) and split bases adapted to the B_{f1}(x) × B_{f2}(x) ⊂ B_f(x) subalgebra (f1 + f2 = f) are considered. After providing the suitable combinatorial background, based on the definition of the i-coupling relation on nodes of the subduction grid, we introduce a generalized version of the subduction graph which extends the one given in Chilla (2006 J. Phys. A: Math. Gen. 39 7657) for symmetric groups. Thus, we can describe the structure of the subduction system arising from the linear method and give an outline of the form of the solution space. An ordering relation on the grid is also given and then, as in the case of symmetric groups, the choices of the phases and of the free factors governing the multiplicity separations are discussed.

  5. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  6. Use of Mueller and non-Mueller matrices to describe polarization properties of telescope-based polarimeters

    NASA Astrophysics Data System (ADS)

    Seagraves, P. H.; Elmore, David F.

    1994-09-01

    Systems using optical elements such as linear polarizers, retarders, and mirrors can be represented by Mueller matrices. Some polarimeters include elements with time-varying polarization properties, multiple light beams, light detectors, and signal processing equipment. Standard Mueller matrix forms describing time-varying retarders, and beam splitters are presented, as well as non-Mueller matrices which describe detection and signal processing. These matrices provide a compact and intuitive mathematical description of polarimeter response which can aid in the refining of instrument designs.
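    The Mueller-matrix formalism described above is just 4x4 matrix algebra on Stokes vectors. The sketch below uses the textbook Mueller matrices of ideal horizontal and vertical linear polarizers (standard forms, not matrices from this paper) to show the familiar crossed-polarizer extinction.

```python
def apply_mueller(M, stokes):
    """Propagate a Stokes vector [I, Q, U, V] through a 4x4 Mueller matrix."""
    return [sum(M[i][j] * stokes[j] for j in range(4)) for i in range(4)]

# Textbook Mueller matrices for ideal linear polarizers
H_POLARIZER = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
V_POLARIZER = [
    [0.5, -0.5, 0.0, 0.0],
    [-0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
```

    Unpolarized light through a horizontal polarizer yields half the intensity, fully Q-polarized; a following vertical polarizer extinguishes it.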

  7. Analysis of in vitro fertilization data with multiple outcomes using discrete time-to-event analysis

    PubMed Central

    Maity, Arnab; Williams, Paige; Ryan, Louise; Missmer, Stacey; Coull, Brent; Hauser, Russ

    2014-01-01

    In vitro fertilization (IVF) is an increasingly common method of assisted reproductive technology. Because of the careful observation and followup required as part of the procedure, IVF studies provide an ideal opportunity to identify and assess clinical and demographic factors, along with environmental exposures, that may impact successful reproduction. A major challenge in analyzing data from IVF studies is handling the complexity and multiplicity of outcomes, which result both from multiple opportunities for pregnancy loss within a single IVF cycle and from multiple IVF cycles. To date, most evaluations of IVF studies do not make full use of the data because of its complex structure. In this paper, we develop statistical methodology for the analysis of IVF data with multiple cycles and possibly multiple failure types observed for each individual. We develop a general analysis framework based on a generalized linear modeling formulation that allows implementation of various types of models, including shared frailty models, failure-specific frailty models, and transitional models, using standard software. We apply our methodology to data from an IVF study conducted at the Brigham and Women’s Hospital, Massachusetts. We also summarize the performance of our proposed methods based on a simulation study. PMID:24317880
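    A discrete time-to-event analysis of the kind described above typically starts by expanding each woman's cycles into person-period records, one row per at-risk stage, which a standard GLM routine can then model. The data layout below (stage outcomes coded 0/1 per cycle) is hypothetical, for illustration only.

```python
def person_period(cycles):
    """Expand multi-cycle outcomes into discrete time-to-event records.

    cycles: per-woman list of cycles; each cycle is a list of stage
    outcomes, 0 = passed the stage, 1 = failure at that stage (no further
    stages observed in that cycle).
    Returns rows of (woman, cycle, stage, event) for GLM-style modeling.
    """
    rows = []
    for w, woman in enumerate(cycles):
        for c, cycle in enumerate(woman):
            for s, event in enumerate(cycle):
                rows.append((w, c, s, event))
                if event == 1:  # failure censors the rest of this cycle
                    break
    return rows
```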

  8. [Rapid screening and confirmation of 205 pesticide residues in rice by QuEChERS and liquid chromatography-mass spectrometry].

    PubMed

    Chen, Xi; Cheng, Lei; Qu, Shichao; Huang, Daliang; Liu, Jiacheng; Cui, Han; Jia, Yanbo; Ji, Mingshan

    2015-10-01

    A method for rapid screening and confirmation of 205 pesticide residues in rice was developed by combining QuEChERS and high performance liquid chromatography-triple quadrupole-linear ion trap mass spectrometry (LC-Q-TRAP/MS). The rice samples were extracted with acetonitrile, and then cleaned up with primary secondary amine (PSA), anhydrous magnesium sulfate (MgSO4) and C18 adsorbent. Finally, the samples were detected by LC-Q-TRAP/MS in multiple reaction monitoring with information-dependent acquisition of enhanced product ion (MRM-IDA-EPI) mode followed with database searching. A total of 205 pesticide residues were confirmed by retention times, ion pairs and database searching using the EPI library, and quantified by the external standard method. All 205 pesticides showed good linearity, with correlation coefficients above 0.995. The limits of quantification (LOQs) for the 205 pesticides were 0.5-10.0 μg/kg. The average recoveries of the 205 pesticides ranged from 62.4% to 127.1% with relative standard deviations (RSDs) of 1.0%-20.0% at spiked levels of 10 μg/kg and 50 μg/kg, and only 20 min were needed for the analysis of an actual rice sample. In brief, the method is fast, accurate and highly sensitive, and is suitable for the screening and confirmation of pesticide residues in rice.

  9. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Whereas the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
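    The ZIP mixture described above has a simple probability mass function: a point mass at zero with probability pi, otherwise a standard Poisson. A minimal sketch of that distribution (the mixture itself, not the paper's functional regression machinery):

```python
import math

def zip_pmf(k, pi, lam):
    """P(Y = k) under the zero-inflated Poisson: with probability pi the
    count is a structural zero; otherwise it is Poisson(lam).  Hence
    P(Y=0) = pi + (1-pi) e^{-lam}, and P(Y=k) = (1-pi) Poisson(k; lam)
    for k >= 1."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson
```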

  10. Near Infrared Spectrometry of Clinically Significant Fatty Acids Using Multicomponent Regression

    NASA Astrophysics Data System (ADS)

    Kalinin, A. V.; Krasheninnikov, V. N.; Sviridov, A. P.; Titov, V. N.

    2016-11-01

    We have developed methods for determining the content of clinically important fatty acids (FAs), primarily saturated palmitic acid, monounsaturated oleic acid, and the sum of polyenoic fatty acids (eicosapentaenoic + docosahexaenoic), in oily media (food products and supplements, fish oils) using different types of near infrared (NIR) spectrometers: Fourier-transform, linear photodiode array, and Raman. Based on a calibration method (regression) by means of projections to latent structures, using standard samples of oil and fat mixtures, we have confirmed the feasibility of reliable and selective quantitative analysis of the above-indicated fatty acids. As a result of comparing the calibration models for Fourier-transform spectrometers in different parts of the NIR range (based on different overtones and combinations of fatty acid absorption), we have provided a basis for selection of the spectral range for a portable linear InGaAs-photodiode array spectrometer. In testing the calibrations of a linear InGaAs-photodiode array spectrometer, a prototype for a portable instrument, we achieved multiple correlation coefficients of 0.89, 0.85, and 0.96 and standard errors of 0.53%, 1.43%, and 0.39% for palmitic acid, oleic acid, and the sum of the polyenoic fatty acids, respectively. We have confirmed the feasibility of using Raman spectra to determine the content of the above-indicated fatty acids in media where water is present.

  11. Generalized nonlinear Schrödinger equation and ultraslow optical solitons in a cold four-state atomic system.

    PubMed

    Hang, Chao; Huang, Guoxiang; Deng, L

    2006-03-01

    We investigate the influence of high-order dispersion and nonlinearity on the propagation of ultraslow optical solitons in a lifetime broadened four-state atomic system under a Raman excitation. Using the standard method of multiple scales we derive a generalized nonlinear Schrödinger equation and show that for realistic physical parameters and at a pulse duration of 10^-6 s, the effects of third-order linear dispersion, nonlinear dispersion, and delay in the nonlinear refractive index can be significant and may not be considered as perturbations. We provide exact soliton solutions for the generalized nonlinear Schrödinger equation and demonstrate that the optical solitons obtained may still have an ultraslow propagating velocity. Numerical simulations on the stability and interaction of these ultraslow optical solitons in the presence of linear and differential absorptions are also presented.
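    For orientation, a textbook generalized nonlinear Schrödinger equation including third-order dispersion (β₃) and a self-steepening/delayed-nonlinearity term reads (fiber-optics notation; the paper derives an analogous equation for the Raman-driven atomic medium, so the coefficients here are generic, not the paper's):

```latex
i\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\frac{\partial^2 A}{\partial T^2}
  - \frac{i\beta_3}{6}\frac{\partial^3 A}{\partial T^3}
  + \gamma |A|^2 A
  + \frac{i\gamma}{\omega_0}\frac{\partial}{\partial T}\!\left(|A|^2 A\right)
  = 0
```

    When the β₃ and self-steepening terms are comparable in size to the leading terms, as the abstract argues for microsecond pulses in this system, they cannot be treated perturbatively.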

  12. J3Gen: A PRNG for Low-Cost Passive RFID

    PubMed Central

    Melià-Seguí, Joan; Garcia-Alfaro, Joaquin; Herrera-Joancomartí, Jordi

    2013-01-01

    Pseudorandom number generation (PRNG) is the main security tool in low-cost passive radio-frequency identification (RFID) technologies, such as EPC Gen2. We present a lightweight PRNG design for low-cost passive RFID tags, named J3Gen. J3Gen is based on a linear feedback shift register (LFSR) configured with multiple feedback polynomials. The polynomials are alternated during the generation of sequences via a physical source of randomness. J3Gen successfully handles the inherent linearity of LFSR based PRNGs and satisfies the statistical requirements imposed by the EPC Gen2 standard. A hardware implementation of J3Gen is presented and evaluated with regard to different design parameters, defining the key-equivalence security and nonlinearity of the design. The results of a SPICE simulation confirm the power-consumption suitability of the proposal. PMID:23519344
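    The core idea above, an LFSR whose feedback polynomial is swapped during operation, can be sketched as follows. This toy is only "in the spirit of" J3Gen: the published design selects the next polynomial using a true-random physical source, whereas this sketch cycles polynomials deterministically, and the tap masks are arbitrary examples, not the EPC Gen2 polynomials.

```python
class MultiPolyLFSR:
    """16-bit Fibonacci LFSR that rotates among several feedback
    polynomials (tap masks).  Every mask includes bit 0, which guarantees
    the all-zero state is never entered from a nonzero seed."""

    def __init__(self, seed, tap_masks, rotate_every=8):
        assert seed & 0xFFFF != 0, "LFSR state must be nonzero"
        assert all(m & 1 for m in tap_masks), "tap masks must include bit 0"
        self.state = seed & 0xFFFF
        self.tap_masks = tap_masks
        self.rotate_every = rotate_every
        self.count = 0
        self.poly = 0

    def next_bit(self):
        taps = self.tap_masks[self.poly]
        feedback = bin(self.state & taps).count("1") & 1  # XOR of tapped bits
        out = self.state & 1
        self.state = (self.state >> 1) | (feedback << 15)
        self.count += 1
        if self.count % self.rotate_every == 0:
            # the real J3Gen picks the next polynomial with a physical
            # randomness source; here we simply cycle
            self.poly = (self.poly + 1) % len(self.tap_masks)
        return out
```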

  13. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
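    A per-pixel polynomial NUC of the kind benchmarked above amounts to fitting, for each pixel, a polynomial that maps the raw response at several calibration radiance levels onto the reference response, then applying that polynomial to live frames. The sketch below is a generic least-squares fit, not the authors' algorithm, and names are hypothetical.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations and Gaussian
    elimination.  Returns coefficients c for c[0] + c[1]*x + ... + c[deg]*x**deg."""
    n = deg + 1
    A = [[float(sum(x ** (i + j) for x in xs)) for j in range(n)] for i in range(n)]
    b = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]
    return c

def nuc_correct(raw, coeffs):
    """Apply a per-pixel polynomial correction to a raw detector reading."""
    return sum(c * raw ** i for i, c in enumerate(coeffs))
```

    A third-order correction, as favored in the study, simply uses deg=3; the trade-off discussed above is that each extra order adds one stored coefficient per pixel.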

  14. SU-F-T-547: Off-Isocenter Winston-Lutz Test for Stereotactic Radiosurgery/stereotactic Body Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, J; Liu, X

    2016-06-15

    Purpose: To perform a quantitative study to verify that the mechanical field center coincides with the radiation field center when both are off from the isocenter during the single-isocenter technique in linear accelerator-based SRS/SBRT procedures to treat multiple lesions. Methods: We developed an innovative method to measure this accuracy, called the off-isocenter Winston-Lutz test, and here we provide a practical clinical guideline to implement this technique. We used ImagePro V.6 to analyze images of a Winston-Lutz phantom obtained using a Varian 21EX linear accelerator with an electronic portal imaging device, set up as for single-isocenter SRS/SBRT for multiple lesions. We investigated asymmetry field centers that were 3 cm and 5 cm away from the isocenter, as well as performing the standard Winston-Lutz test. We used a special beam configuration to acquire images while avoiding collision, and we investigated both jaw and multileaf collimation. Results: For the jaw collimator setting, at 3 cm off-isocenter, the mechanical field deviated from the radiation field by about 2.5 mm; at 5 cm, the deviation was above 3 mm, up to 4.27 mm. For the multileaf collimator setting, at 3 cm off-isocenter, the deviation was below 1 mm; at 5 cm, the deviation was above 1 mm, up to 1.72 mm, which is 72% higher than the tolerance threshold. Conclusion: These results indicated that the further the asymmetry field center is from the machine isocenter, the larger the deviation of the mechanical field from the radiation field, and that the distance between the center of the asymmetry field and the isocenter should not exceed 3 cm in our clinic. We recommend that every clinic that uses linear accelerator, multileaf collimator-based SRS/SBRT perform the off-isocenter Winston-Lutz test in addition to the standard Winston-Lutz test and use its own deviation data to design the treatment plan.
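    The deviation check at the heart of the off-isocenter test reduces to a distance-versus-tolerance comparison, sketched below with the 1.72 mm multileaf-collimator figure from the abstract; the function name and the 1 mm tolerance value are illustrative assumptions.

```python
import math

# Illustrative sketch (values from the abstract, not a clinical tool):
# compare the deviation between the mechanical and radiation field centers
# against a tolerance, as in the off-isocenter Winston-Lutz test.
def field_center_deviation(mech_center_mm, rad_center_mm):
    """Euclidean distance between the two field centers, in mm."""
    dx = mech_center_mm[0] - rad_center_mm[0]
    dy = mech_center_mm[1] - rad_center_mm[1]
    return math.hypot(dx, dy)

TOLERANCE_MM = 1.0  # assumed MLC tolerance threshold implied by the abstract

# MLC field 5 cm off-isocenter: the abstract reports up to 1.72 mm deviation.
dev = field_center_deviation((50.0, 0.0), (50.0, 1.72))
exceeds_pct = (dev - TOLERANCE_MM) / TOLERANCE_MM * 100  # 72% over tolerance
```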

  15. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment

    PubMed Central

    2013-01-01

    Background Next Generation Sequencing techniques are producing enormous amounts of biological sequence data and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. Results In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Conclusion Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. 
Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA. PMID:24564200
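    The p-mer frequency approximation behind quasi-alignments can be sketched as follows; the function names and the Manhattan distance between count vectors are illustrative choices, not the authors' exact implementation.

```python
from collections import Counter

# Sketch of the p-mer frequency idea: segments are summarized by counts of
# length-p substrings, and a cheap distance between count vectors stands in
# for expensive edit distance when grouping segments into quasi-alignments.
def pmer_counts(segment, p=3):
    return Counter(segment[i:i + p] for i in range(len(segment) - p + 1))

def pmer_distance(a, b, p=3):
    ca, cb = pmer_counts(a, p), pmer_counts(b, p)
    keys = set(ca) | set(cb)
    return sum(abs(ca[k] - cb[k]) for k in keys)   # Manhattan distance

# Near-identical segments (one substitution) stay close; unrelated ones do not.
d_similar = pmer_distance("ACGTACGTAC", "ACGTACGTAT")    # -> 2
d_diverged = pmer_distance("ACGTACGTAC", "TTTTGGGGCC")   # -> 16
```

    Because counting p-mers is linear in segment length, the distance can be computed in a single streaming pass, which is what lets the clustering step avoid the quadratic cost of alignment.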

  16. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment.

    PubMed

    Nagar, Anurag; Hahsler, Michael

    2013-01-01

    Next Generation Sequencing techniques are producing enormous amounts of biological sequence data and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. 
Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA.

  17. A Nanocoaxial-Based Electrochemical Sensor for the Detection of Cholera Toxin

    NASA Astrophysics Data System (ADS)

    Archibald, Michelle M.; Rizal, Binod; Connolly, Timothy; Burns, Michael J.; Naughton, Michael J.; Chiles, Thomas C.

    2015-03-01

    Sensitive, real-time detection of biomarkers is of critical importance for rapid and accurate diagnosis of disease for point of care (POC) technologies. Current methods do not allow for POC applications due to several limitations, including sophisticated instrumentation, high reagent consumption, limited multiplexing capability, and cost. Here, we report a nanocoaxial-based electrochemical sensor for the detection of bacterial toxins using an electrochemical enzyme-linked immunosorbent assay (ELISA) and differential pulse voltammetry (DPV). Proof-of-concept was demonstrated for the detection of cholera toxin (CT). The linear dynamic range of detection was 10 ng/ml - 1 μg/ml, and the limit of detection (LOD) was found to be 2 ng/ml. This level of sensitivity is comparable to the standard optical ELISA used widely in clinical applications. In addition to matching the detection profile of the standard ELISA, the nanocoaxial array provides a simple electrochemical readout and a miniaturized platform with multiplexing capabilities for the simultaneous detection of multiple biomarkers, giving the nanocoax a desirable advantage over the standard method towards POC applications. This work was supported by the National Institutes of Health (National Cancer Institute award No. CA137681 and National Institute of Allergy and Infectious Diseases Award No. AI100216).

  18. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies.

    PubMed

    Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia

    2015-01-01

    Differential evolution (DE) is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially in DE/best/1/bin. In order to take advantage of the direction guidance information of the best individual in DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an initialization technique, opposition-based learning initialization, for improving the initial solution quality; a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/1/bin, for accelerating standard DE and preventing DE from clustering around the global best individual; and a perturbation scheme for further avoiding premature convergence. In addition, we also introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the phases of mutation and perturbation, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons, EDE is compared with five other state-of-the-art approaches, and the results show that EDE is superior to or at least equal to these methods on most of the benchmark functions.
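    The two ingredients named in the strategy notation, a DE/best/1 mutation and binomial crossover (the "/bin" part), can be sketched as below; the parameter values F and CR and the population setup are illustrative, and the full EDE strategy-switching logic is omitted.

```python
import numpy as np

# Minimal sketch of DE/best/1/bin building blocks (standard DE notation;
# not the EDE algorithm itself, whose strategy combination is more involved).
rng = np.random.default_rng(42)

def mutate_best_1(pop, best, i, F=0.5):
    """DE/best/1 mutant: best + F * (x_r1 - x_r2), with r1 != r2 != i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2 = rng.choice(idx, size=2, replace=False)
    return best + F * (pop[r1] - pop[r2])

def crossover_bin(target, mutant, CR=0.9):
    """Binomial crossover; at least one dimension always comes from the mutant."""
    d = len(target)
    mask = rng.random(d) < CR
    mask[rng.integers(d)] = True
    return np.where(mask, mutant, target)

pop = rng.standard_normal((10, 4))
best = pop[np.argmin(np.sum(pop**2, axis=1))]   # best individual on the sphere function
trial = crossover_bin(pop[0], mutate_best_1(pop, best, 0))
```

    In a full DE loop the trial vector replaces the target only if its objective value is no worse, which is the greedy selection step common to all the strategies above.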

  19. BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1994-01-01

    The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
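    The Level-1 operations the library provides map directly onto one-liners; the sketch below shows NumPy equivalents, with the conventional single-precision BLAS names in comments.

```python
import numpy as np

# The classic Level-1 BLAS operations listed in the abstract, expressed as
# NumPy equivalents (BLAS routine names in comments for reference).
x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 1.0, 1.0])
a = 2.0

dot   = x @ y                      # SDOT:   dot product
scal  = a * x                      # SSCAL:  scalar times vector
axpy  = a * x + y                  # SAXPY:  vector plus scalar times vector
nrm2  = np.sqrt(x @ x)             # SNRM2:  Euclidean norm
asum  = np.sum(np.abs(x))          # SASUM:  sum of magnitudes
iamax = int(np.argmax(np.abs(x)))  # ISAMAX: index of largest-magnitude element
```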

  20. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation and to demonstrate the variability between estimated stature and actual stature using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements, hand length, hand breadth, foot length and foot breadth, taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from the regression analysis method is less than that of the multiplication factor method, thus confirming that regression analysis is better than multiplication factor analysis in stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
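    The comparison between the two estimation methods can be sketched on synthetic data (not the study's North Indian sample; all coefficients and noise levels are invented):

```python
import numpy as np

# Illustrative comparison: a multiplication factor (mean stature/foot-length
# ratio) versus a fitted linear regression, scored by mean absolute error.
rng = np.random.default_rng(1)

foot_len = rng.uniform(22.0, 28.0, 100)                     # cm
stature = 6.2 * foot_len + 30.0 + rng.normal(0, 2.0, 100)   # cm, with noise

# Multiplication factor method: estimate = mean(stature/foot) * foot length.
mf = np.mean(stature / foot_len)
est_mf = mf * foot_len

# Regression method: least-squares slope and intercept.
slope, intercept = np.polyfit(foot_len, stature, 1)
est_reg = slope * foot_len + intercept

err_mf = np.mean(np.abs(est_mf - stature))    # mean absolute error, cm
err_reg = np.mean(np.abs(est_reg - stature))  # regression error is smaller
```

    The multiplication factor forces the fitted line through the origin, so whenever the true relationship has a non-zero intercept the regression model wins, which matches the study's conclusion.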

  1. Age estimation standards for a Western Australian population using the coronal pulp cavity index.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2013-09-10

    Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers and human trafficking, and to ascertain the age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired-sample t-tests to assess bilateral asymmetry, followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on a simple linear regression model was with the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably, and the most accurate model was with the bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
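    The tooth coronal index computation, and a simple linear age model built on it, can be sketched as follows; the regression coefficients are invented placeholders, not the paper's Western Australian standards.

```python
# Sketch of the tooth coronal index (TCI): coronal pulp chamber height as a
# percentage of crown height, which declines with age as secondary dentine
# deposition narrows the pulp chamber.
def tooth_coronal_index(pulp_height_mm, crown_height_mm):
    return 100.0 * pulp_height_mm / crown_height_mm

def estimate_age(tci, intercept=70.0, slope=-0.9):
    """Hypothetical simple linear model: age = intercept + slope * TCI.
    Coefficients are placeholders, not the study's fitted values."""
    return intercept + slope * tci

tci = tooth_coronal_index(3.5, 7.0)   # pulp and crown heights in mm
age = estimate_age(tci)               # with the placeholder coefficients
```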

  2. Inverse Association between Air Pressure and Rheumatoid Arthritis Synovitis

    PubMed Central

    Furu, Moritoshi; Nakabo, Shuichiro; Ohmura, Koichiro; Nakashima, Ran; Imura, Yoshitaka; Yukawa, Naoichiro; Yoshifuji, Hajime; Matsuda, Fumihiko; Ito, Hiromu; Fujii, Takao; Mimori, Tsuneyo

    2014-01-01

    Rheumatoid arthritis (RA) is a bone destructive autoimmune disease. Many patients with RA recognize fluctuations of their joint synovitis according to changes of air pressure, but the correlations between them have never been addressed in large-scale association studies. To address this point we collected large-scale assessments of RA activity in a Japanese population, and performed an association analysis. Here, a total of 23,064 assessments of RA activity from 2,131 patients were obtained from the KURAMA (Kyoto University Rheumatoid Arthritis Management Alliance) database. Detailed correlations between air pressure and joint swelling or tenderness were analyzed separately for each of the 326 patients with more than 20 assessments, to account for intra-patient correlations. Association studies were also performed for seven consecutive days to identify the strongest correlations. Standardized multiple linear regression analysis was performed to evaluate independent influences from other meteorological factors. As a result, components of composite measures for RA disease activity revealed suggestive negative associations with air pressure. The 326 patients displayed significant negative mean correlations between air pressure and swellings or the sum of swellings and tenderness (p = 0.00068 and 0.00011, respectively). Among the seven consecutive days, the most significant mean negative correlations were observed for air pressure three days before evaluations of RA synovitis (p = 1.7×10−7, 0.00027, and 8.3×10−8, for swellings, tenderness and the sum of them, respectively). Standardized multiple linear regression analysis revealed these associations were independent from humidity and temperature. Our findings suggest that air pressure is inversely associated with synovitis in patients with RA. PMID:24454853
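    The standardized multiple linear regression step can be sketched as below; the meteorological data are synthetic stand-ins, not the KURAMA assessments, and the variable names are illustrative.

```python
import numpy as np

# Sketch of standardized multiple linear regression: z-score every variable,
# then the OLS coefficients are standardized betas whose magnitudes are
# directly comparable across meteorological predictors.
rng = np.random.default_rng(7)
n = 500

pressure = rng.normal(1013, 8, n)       # hPa
humidity = rng.normal(60, 10, n)        # %
temperature = rng.normal(15, 7, n)      # deg C
# Synthetic synovitis score driven negatively by pressure, independent of
# humidity and temperature, mimicking the reported association.
synovitis = -0.4 * (pressure - 1013) / 8 + rng.normal(0, 1, n)

def zscore(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([zscore(pressure), zscore(humidity), zscore(temperature)])
y = zscore(synovitis)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
# beta[1] is the standardized coefficient for air pressure (negative here);
# beta[2] and beta[3] (humidity, temperature) stay near zero.
```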

  3. Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Buskirk, Caleb Griffith

    2017-06-14

    Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.

  4. Linear increases in carbon nanotube density through multiple transfer technique.

    PubMed

    Shulaker, Max M; Wei, Hai; Patil, Nishant; Provine, J; Chen, Hong-Yu; Wong, H-S P; Mitra, Subhasish

    2011-05-11

    We present a technique to increase carbon nanotube (CNT) density beyond the as-grown CNT density. We perform multiple transfers, whereby we transfer CNTs from several growth wafers onto the same target surface, thereby linearly increasing CNT density on the target substrate. This process, called transfer of nanotubes through multiple sacrificial layers, is highly scalable, and we demonstrate linear CNT density scaling up to 5 transfers. We also demonstrate that this linear CNT density increase results in an ideal linear increase in drain-source currents of carbon nanotube field effect transistors (CNFETs). Experimental results demonstrate that CNT density can be improved from 2 to 8 CNTs/μm, accompanied by an increase in drain-source CNFET current from 4.3 to 17.4 μA/μm.

  5. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
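    The simple-slopes technique for probing such an interaction can be sketched as follows; the coefficients are illustrative, not drawn from the article.

```python
# Simple slopes for an interaction in multiple linear regression: given
#   y = b0 + b1*x + b2*z + b3*x*z,
# the slope of y on x at a chosen moderator value z0 is b1 + b3*z0.
b0, b1, b2, b3 = 2.0, 0.5, 0.3, 0.8   # illustrative fitted coefficients

def simple_slope(z0):
    return b1 + b3 * z0

# Conventional probes at the moderator mean and +/- 1 SD (mean 0, SD 1 here).
slopes = {z0: simple_slope(z0) for z0 in (-1.0, 0.0, 1.0)}
# A region-of-significance analysis runs the same computation over a grid of
# z0 values and tests |simple slope| / SE(z0) against the critical t value.
```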

  6. Use of phenyl/tetrazolyl-functionalized magnetic microspheres and stable isotope labeled internal standards for significant reduction of matrix effect in determination of nine fluoroquinolones by liquid chromatography-quadrupole linear ion trap mass spectrometry.

    PubMed

    Xu, Fei; Liu, Feng; Wang, Chaozhan; Wei, Yinmao

    2018-02-01

    In this study, the strategy of a unique adsorbent combined with isotope-labeled internal standards was used to significantly reduce the matrix effect in the enrichment and analysis of nine fluoroquinolones in a complex sample by liquid chromatography coupled to quadrupole-linear ion trap mass spectrometry (LC-QqQLIT-MS/MS). The adsorbent was prepared conveniently by functionalizing Fe3O4@SiO2 microspheres with phenyl and tetrazolyl groups, which could adsorb fluoroquinolones selectively via hydrophobic, electrostatic, and π-π interactions. The established magnetic solid-phase extraction (MSPE) method, together with stable isotope-labeled internal standards in the subsequent MS/MS detection, was able to reduce the matrix effect significantly. In the LC-QqQLIT-MS/MS analysis, the precursor and product ions of the analytes were monitored quantitatively and qualitatively on a QTrap system equipped simultaneously with multiple reaction monitoring (MRM) and enhanced product ion (EPI) scans. The enrichment method combined with LC-QqQLIT-MS/MS demonstrated good analytical features in terms of linearity (7.5-100.0 ng mL−1, r > 0.9960), satisfactory recoveries (88.6%-118.3%) with RSDs < 12.0%, and LODs = 0.5 μg kg−1 and LOQs = 1.5 μg kg−1 for all tested analytes. Finally, the developed MSPE-LC-QqQLIT-MS/MS method was successfully applied to real pork samples for food-safety risk monitoring in Ningxia Province, China. Graphical abstract: Mechanism of reducing the matrix effect through the as-prepared adsorbent.

  7. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography

    PubMed Central

    Tweedell, Andrew J.; Haynes, Courtney A.

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
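    A standard threshold-style onset detector of the kind the statistical methods were compared against can be sketched as below; the simulated signal, smoothing window, and 4-SD criterion are illustrative assumptions, not the study's exact algorithms.

```python
import numpy as np

# Sketch of a linear-envelope threshold detector: rectify the EMG, smooth
# with a moving average (a crude linear envelope), then flag the first
# sample exceeding the baseline mean plus 4 SD.
rng = np.random.default_rng(3)
quiet = rng.normal(0, 0.05, 500)    # 0.5 s of baseline noise (assumed 1 kHz)
burst = rng.normal(0, 0.5, 500)     # simulated muscle activity from sample 500
emg = np.concatenate([quiet, burst])

rectified = np.abs(emg)
envelope = np.convolve(rectified, np.ones(25) / 25, mode="same")  # moving mean

baseline = envelope[:400]                        # known-quiet reference window
threshold = baseline.mean() + 4 * baseline.std()
onset_idx = int(np.argmax(envelope > threshold))  # first threshold crossing
```

    With a known true onset at sample 500, the detected index lands within the smoothing window of the truth, which is the kind of error the study quantifies with root mean square error against simulated data.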

  8. Investigating Integer Restrictions in Linear Programming

    ERIC Educational Resources Information Center

    Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.

    2015-01-01

    Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…

  9. Practical multipeptide synthesis: dedicated software for the definition of multiple, overlapping peptides covering polypeptide sequences.

    PubMed

    Heegaard, P M; Holm, A; Hagerup, M

    1993-01-01

    A personal computer program for the conversion of linear amino acid sequences to multiple, small, overlapping peptide sequences has been developed. Peptide lengths and "jumps" (the distance between two consecutive overlapping peptides) are defined by the user. To facilitate the use of the program for parallel solid-phase chemical peptide syntheses for the synchronous production of multiple peptides, amino acids at each acylation step are laid out by the program in a convenient standard multi-well setup. Also, the total number of equivalents, as well as the derived amount in milligrams (depending on user-defined equivalent weights and molar surplus), of each amino acid are given. The program facilitates the implementation of multipeptide synthesis, e.g., for the elucidation of polypeptide structure-function relationships, and greatly reduces the risk of introducing mistakes at the planning step. It is written in Pascal and runs on any DOS-based personal computer. No special graphic display is needed.
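    The core window/jump layout the program computes can be sketched as follows; the function name and toy sequence are illustrative, not taken from the original Pascal implementation.

```python
# Sketch of the length/"jump" logic: cover a sequence with overlapping
# peptides of user-defined length, where the jump is the offset between
# consecutive peptide start positions.
def overlapping_peptides(sequence, length, jump):
    peptides = []
    for start in range(0, len(sequence) - length + 1, jump):
        peptides.append(sequence[start:start + length])
    # Add a final window if needed so the C-terminus is always covered.
    if peptides and peptides[-1] != sequence[-length:]:
        peptides.append(sequence[-length:])
    return peptides

# 12-mers with a jump of 4 (8-residue overlap) over a 32-residue toy sequence.
peps = overlapping_peptides("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEV", 12, 4)
```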

  10. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
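    Fitting a multivariate model and screening it for multicollinearity, two of the concepts discussed above, can be sketched with variance inflation factors (VIF = 1/(1 − R²) from regressing each predictor on the others); the data are synthetic and the setup is illustrative.

```python
import numpy as np

# Multiple linear regression on synthetic data with one nearly collinear
# predictor pair, plus a variance-inflation-factor (VIF) check.
rng = np.random.default_rng(5)
n = 200

x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)     # nearly collinear with x1
x3 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x3 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2, x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def vif(predictors, j):
    """VIF of predictor column j: 1 / (1 - R^2) from regressing it on the rest."""
    others = np.delete(predictors, j, axis=1)
    A = np.column_stack([np.ones(len(predictors)), others])
    fitted = A @ np.linalg.lstsq(A, predictors[:, j], rcond=None)[0]
    resid = predictors[:, j] - fitted
    r2 = 1 - resid.var() / predictors[:, j].var()
    return 1 / (1 - r2)

P = np.column_stack([x1, x2, x3])
vif_x1, vif_x3 = vif(P, 0), vif(P, 2)   # x1 heavily inflated, x3 not
```

    A common rule of thumb flags VIF above 5 or 10 as problematic; here the collinear predictor far exceeds that while the independent one stays near 1.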

  11. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    NASA Technical Reports Server (NTRS)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.

  12. RRegrs: an R package for computer-aided model selection with multiple regression models.

    PubMed

    Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L

    2015-01-01

    Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models, thereby raising model reproducibility and comparison issues. Cheminformatics and bioinformatics use predictive modelling extensively and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tests several simple and complex regression models and validation schemes, produces unified reports, and offers the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear Regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending on the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. 
Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxide descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that, for all data sets, RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework for assisting the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
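
    As an illustration of the kind of workflow described above (RRegrs itself is an R package built on caret), the following Python sketch scores two regression methods under repeated 10-fold cross-validation and picks the one with the lower cross-validated error. The data, methods, and parameters are invented for the example.

    ```python
    import numpy as np

    # Synthetic data: 200 samples, 5 features, known linear signal plus noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.5, size=200)

    def fit_predict(Xtr, ytr, Xte, ridge=0.0):
        # Closed-form (regularized) least squares: beta = (X'X + rI)^-1 X'y.
        A = Xtr.T @ Xtr + ridge * np.eye(Xtr.shape[1])
        beta = np.linalg.solve(A, Xtr.T @ ytr)
        return Xte @ beta

    def repeated_cv_mse(ridge, n_splits=10, n_repeats=3):
        # Repeated k-fold: reshuffle, split into folds, average test MSE.
        errors = []
        for rep in range(n_repeats):
            idx = np.random.default_rng(rep).permutation(len(y))
            for fold in np.array_split(idx, n_splits):
                train = np.setdiff1d(idx, fold)
                pred = fit_predict(X[train], y[train], X[fold], ridge)
                errors.append(np.mean((pred - y[fold]) ** 2))
        return float(np.mean(errors))

    scores = {"OLS": repeated_cv_mse(0.0), "Ridge": repeated_cv_mse(1.0)}
    best = min(scores, key=scores.get)  # method with lowest cross-validated MSE
    ```

    The same loop generalizes to any number of candidate methods, which is essentially what a standardized framework automates and reports on.
    
    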

  13. Communication: modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley "bracelet" and "rod" test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, "Charge asymmetries in hydration of polar solutes," J. Phys. Chem. B 112, 2405-2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  14. Optical Modeling Activities for the James Webb Space Telescope (JWST) Project. II; Determining Image Motion and Wavefront Error Over an Extended Field of View with a Segmented Optical System

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Ha, Kong Q.

    2004-01-01

    This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.

  15. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical model of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum, and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
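
    A minimal sketch of the simple order-statistic combiners analyzed above: per-class classifier outputs are combined via the minimum, maximum, or median, and the class with the highest combined score is predicted. The numbers are invented; the point illustrated is the median's robustness to one badly miscalibrated classifier.

    ```python
    import numpy as np

    def os_combine(outputs, statistic="med"):
        """Combine classifier outputs of shape (n_classifiers, n_classes)
        with an order statistic applied per class."""
        ranked = np.sort(outputs, axis=0)
        if statistic == "min":
            return ranked[0]
        if statistic == "max":
            return ranked[-1]
        return np.median(outputs, axis=0)  # robust to a single outlier classifier

    # Three classifiers score two classes; the third is badly miscalibrated.
    outputs = np.array([[0.8, 0.2],
                        [0.7, 0.3],
                        [0.1, 0.9]])
    print(np.argmax(os_combine(outputs)))  # median combiner still picks class 0
    ```
    
    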

  16. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    PubMed Central

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry. PMID:25296776

  17. Hybrid finite element method for describing the electrical response of biological cells to applied fields.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2007-04-01

    A novel hybrid finite element method (FEM) for modeling the response of passive and active biological membranes to external stimuli is presented. The method is based on the differential equations that describe the conservation of electric flux and membrane currents. By introducing the electric flux through the cell membrane as an additional variable, the algorithm decouples the linear partial differential equation part from the nonlinear ordinary differential equation part that defines the membrane dynamics of interest. This conveniently results in two subproblems: a linear interface problem and a nonlinear initial value problem. The linear interface problem is solved with a hybrid FEM. The initial value problem is integrated by a standard ordinary differential equation solver such as the Euler and Runge-Kutta methods. During time integration, these two subproblems are solved alternately. The algorithm can be used to model the interaction of stimuli with multiple cells of almost arbitrary geometries and complex ion-channel gating at the plasma membrane. Numerical experiments are presented demonstrating the uses of the method for modeling field stimulation and action potential propagation.

  18. Using the Coefficient of Determination "R"[superscript 2] to Test the Significance of Multiple Linear Regression

    ERIC Educational Resources Information Center

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
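
    The relationship the article builds on can be sketched numerically: under the null hypothesis of no linear relationship, R² for a model with k predictors and n observations follows a Beta(k/2, (n−k−1)/2) distribution, which is equivalent to the usual overall F test. A hedged Python illustration (the numbers are arbitrary):

    ```python
    from scipy import stats

    def r2_significance(r2, n, k):
        """n observations, k predictors; returns (F, p from F, p from Beta)."""
        df1, df2 = k, n - k - 1
        # Overall F statistic expressed purely in terms of R^2.
        F = (r2 / df1) / ((1.0 - r2) / df2)
        p_f = stats.f.sf(F, df1, df2)
        # Equivalent tail probability from the Beta sampling distribution of R^2.
        p_beta = stats.beta.sf(r2, df1 / 2.0, df2 / 2.0)
        return F, p_f, p_beta

    F, p_f, p_beta = r2_significance(r2=0.45, n=30, k=3)
    # The two p-values agree, since the F and Beta forms are one-to-one.
    ```
    
    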

  19. The Ability of American Football Helmets to Manage Linear Acceleration With Repeated High-Energy Impacts.

    PubMed

    Cournoyer, Janie; Post, Andrew; Rousseau, Philippe; Hoshizaki, Blaine

    2016-03-01

    Football players can receive up to 1400 head impacts per season, averaging 6.3 impacts per practice and 14.3 impacts per game. A decrease in the capacity of a helmet to manage linear acceleration with multiple impacts could increase the risk of traumatic brain injury. To investigate the ability of football helmets to manage linear acceleration with multiple high-energy impacts. Descriptive laboratory study. Laboratory. We collected linear-acceleration data for 100 impacts at 6 locations on 4 helmets of different models currently used in football. Impacts 11 to 20 were compared with impacts 91 to 100 for each of the 6 locations. Linear acceleration was greater after multiple impacts (91-100) than after the first few impacts (11-20) for the front, front-boss, rear, and top locations. However, these differences are not clinically relevant as they do not affect the risk for head injury. American football helmet performance deteriorated with multiple impacts, but this is unlikely to be a factor in head-injury causation during a game or over a season.

  20. Linkage Determination of Linear Oligosaccharides by MSn (n > 2) Collision-Induced Dissociation of Z1 Ions in the Negative Ion Mode

    NASA Astrophysics Data System (ADS)

    Konda, Chiharu; Bendiak, Brad; Xia, Yu

    2014-02-01

    Obtaining unambiguous linkage information between sugars in oligosaccharides is an important step in their detailed structural analysis. An approach is described that provides greater confidence in linkage determination for linear oligosaccharides based on multiple-stage tandem mass spectrometry (MSn, n >2) and collision-induced dissociation (CID) of Z1 ions in the negative ion mode. Under low energy CID conditions, disaccharides 18O-labeled on the reducing carbonyl group gave rise to Z1 product ions (m/z 163) derived from the reducing sugar, which could be mass-discriminated from other possible structural isomers having m/z 161. MS3 CID of these m/z 163 ions showed distinct fragmentation fingerprints corresponding to the linkage types and largely unaffected by sugar unit identities or their anomeric configurations. This unique property allowed standard CID spectra of Z1 ions to be generated from a small set of disaccharide samples that were representative of many other possible isomeric structures. With the use of MSn CID (n = 3 - 5), model linear oligosaccharides were dissociated into overlapping disaccharide structures, which were subsequently fragmented to form their corresponding Z1 ions. CID data of these Z1 ions were collected and compared with the standard database of Z1 ion CID using spectra similarity scores for linkage determination. As the proof-of-principle tests demonstrated, we achieved correct determination of individual linkage types along with their locations within two trisaccharides and a pentasaccharide.

  1. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885

  2. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.

  3. Design, modeling and simulations of a Cabinet Safe System for a linear particle accelerator of intermediate-low energy by optimization of the beam optics

    NASA Astrophysics Data System (ADS)

    Maidana, Carlos Omar

    As part of an accelerator-based Cargo Inspection System, studies were made to develop a Cabinet Safe System by optimization of the beam optics of microwave linear accelerators of the IAC-Varian series working in the S-band and the standing-wave pi/2 mode. Measurements, modeling and simulations of the main subsystems were done, and a Multiple Solenoidal System was designed. This Cabinet Safe System based on a Multiple Solenoidal System minimizes the radiation field generated by the low efficiency of the microwave accelerators by optimizing the RF waveguide system and by trapping secondaries generated in the accelerator head. These secondaries arise mainly from instabilities in the exit window region and from particles backscattered from the target. The electron gun was also studied, and software for its proper mechanical design and optimization was developed as well. Besides the standard design method, an optimization of the injection process is accomplished by slightly modifying the gun configuration and by placing a solenoid at the waist position while avoiding threading the cathode with the generated magnetic flux. The Multiple Solenoidal System and the electron gun optimization are the backbone of a Cabinet Safe System that could be applied not only to the 25 MeV IAC-Varian microwave accelerators but, by extension, to machines of different manufacturers as well. Thus, they constitute the main topic of this dissertation.

  4. [Prediction model of health workforce and beds in county hospitals of Hunan by multiple linear regression].

    PubMed

    Ling, Ru; Liu, Jiawang

    2011-12-01

    To construct a prediction model for health workforce and hospital beds in county hospitals of Hunan by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling according to uniform questionnaires, and multiple linear regression analysis was done with 20 indicators selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10 000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction model shows good explanatory power and fit, and may be used for short- and mid-term forecasting.

  5. An Occupational Performance Test Validation Program for Fire Fighters at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Schonfeld, Brian R.; Doerr, Donald F.; Convertino, Victor A.

    1990-01-01

    We evaluated performance on a modified Combat Task Test (CTT) and on standard fitness tests in 20 male subjects to assess the prediction of occupational performance standards for Kennedy Space Center fire fighters. The CTT consisted of stair-climbing, a chopping simulation, and a victim rescue simulation. Average CTT performance time was 3.61 ± 0.25 min (SEM), and all CTT tasks required 93% to 97% of maximal heart rate. Using scores from the standard fitness tests, a multiple linear regression model was fitted to each parameter: the stair climb (r² = .905, P < .05), the chopping performance time (r² = .582, P < .05), the victim rescue time (r² = .218, P = not significant), and the total performance time (r² = .769, P < .05). Treadmill time was the predominant variable, being the major predictor in two of the four models. These results indicated that standardized fitness tests can predict performance on some CTT tasks and that test predictors were amenable to exercise training.

  6. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    PubMed

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
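
    A toy sketch of the endpoint idea behind interval linear programming: solving the ordinary LP at the optimistic and pessimistic endpoints of the interval coefficients brackets the achievable cost. The cost and demand intervals below are invented, and this two-solve scheme is only a simplified stand-in for the full method in the paper.

    ```python
    from scipy.optimize import linprog

    # Interval data: transport costs per unit and market demand given as ranges.
    c_lo, c_hi = [2.0, 3.0], [4.0, 5.0]     # interval cost coefficients
    demand_lo, demand_hi = 10.0, 12.0       # interval market demand

    def solve(c, demand):
        # Minimise c·x subject to x1 + x2 >= demand, x >= 0.
        # -x1 - x2 <= -demand encodes the supply constraint for linprog.
        res = linprog(c, A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                      bounds=[(0, None)] * 2)
        return res.fun

    best_cost = solve(c_lo, demand_lo)    # cheapest coefficients, lowest demand
    worst_cost = solve(c_hi, demand_hi)   # dearest coefficients, highest demand
    # [best_cost, worst_cost] brackets the life-cycle cost under the intervals.
    ```
    
    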

  7. Validity of an ankle joint motion and position sense measurement system and its application in healthy subjects and patients with ankle sprain.

    PubMed

    Lin, Chueh-Ho; Chiang, Shang-Lin; Lu, Liang-Hsuan; Wei, Shun-Hwa; Sung, Wen-Hsu

    2016-07-01

    Ankle motion and proprioception in multiple axis movements are crucial for daily activities. However, few studies have developed and used a multiple axis system for measuring ankle motion and proprioception. This study was designed to validate a novel ankle haptic interface system that measures the ankle range of motion (ROM) and joint position sense in multiple plane movements, investigating the proprioception deficits during joint position sense tasks for patients with ankle instability. Eleven healthy adults (mean ± standard deviation; age, 24.7 ± 1.9 years) and thirteen patients with ankle instability were recruited in this study. All subjects were asked to perform tests to evaluate the validity of the ankle ROM measurements and underwent tests for validating the joint position sense measurements conducted during multiple axis movements of the ankle joint. Pearson correlation was used for validating the angular position measurements obtained using the developed system; the independent t test was used to investigate the differences in joint position sense task performance for people with or without ankle instability. The ROM measurements of the device were linearly correlated with the criterion standards (r = 0.99). The ankle instability and healthy groups were significantly different in direction, absolute, and variable errors of plantar flexion, dorsiflexion, inversion, and eversion (p < 0.05). The results demonstrate that the novel ankle joint motion and position sense measurement system is valid and can be used for measuring the ankle ROM and joint position sense in multiple planes and indicate proprioception deficits for people with ankle instability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Diagnostic Accuracy of Full-Body Linear X-Ray Scanning in Multiple Trauma Patients in Comparison to Computed Tomography.

    PubMed

    Jöres, A P W; Heverhagen, J T; Bonél, H; Exadaktylos, A; Klink, T

    2016-02-01

    The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). 106 multiple trauma patients (female: 33; male: 73) were retrospectively included in this study. All patients underwent LS of the whole body, including extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2 %, the specificity was 93.3 %, the positive predictive value was 91 %, and the negative predictive value was 57.5 %. The overall sensitivity for vertebral fractures was 16.7 %, and the specificity was 100 %. The sensitivity was 48.7 % and the specificity 98.2 % for all other fractures. Pneumothoraces were detected in 12 patients by CT, but not by LS. 40 extremity fractures were detected by LS, of which 4 fractures were dislocated, and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging, and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT.  The overall sensitivity of LS for truncal skeleton injuries in multiple-trauma patients was < 50 %. The diagnostic reference standard MSCT is the preferred and reliable imaging modality. LS may be valuable for quick detection of extremity fractures. © Georg Thieme Verlag KG Stuttgart · New York.

  9. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  10. An ultrahigh-performance liquid chromatography method with electrospray ionization tandem mass spectrometry for simultaneous quantification of five phytohormones in medicinal plant Glycyrrhiza uralensis under abscisic acid stress.

    PubMed

    Xiang, Yu; Song, Xiaona; Qiao, Jing; Zang, Yimei; Li, Yanpeng; Liu, Yong; Liu, Chunsheng

    2015-07-01

    An efficient simplified method was developed to determine multiple classes of phytohormones simultaneously in the medicinal plant Glycyrrhiza uralensis. Ultrahigh-performance liquid chromatography electrospray ionization tandem mass spectrometry (UPLC/ESI-MS/MS) with multiple reaction monitoring (MRM) in negative mode was used for quantification. The five studied phytohormones are gibberellic acid (GA3), abscisic acid (ABA), jasmonic acid (JA), indole-3-acetic acid, and salicylic acid (SA). Only 100 mg of fresh leaves was needed, with one purification step based on C18 solid-phase extraction. Cinnamic acid was chosen as the internal standard instead of isotope-labeled internal standards. Under the optimized conditions, the five phytohormones with internal standard were separated within 4 min, with good linearities and high sensitivity. The validated method was applied to monitor the spatial and temporal changes of the five phytohormones in G. uralensis under ABA stress. The levels of GA3, ABA, JA, and SA in leaves of G. uralensis were increased at different times and with different tendencies in the reported stress mode. These changes in phytohormone levels are discussed in the context of a possible feedback regulation mechanism. Understanding this mechanism will provide a good chance of revealing the mutual interplay between different biosynthetic routes, which could further help elucidate the mechanisms of effective composition accumulation in medicinal plants.

  11. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
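
    The method of least squares mentioned above reduces, in the simple case, to two closed-form expressions: the slope is the sample covariance of x and y divided by the variance of x, and the intercept follows from the means. A minimal sketch:

    ```python
    def fit_line(xs, ys):
        """Least-squares fit of y = a + b*x for paired samples xs, ys."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        # Slope: covariance of x and y over variance of x.
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        # Intercept: the line passes through the point of means.
        a = my - b * mx
        return a, b

    a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lie exactly on y = 1 + 2x
    ```
    
    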

  12. Fast linear feature detection using multiple directional non-maximum suppression.

    PubMed

    Sun, C; Vallotton, P

    2009-05-01

    The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
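
    A hedged sketch of one ingredient of such an algorithm: non-maximum suppression along a single direction, where a pixel survives only if neither neighbour along that direction exceeds it. The full method applies this in multiple directions with symmetry checking and gap linking, which this toy omits.

    ```python
    import numpy as np

    def nms_1d(row):
        """Suppress values that are not local maxima along one direction."""
        # Pad with -inf so border pixels only compete with real neighbours.
        padded = np.pad(row.astype(float), 1, constant_values=-np.inf)
        keep = (padded[1:-1] >= padded[:-2]) & (padded[1:-1] >= padded[2:])
        return np.where(keep, row, 0)

    row = np.array([1, 3, 2, 5, 4])
    print(nms_1d(row))  # only the local maxima 3 and 5 survive
    ```
    
    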

  13. CUDAICA: GPU Optimization of Infomax-ICA EEG Analysis

    PubMed Central

    Raimondo, Federico; Kamienkowski, Juan E.; Sigman, Mariano; Fernandez Slezak, Diego

    2012-01-01

    In recent years, Independent Component Analysis (ICA) has become a standard method for identifying relevant dimensions of data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, which makes its use for online analysis, as in brain-computer interfaces, almost prohibitive. We show a roughly 25-fold speedup of ICA at almost no cost (a fast video card). EEG data, consisting of many repetitions of independent signals across multiple channels, are very well suited to processing on the vector processors included in graphical units. We profiled the implementation of this algorithm and identified two main types of operations responsible for the processing bottleneck, taking almost 80% of computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra functions with the standard CUBLAS routines provided by GPU manufacturers does not improve performance, owing to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, achieves a 25x performance increase for the ICA calculation. PMID:22811699

  14. Improved classification accuracy by feature extraction using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.

    2003-05-01

    A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence, and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature extractor derived features yielded lower error rates than using standard pulse sequences, and with features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error of pure tissues.

  15. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
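    The core trick, eliminating the linearly entering parameters analytically inside a Monte Carlo loop over the non-linear ones, can be sketched on a toy exponential-decay model (invented data; the paper's geodetic models and full Bayesian sampling are far richer):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)
a_true, b_true, tau_true = 2.0, 0.5, 3.0
y = a_true * np.exp(-t / tau_true) + b_true + 0.01 * rng.standard_normal(t.size)

def conditional_lsq(tau):
    """For a fixed non-linear parameter tau, the model is linear in
    (a, b), so those are solved analytically by least squares."""
    G = np.column_stack([np.exp(-t / tau), np.ones_like(t)])
    m, *_ = np.linalg.lstsq(G, y, rcond=None)
    r = y - G @ m
    return m, float(r @ r)

# Monte Carlo sampling over the non-linear parameter only
taus = rng.uniform(0.5, 10.0, 2000)
misfits = np.array([conditional_lsq(tau)[1] for tau in taus])
tau_hat = float(taus[np.argmin(misfits)])
(a_hat, b_hat), _ = conditional_lsq(tau_hat)
```

    Because the linear sub-problem is solved in closed form, the sampler only has to explore the (here one-dimensional) non-linear parameter space.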

  16. The Primordial Inflation Explorer (PIXIE): A Nulling Polarimeter for Cosmic Microwave Background Observations

    NASA Technical Reports Server (NTRS)

    Kogut, Alan J.; Fixsen, D. J.; Chuss, D. T.; Dotson, J.; Dwek, E.; Halpern, M.; Hinshaw, G. F.; Meyer, S. M.; Moseley, S. H.; Seiffert, M. D.

    2011-01-01

    The Primordial Inflation Explorer (PIXIE) is a concept for an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. The instrument consists of a polarizing Michelson interferometer configured as a nulling polarimeter to measure the difference spectrum between orthogonal linear polarizations from two co-aligned beams. Either input can view the sky or a temperature-controlled absolute reference blackbody calibrator. The proposed instrument can map the absolute intensity and linear polarization (Stokes I, Q, and U parameters) over the full sky in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded optics provide background-limited sensitivity using only 4 detectors, while the highly symmetric design and multiple signal modulations provide robust rejection of potential systematic errors. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. The rich PIXIE data set can also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy.

  17. Quantitative determination of galantamine in human plasma by sensitive liquid chromatography-tandem mass spectrometry using loratadine as an internal standard.

    PubMed

    Nirogi, Ramakrishna V S; Kandikere, Vishwottam N; Mudigonda, Koteshwara; Maurya, Santosh

    2007-02-01

    A simple, rapid, sensitive, and selective liquid chromatography-tandem mass spectrometry method is developed and validated for the quantitation of galantamine, an acetylcholinesterase inhibitor, in human plasma, using a commercially available compound, loratadine, as the internal standard. Following liquid-liquid extraction, the analytes are separated using an isocratic mobile phase on a reverse-phase C18 column and analyzed by mass spectrometry in the multiple reaction monitoring mode using the respective (M+H)+ precursor-to-product ion transitions, m/z 288 → 213 for galantamine and m/z 383 → 337 for the internal standard. The assay exhibits a linear dynamic range of 0.5-100 ng/mL for galantamine in human plasma. The lower limit of quantitation is 0.5 ng/mL, with a relative standard deviation of less than 8%. Acceptable precision and accuracy are obtained for concentrations over the standard curve range. A run time of 2.5 min for each sample makes it possible to analyze more than 400 human plasma samples per day. The validated method is successfully used to analyze human plasma samples for application in pharmacokinetic, bioavailability, or bioequivalence studies.
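    A calibration fit of the kind such an assay relies on can be sketched as follows. The concentrations and analyte/internal-standard area ratios are invented for illustration, and the 1/x weighting is a common convention for wide dynamic ranges rather than the paper's stated choice:

```python
import numpy as np

# Hypothetical calibration standards (ng/mL) and analyte/IS peak-area ratios
conc = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
ratio = np.array([0.011, 0.021, 0.103, 0.205, 0.510, 1.020, 2.050])

# 1/x-weighted linear fit (np.polyfit weights multiply the residuals,
# so w = 1/sqrt(x) gives 1/x weighting of the squared residuals)
slope, intercept = np.polyfit(conc, ratio, 1, w=1.0 / np.sqrt(conc))

def quantify(peak_ratio):
    """Back-calculate a plasma concentration from an observed area ratio."""
    return (peak_ratio - intercept) / slope
```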

  18. Ranking contributing areas of salt and selenium in the Lower Gunnison River Basin, Colorado, using multiple linear regression models

    USGS Publications Warehouse

    Linard, Joshua I.

    2013-01-01

    Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated that the models could be improved by including more detailed geospatial data and by using a more rigorous method for developing the models. After evaluating all possible combinations of geospatial variables, four multiple linear regression models resulted that could estimate irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted r-squared and the residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium.
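    The two statistics quoted for each model can be computed as follows (synthetic predictors stand in for the geospatial variables; this is a generic OLS sketch, not the USGS models):

```python
import numpy as np

def adjusted_r2_and_rse(X, y):
    """Fit a multiple linear regression (with intercept) and return the
    adjusted R-squared and residual standard error."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ b
    p = A.shape[1] - 1                        # predictors, excluding intercept
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    rse = float(np.sqrt(ss_res / (n - p - 1)))
    return adj, rse

rng = np.random.default_rng(3)
X = rng.standard_normal((175, 3))             # e.g. geospatial predictors
y = X @ [1.5, -0.8, 0.4] + 0.5 * rng.standard_normal(175)   # log-yield sketch
adj, rse = adjusted_r2_and_rse(X, y)
```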

  19. Increased statistical power with combined independent randomization tests used with multiple-baseline design.

    PubMed

    Tyrrell, Pascal N; Corey, Paul N; Feldman, Brian M; Silverman, Earl D

    2013-06-01

    Physicians often assess the effectiveness of treatments on a small number of patients. Multiple-baseline designs (MBDs), based on the Wampold-Worsham (WW) method of randomization and applied to four subjects, have relatively low power. Our objective was to propose another approach with greater power that does not suffer from the time requirements of the WW method applied to a greater number of subjects. The power of a design that involves the combination of two four-subject MBDs was estimated using computer simulation and compared with the four- and eight-subject designs. The effect of a delayed linear response to treatment on the power of the test was also investigated. Power was found to be adequate (>80%) for a standardized mean difference (SMD) greater than 0.8. The effect size associated with 80% power from combined tests was smaller than that of the single four-subject MBD (SMD=1.3) and comparable with the eight-subject MBD (SMD=0.6). A delayed linear response to the treatment resulted in important reductions in power (20-35%). By combining two four-subject MBD tests, an investigator can detect smaller effect sizes (SMD=0.8) and complete a comparatively timely and feasible study. Copyright © 2013 Elsevier Inc. All rights reserved.
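    The power gain from pooling two independent tests can be illustrated with Fisher's combination rule, used here as a stand-in for the paper's exact combination procedure:

```python
import math

def fisher_combined_p(p1, p2):
    """Combine two independent test p-values by Fisher's method.

    X = -2(ln p1 + ln p2) follows a chi-square distribution with 4
    degrees of freedom, whose survival function has the closed form
    exp(-x/2) * (1 + x/2).
    """
    x = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)
```

    Two marginally suggestive four-subject results (p = 0.05 each) combine to p ≈ 0.017, illustrating why two small MBDs together can detect effects neither detects alone.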

  20. Automated agar plate streaker: a linear plater on Society for Biomolecular Sciences standard plates.

    PubMed

    King, Gregory W; Kath, Gary S; Siciliano, Sal; Simpson, Neal; Masurekar, Prakash; Sigmund, Jan; Polishook, Jon; Skwish, Stephen; Bills, Gerald; Genilloud, Olga; Peláez, Fernando; Martín, Jesus; Dufresne, Claude

    2006-09-01

    Several protocols for bacterial isolation and techniques for aerobic plate counting rely on the use of a spiral plater to deposit concentration gradients of microbial suspensions onto a circular agar plate to isolate colony growth. The advantage of applying a gradient of concentrations across the agar surface is that the original microbiological sample can be applied at a single concentration rather than as multiple serial dilutions. The spiral plater gradually dilutes the sample across a compact area and therefore saves time preparing dilutions and multiple agar plates. Commercial spiral platers are not automated and require manual sample loading; the dispensed sample volumes and gradient rates are often very limited in range. Furthermore, the spiral sample application cannot be used with rectangular microplates. Another limitation of commercial spiral platers is that they are useful only for dilute, filtered suspensions and cannot plate suspensions of coarse organic particles, precluding the use of many kinds of microorganism-containing substrata. An automated agar plate spreader capable of processing 99 rectangular microplates in unattended mode is described. This novel instrument is capable of dispensing discrete volumes of sample in a linear pattern. It can be programmed to dispense a sample suspension at a uniform application rate or across a decreasing concentration gradient.

  1. Plasma amino acid profile associated with fatty liver disease and co-occurrence of metabolic risk factors.

    PubMed

    Yamakado, Minoru; Tanaka, Takayuki; Nagao, Kenji; Imaizumi, Akira; Komatsu, Michiharu; Daimon, Takashi; Miyano, Hiroshi; Tani, Mizuki; Toda, Akiko; Yamamoto, Hiroshi; Horimoto, Katsuhisa; Ishizaka, Yuko

    2017-11-03

    Fatty liver disease (FLD) increases the risk of diabetes, cardiovascular disease, and steatohepatitis, which leads to fibrosis, cirrhosis, and hepatocellular carcinoma. Thus, the early detection of FLD is necessary. We aimed to find a quantitative and feasible model for discriminating FLD, based on plasma free amino acid (PFAA) profiles. We constructed models of the relationship between PFAA levels in 2,000 generally healthy Japanese subjects and the diagnosis of FLD by abdominal ultrasound scan, using multiple logistic regression analysis with variable selection. The performance of these models for FLD discrimination was validated using an independent data set of 2,160 subjects. The generated PFAA-based model was able to identify FLD patients. The area under the receiver operating characteristic curve for the model was 0.83, which was higher than those of other existing liver function-associated markers, which ranged from 0.53 to 0.80. The value of the linear discriminant in the model yielded an adjusted odds ratio (with 95% confidence intervals) for a 1 standard deviation increase of 2.63 (2.14-3.25) in the multiple logistic regression analysis with known liver function-associated covariates. Interestingly, the linear discriminant values were significantly associated with the progression of FLD, and patients with nonalcoholic steatohepatitis also exhibited higher values.

  2. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be incorporated in the model, concluding that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials; notably, double-point calibration gave results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
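    The double-point scheme (baseline plus one standard) amounts to a two-point line. A sketch with invented peak areas:

```python
# Double-point calibration: a blank baseline plus a single standard,
# the shortened scheme found as accurate as a full calibration curve.
# All numbers are purely illustrative.
baseline_counts = 120.0                  # baseline peak area
std_counts, std_conc = 5120.0, 10.0      # one standard at 10 ug/L

def double_point(sample_counts):
    """Convert a sample peak area to concentration using two points."""
    slope = (std_counts - baseline_counts) / std_conc
    return (sample_counts - baseline_counts) / slope
```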

  3. The relationship between quality of work life and turnover intention of primary health care nurses in Saudi Arabia.

    PubMed

    Almalki, Mohammed J; FitzGerald, Gerry; Clark, Michele

    2012-09-12

    Quality of work life (QWL) has been found to influence the commitment of health professionals, including nurses. However, reliable information on QWL and turnover intention of primary health care (PHC) nurses is limited. The aim of this study was to examine the relationship between QWL and turnover intention of PHC nurses in Saudi Arabia. A cross-sectional survey was used in this study. Data were collected using Brooks' survey of Quality of Nursing Work Life, the Anticipated Turnover Scale and demographic data questions. A total of 508 PHC nurses in the Jazan Region, Saudi Arabia, completed the questionnaire (RR = 87%). Descriptive statistics, t-test, ANOVA, General Linear Model (GLM) univariate analysis, standard multiple regression, and hierarchical multiple regression were applied for analysis using SPSS v17 for Windows. Findings suggested that the respondents were dissatisfied with their work life, with almost 40% indicating a turnover intention from their current PHC centres. Turnover intention was significantly related to QWL. Using standard multiple regression, 26% of the variance in turnover intention was explained by QWL, p < 0.001, with R2 = .263. Further analysis using hierarchical multiple regression found that the total variance explained by the model as a whole (demographics and QWL) was 32.1%, p < 0.001. QWL explained an additional 19% of the variance in turnover intention, after controlling for demographic variables. Creating and maintaining a healthy work life for PHC nurses is very important to improve their work satisfaction, reduce turnover, enhance productivity and improve nursing care outcomes.
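    The hierarchical-regression arithmetic (the change in R² after adding the QWL block to the demographic block) can be sketched on synthetic data; the variable names and effect sizes below are invented:

```python
import numpy as np

def r2(X, y):
    """Plain R-squared of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ b
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

rng = np.random.default_rng(4)
n = 500
demog = rng.standard_normal((n, 3))       # step 1: demographic block
qwl = rng.standard_normal(n)              # step 2: QWL score
y = demog @ [0.3, -0.2, 0.1] + 0.8 * qwl + rng.standard_normal(n)

r2_step1 = r2(demog, y)                               # demographics only
r2_step2 = r2(np.column_stack([demog, qwl]), y)       # demographics + QWL
delta_r2 = r2_step2 - r2_step1     # variance uniquely explained by QWL
```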

  4. The relationship between quality of work life and turnover intention of primary health care nurses in Saudi Arabia

    PubMed Central

    2012-01-01

    Background Quality of work life (QWL) has been found to influence the commitment of health professionals, including nurses. However, reliable information on QWL and turnover intention of primary health care (PHC) nurses is limited. The aim of this study was to examine the relationship between QWL and turnover intention of PHC nurses in Saudi Arabia. Methods A cross-sectional survey was used in this study. Data were collected using Brooks’ survey of Quality of Nursing Work Life, the Anticipated Turnover Scale and demographic data questions. A total of 508 PHC nurses in the Jazan Region, Saudi Arabia, completed the questionnaire (RR = 87%). Descriptive statistics, t-test, ANOVA, General Linear Model (GLM) univariate analysis, standard multiple regression, and hierarchical multiple regression were applied for analysis using SPSS v17 for Windows. Results Findings suggested that the respondents were dissatisfied with their work life, with almost 40% indicating a turnover intention from their current PHC centres. Turnover intention was significantly related to QWL. Using standard multiple regression, 26% of the variance in turnover intention was explained by QWL, p < 0.001, with R2 = .263. Further analysis using hierarchical multiple regression found that the total variance explained by the model as a whole (demographics and QWL) was 32.1%, p < 0.001. QWL explained an additional 19% of the variance in turnover intention, after controlling for demographic variables. Conclusions Creating and maintaining a healthy work life for PHC nurses is very important to improve their work satisfaction, reduce turnover, enhance productivity and improve nursing care outcomes. PMID:22970764

  5. SOME STATISTICAL ISSUES RELATED TO MULTIPLE LINEAR REGRESSION MODELING OF BEACH BACTERIA CONCENTRATIONS

    EPA Science Inventory

    As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...

  6. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  7. A Practical, Hardware Friendly MMSE Detector for MIMO-OFDM-Based Systems

    NASA Astrophysics Data System (ADS)

    Kim, Hun Seok; Zhu, Weijun; Bhatia, Jatin; Mohammed, Karim; Shah, Anish; Daneshrad, Babak

    2008-12-01

    Design and implementation of a highly optimized MIMO (multiple-input multiple-output) detector requires co-optimization of the algorithm with the underlying hardware architecture. Special attention must be paid to application requirements such as throughput, latency, and resource constraints. In this work, we focus on a highly optimized, matrix-inversion-free MMSE (minimum mean square error) MIMO detector implementation. The work has resulted in a real-time field-programmable gate array (FPGA) implementation on a Xilinx Virtex-2 6000 using only 9003 logic slices, 66 multipliers, and 24 Block RAMs (less than 33% of the overall resources of this part). The design delivers over 420 Mbps sustained throughput with a small 2.77-microsecond latency. The designed linear MMSE MIMO detector is capable of complying with the proposed IEEE 802.11n standard.
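    The detector itself is the standard closed form below; the cited design is precisely about avoiding the explicit inversion in hardware, so this NumPy sketch only defines what is computed, on an illustrative 4x4 channel:

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """Linear MMSE detector: x_hat = (H^H H + sigma^2 I)^-1 H^H y."""
    nt = H.shape[1]
    G = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T)
    return G @ y

rng = np.random.default_rng(5)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = np.array([1, -1, 1, 1], dtype=complex)        # BPSK symbol vector
noise = 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
x_hat = mmse_detect(H, H @ x + noise, 1e-4)
```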

  8. S-Boxes Based on Affine Mapping and Orbit of Power Function

    NASA Astrophysics Data System (ADS)

    Khan, Mubashar; Azam, Naveed Ahmed

    2015-06-01

    The demand for data security against computational attacks, such as algebraic, differential, linear and interpolation attacks, has increased as a result of rapid advancement in the field of computation. It is therefore necessary to develop cryptosystems which can resist current cryptanalysis and further computational attacks in the future. In this paper, we present a multiple S-boxes scheme based on affine mapping and the orbit of the power function used in the Advanced Encryption Standard (AES). The proposed technique results in 256 different S-boxes, named orbital S-boxes. Rigorous tests and comparisons are performed to analyse the cryptographic strength of each of the orbital S-boxes. Furthermore, grayscale images are encrypted using multiple orbital S-boxes. Results and simulations show that the encryption strength of the orbital S-boxes against computational attacks is better than that of existing S-boxes.
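    The AES construction the scheme starts from, inversion in GF(2^8) followed by the fixed affine map with constant 0x63, can be sketched directly (the paper's 256 orbital variants are not reproduced here):

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse (0 maps to 0); brute force is fine for 256 elements."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

def aes_sbox(x):
    """Standard AES S-box: GF(2^8) inversion then the affine map with 0x63."""
    inv = gf_inv(x)
    return inv ^ rotl8(inv, 1) ^ rotl8(inv, 2) ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63

sbox = [aes_sbox(x) for x in range(256)]
```

    Varying the affine constant or the element of the power-function orbit in this construction is the kind of modification that yields a family of distinct bijective S-boxes.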

  9. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
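    On-grid recovery of a few scatterers can be sketched with Orthogonal Matching Pursuit, a greedy stand-in for the ℓ1-based solvers the paper analyzes (a real-valued toy dictionary is used here; actual radar dictionaries are complex-valued and structured by the azimuth-delay-Doppler grid):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the best-matching
    dictionary atom, then re-fit the coefficients on the support."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((60, 80)) / np.sqrt(60)    # random sensing matrix
x_true = np.zeros(80)
x_true[[7, 33, 70]] = [1.5, -2.0, 1.0]             # 3 occupied grid cells
x_hat = omp(A, A @ x_true, 3)
```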

  10. Estimation of sex and stature using anthropometry of the upper extremity in an Australian population.

    PubMed

    Howley, Donna; Howley, Peter; Oxenham, Marc F

    2018-06-01

    Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to similar studies. The stature estimation models achieved standard errors of estimates (SEE) which were comparable to and in many cases lower than those achieved in similar research. Generic, non sex-specific models achieved similar SEEs and R 2 values to the sex-specific models indicating stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Integration of a Decentralized Linear-Quadratic-Gaussian Control into GSFC's Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Carpenter, J. Russell

    1999-01-01

    A decentralized control is investigated for applicability to the autonomous formation flying control algorithm developed by GSFC for the New Millenium Program Earth Observer-1 (EO-1) mission. This decentralized framework has the following characteristics: The approach is non-hierarchical, and coordination by a central supervisor is not required; Detected failures degrade the system performance gracefully; Each node in the decentralized network processes only its own measurement data, in parallel with the other nodes; Although the total computational burden over the entire network is greater than it would be for a single, centralized controller, fewer computations are required locally at each node; Requirements for data transmission between nodes are limited to only the dimension of the control vector, at the cost of maintaining a local additional data vector. The data vector compresses all past measurement history from all the nodes into a single vector of the dimension of the state; and The approach is optimal with respect to standard cost functions. The current approach is valid for linear time-invariant systems only. Similar to the GSFC formation flying algorithm, the extension to linear LQG time-varying systems requires that each node propagate its filter covariance forward (navigation) and controller Riccati matrix backward (guidance) at each time step. Extension of the GSFC algorithm to non-linear systems can also be accomplished via linearization about a reference trajectory in the standard fashion, or linearization about the current state estimate as with the extended Kalman filter. To investigate the feasibility of the decentralized integration with the GSFC algorithm, an existing centralized LQG design for a single spacecraft orbit control problem is adapted to the decentralized framework while using the GSFC algorithm's state transition matrices and framework. 
The existing GSFC design uses reference trajectories for each spacecraft in the formation and, by an appropriate choice of coordinates and simplified measurement modeling, is formulated as a linear time-invariant system. Results for improvements to the GSFC algorithm and a multiple-satellite formation will be addressed. The goal of this investigation is to progressively relax the assumptions that result in linear time-invariance, ultimately to the point of linearizing the non-linear dynamics about the current state estimate, as in the extended Kalman filter. An assessment will then be made of the feasibility of the decentralized approach to the realistic formation flying application of the EO-1/Landsat 7 formation flying experiment.
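    The backward Riccati propagation mentioned for guidance is the core of finite-horizon LQR control; a sketch on a double-integrator toy model (illustrative only, not the EO-1 dynamics or the decentralized filter):

```python
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Finite-horizon discrete LQR: propagate the Riccati matrix
    backward in time, returning the time-varying feedback gains."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]            # gains ordered forward in time

# double-integrator sketch of one axis of relative motion
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[dt ** 2 / 2.0], [dt]])
Q, R = np.eye(2), np.array([[1.0]])
Ks = lqr_gains(A, B, Q, R, 50)

x = np.array([1.0, 0.0])          # initial 1-unit position offset
for K in Ks:
    x = (A - B @ K) @ x           # closed-loop step with u = -K x
```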

  12. Utilizing the Zero-One Linear Programming Constraints to Draw Multiple Sets of Matched Samples from a Non-Treatment Population as Control Groups for the Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
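    For a pool small enough to enumerate, the underlying 0-1 matching problem can be solved exhaustively; zero-one linear programming scales the same objective to realistic pool sizes. The scores below are invented:

```python
from itertools import permutations

def matched_controls(treated, pool):
    """Pick one distinct control per treated unit minimizing total
    score distance -- a brute-force solution of the tiny assignment
    problem that 0-1 linear programming solves at scale."""
    best, best_cost = None, float("inf")
    for combo in permutations(range(len(pool)), len(treated)):
        cost = sum(abs(t - pool[j]) for t, j in zip(treated, combo))
        if cost < best_cost:
            best, best_cost = list(combo), cost
    return best, best_cost

treated = [0.20, 0.50, 0.90]                  # e.g. propensity scores
pool = [0.10, 0.45, 0.88, 0.35, 0.95]         # non-treatment population
idx, cost = matched_controls(treated, pool)
```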

  13. Stroop Color-Word Interference Test: Normative data for Spanish-speaking pediatric population.

    PubMed

    Rivera, D; Morlett-Paredes, A; Peñalver Guia, A I; Irías Escher, M J; Soto-Añari, M; Aguayo Arelis, A; Rute-Pérez, S; Rodríguez-Lorenzana, A; Rodríguez-Agudelo, Y; Albaladejo-Blázquez, N; García de la Cadena, C; Ibáñez-Alfonso, J A; Rodriguez-Irizarry, W; García-Guerrero, C E; Delgado-Mejía, I D; Padilla-López, A; Vergara-Moragues, E; Barrios Nevado, M D; Saracostti Schwartzman, M; Arango-Lasprilla, J C

    2017-01-01

    To generate normative data for the Stroop Word-Color Interference test in Spanish-speaking pediatric populations. The sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Each participant was administered the Stroop Word-Color Interference test as part of a larger neuropsychological battery. The Stroop Word, Stroop Color, Stroop Word-Color, and Stroop Interference scores were normed using multiple linear regressions and standard deviations of residual values. Age, age², sex, and mean level of parental education (MLPE) were included as predictors in the analyses. The final multiple linear regression models showed main effects for age on all scores, except on Stroop Interference for Guatemala, such that scores increased linearly as a function of age. Age² affected Stroop Word scores for all countries; Stroop Color scores for Ecuador, Mexico, Peru, and Spain; Stroop Word-Color scores for Ecuador, Mexico, and Paraguay; and Stroop Interference scores for Cuba, Guatemala, and Spain. MLPE affected Stroop Word scores for Chile, Mexico, and Puerto Rico; Stroop Color scores for Mexico, Puerto Rico, and Spain; Stroop Word-Color scores for Ecuador, Guatemala, Mexico, Puerto Rico and Spain; and Stroop-Interference scores for Ecuador, Mexico, and Spain. Sex affected Stroop Word scores for Spain, Stroop Color scores for Mexico, and Stroop Interference for Honduras. This is the largest Spanish-speaking pediatric normative study in the world, and it will allow neuropsychologists from these countries to have a more accurate approach to interpret the Stroop Word-Color Interference test in pediatric populations.
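    The norming procedure, regressing raw scores on demographic predictors and standardizing by the residual SD, can be sketched as follows (synthetic scores; the actual predictor set also includes sex and MLPE):

```python
import numpy as np

def fit_norms(X, scores):
    """Regress raw scores on predictors and keep the residual SD,
    so any raw score can later be converted to a standardized residual."""
    A = np.column_stack([np.ones(len(scores)), X])
    b, *_ = np.linalg.lstsq(A, scores, rcond=None)
    resid = scores - A @ b
    sd = float(resid.std(ddof=A.shape[1]))
    return b, sd

def z_score(b, sd, x_row, raw):
    """How far a child's raw score sits from the score predicted
    for their demographic profile, in residual-SD units."""
    predicted = b[0] + float(np.dot(b[1:], x_row))
    return (raw - predicted) / sd

rng = np.random.default_rng(7)
age = rng.uniform(6.0, 17.0, 400)
X = np.column_stack([age, age ** 2])       # age and age^2 predictors
scores = 10.0 + 4.0 * age - 0.1 * age ** 2 + 2.0 * rng.standard_normal(400)
b, sd = fit_norms(X, scores)
```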

  14. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
Simulations of one prototypical scenario indicate that LME modeling keeps a balance between control of false positives and sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
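    As a concrete example of the ICC such a model yields, here is a method-of-moments sketch on simulated random-intercept data (a sketch of the quantity an LME with a random subject intercept estimates, not an LME fit itself):

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC from a subjects-by-measurements array:
    between-subject variance as a share of total variance."""
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(8)
subject_effect = 2.0 * rng.standard_normal((100, 1))    # between-subject SD 2
data = subject_effect + rng.standard_normal((100, 4))   # within-subject SD 1
icc = icc_oneway(data)    # true ICC = 4 / (4 + 1) = 0.8
```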

  15. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in details. Published by Elsevier Inc.
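    The core of such an analysis, a random intercept per subject that separates multi-run within-subject variability from between-subject variability (scenario 1 above), can be sketched with statsmodels. The data, variable names, and effect sizes below are synthetic illustrations, not the authors' implementation.

    ```python
    # A minimal LME sketch with statsmodels on synthetic per-run effect
    # estimates; an illustration of scenario (1), not the authors' code.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj, n_runs = 20, 4
    subj = np.repeat(np.arange(n_subj), n_runs)
    subj_dev = rng.normal(0.0, 0.5, n_subj)      # per-subject random deviation
    age = rng.normal(0.0, 1.0, n_subj)           # between-subject covariate
    beta = (1.0 + 0.3 * age[subj] + subj_dev[subj]
            + rng.normal(0.0, 0.2, n_subj * n_runs))
    df = pd.DataFrame({"beta": beta, "age": age[subj], "subj": subj})

    # A random intercept per subject keeps within-subject (multi-run)
    # variability separate from between-subject variability, unlike a
    # plain GLM/ANOVA on run-averaged estimates.
    fit = smf.mixedlm("beta ~ age", df, groups=df["subj"]).fit()
    group_effect = fit.params["Intercept"]       # population-level effect
    ```

    With more elaborate `re_formula` and variance-structure arguments, the same interface covers covariate modeling with repeated-measures factors.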

  16. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.

  17. INTRODUCTION TO A COMBINED MULTIPLE LINEAR REGRESSION AND ARMA MODELING APPROACH FOR BEACH BACTERIA PREDICTION

    EPA Science Inventory

    Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...

  18. General methodology for simultaneous representation and discrimination of multiple object classes

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    We address a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and for classification and pose estimation of two similar objects under 3D aspect angle variations.

  19. Analysis of the thermal comfort model in an environment of metal mechanical branch.

    PubMed

    Pinto, N M; Xavier, A A P; do Amaral, Regiane T

    2012-01-01

    This study aims to identify the correlation between the Predicted Mean Vote (PMV) and the thermal sensation (S) of 55 employees by establishing a multiple linear regression equation. The measurement of environmental variables followed established standards. The survey was conducted in a metal industry located in Ponta Grossa, State of Parana, Brazil. The physical model of thermal comfort was applied to the environmental variables and to the subjective data on the thermal sensations of employees. The survey was conducted from May to November 2010, with 48 measurements. This study will serve as the basis for a dissertation consisting of 72 measurements.

  20. Aperiodic linear networked control considering variable channel delays: application to robots coordination.

    PubMed

    Santos, Carlos; Espinosa, Felipe; Santiso, Enrique; Mazo, Manuel

    2015-05-27

    One of the main challenges in wireless cyber-physical systems is to reduce the load on the communication channel while preserving control performance. This frees communication resources for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism that considers the current network delay of multiple robotic units. Working with the actual network delay instead of the worst-case delay abandons that conservative assumption: the triggering condition is set according to the current state of the network. In this way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both demonstrate the advantage of this solution in terms of network access and control performance with respect to periodic and non-adaptive self-triggered alternatives.

  1. Estimating seasonal evapotranspiration from temporal satellite images

    USGS Publications Warehouse

    Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.

    2012-01-01

    Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. The availability of satellite remote sensing images is limited by the satellite repeat cycle and cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear interpolation, for estimating seasonal ET from temporal remotely sensed images. The Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images from the 2001 crop growing season in the Midwest USA. The model-estimated daily ET was in good agreement (R2 = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method over multiple years is necessary to determine its suitability.
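    The comparison between interpolation methods can be sketched with SciPy: interpolate daily ET between overpass days, then sum to a seasonal total. The overpass dates and ET values below are invented, not the study's METRIC/wMETRIC estimates.

    ```python
    # Hedged sketch: fill daily ET between satellite-overpass days by cubic
    # spline vs. piecewise-linear interpolation, then sum to a seasonal total.
    # Dates and ET values are synthetic stand-ins for Landsat-day estimates.
    import numpy as np
    from scipy.interpolate import CubicSpline, interp1d

    overpass_doy = np.array([120, 136, 152, 168, 184, 200, 216, 232])  # day of year
    et_overpass = np.array([2.1, 3.4, 4.8, 5.9, 6.2, 5.1, 3.6, 2.0])   # mm/day

    days = np.arange(overpass_doy[0], overpass_doy[-1] + 1)
    et_spline = CubicSpline(overpass_doy, et_overpass)(days)
    et_linear = interp1d(overpass_doy, et_overpass)(days)   # piecewise linear

    seasonal_spline = et_spline.sum()   # seasonal ET = sum of daily values (mm)
    seasonal_linear = et_linear.sum()
    ```

    The "fixed" method of the study would instead hold each overpass-day value constant over its surrounding period; with smoothly varying crop ET, all three totals typically land close together, consistent with the reported lack of significant difference.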

  2. [Determination of 25 quinolones in cosmetics by liquid chromatography-tandem mass spectrometry].

    PubMed

    Lin, Li; Zhang, Yi; Tu, Xiaoke; Xie, Liqi; Yue, Zhenfeng; Kang, Haining; Wu, Weidong; Luo, Yao

    2015-03-01

    An analytical method was developed for the simultaneous determination of 25 quinolones, including danofloxacin mesylate, enrofloxacin, flumequine, oxolinic acid, ciprofloxacin, sarafloxacin, nalidixic acid, norfloxacin, and ofloxacin, etc., in cosmetics using direct extraction and liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). Cosmetic samples were extracted with acidified acetonitrile, defatted with n-hexane, separated on a Poroshell EC-C18 column with a gradient elution program using acetonitrile and water (both containing 0.1% formic acid) as the mobile phases, and analyzed by LC-ESI-MS/MS in positive mode using multiple reaction monitoring (MRM). Matrix interference was reduced by matrix-matched calibration standard curves. The method showed good linearity over the range of 1-200 mg/kg for the 25 quinolones, with correlation coefficients r ≥ 0.999. The method detection limit for the 25 quinolones was 1.0 mg/kg, and the recoveries of all analytes in lotion, milky, and cream cosmetic matrices ranged from 87.4% to 105% at spiked levels of 1, 5 and 10 mg/kg, with relative standard deviations (RSD) of 4.54%-19.7% (n = 6). The results indicate that this method is simple, fast and reliable, and suitable for the simultaneous determination of quinolones in the above three types of cosmetics.

  3. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  4. Father and adolescent son variables related to son's HIV prevention.

    PubMed

    Glenn, Betty L; Demi, Alice; Kimble, Laura P

    2008-02-01

    The purpose of this study was to examine the relationship between fathers' influences and African American male adolescents' perceptions of self-efficacy to reduce high-risk sexual behavior. A convenience sample of 70 fathers was recruited from churches in a large metropolitan area in the South. Hierarchical multiple linear regression analysis indicated that father-related factors and son-related factors were associated with 26.1% of the variance in sons' self-efficacy to be abstinent. In the regression model, greater son's perception of the communication of sexual standards and greater father's perception of his son's self-efficacy were significantly related to greater son's self-efficacy for abstinence. The second regression model, with son's self-efficacy for safer sex as the criterion, was not statistically significant. The data support the need for fathers to express confidence in their sons' ability to be abstinent or practice safer sex and to communicate with their sons regarding sexual issues and standards.
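    Hierarchical (blockwise) multiple linear regression of this kind can be sketched with statsmodels: enter one block of predictors, then a second, and read off the R-squared change. The variable names and data below are synthetic stand-ins, not the study's measures.

    ```python
    # Hedged sketch of hierarchical multiple linear regression with synthetic
    # variables loosely named after the study's constructs (illustrative only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 70
    comm = rng.normal(size=n)        # block 1: communication of sexual standards
    conf = rng.normal(size=n)        # block 1: father's confidence in son
    son_age = rng.normal(size=n)     # block 2: son-related factor (noise here)
    efficacy = 0.4 * comm + 0.3 * conf + rng.normal(size=n)

    X1 = sm.add_constant(np.column_stack([comm, conf]))
    X2 = sm.add_constant(np.column_stack([comm, conf, son_age]))

    step1 = sm.OLS(efficacy, X1).fit()
    step2 = sm.OLS(efficacy, X2).fit()
    r2_change = step2.rsquared - step1.rsquared   # variance added by block 2
    ```

    Reporting `r2_change` per block, with its F-test, is what distinguishes hierarchical entry from a single simultaneous regression.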

  5. EEG and MEG data analysis in SPM8.

    PubMed

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.

  6. EEG and MEG Data Analysis in SPM8

    PubMed Central

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221

  7. [Determination of arbutin in apple juice concentrate by ultra performance liquid chromatography with electrospray ionization tandem mass spectrometry].

    PubMed

    Kong, Xianghong; He, Qiang; Yue, Aishan; Wu, Shuangmin; Li, Jianhua

    2010-06-01

    An ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was developed for the determination of arbutin in apple juice concentrate. Samples were diluted with water, then cleaned up on a PS-DVB column. Quantitation was carried out using an external standard method. UPLC was performed on an Eclipse Plus C18 column (100 mm x 2.1 mm, 1.8 μm) using a gradient solvent system (methanol-water). MS/MS was performed in multiple reaction monitoring (MRM) mode. The detection limit of arbutin was 0.02 mg/L. The method showed a good linear relationship in the range of 0.04-2.0 mg/L. The recoveries ranged from 75.2% to 102.7% with relative standard deviations (RSDs) less than 8.9%. The method is simple, fast and sensitive, and suitable for quantitative and qualitative analysis of arbutin in apple juice concentrate.

  8. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
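    The ROS idea can be sketched for the simplest case of a single detection limit: fit a lognormal model to the detected values on a normal-quantile plot, then impute the censored observations from the fitted line. The data below are invented, and the S/R library described in the paper additionally handles multiple detection limits via Hirsch-Stedinger plotting positions, so this is only an illustration.

    ```python
    # Simplified ROS sketch for one detection limit (DL = 0.5), with invented
    # data. The library described in the paper handles multiple limits.
    import numpy as np
    from scipy import stats

    detected = np.sort(np.array([0.8, 1.1, 1.5, 2.3, 3.0, 4.2, 6.5]))  # above DL
    n_cens = 5                                   # observations reported as "<0.5"
    n = detected.size + n_cens

    # Blom plotting positions; censored values occupy the lowest ranks.
    pp_det = (np.arange(n_cens + 1, n + 1) - 0.375) / (n + 0.25)
    z_det = stats.norm.ppf(pp_det)

    # Regression on order statistics: log(concentration) ~ a + b * z.
    b, a, r, _, _ = stats.linregress(z_det, np.log(detected))

    # Impute censored observations from the fitted line at the low quantiles.
    pp_cens = (np.arange(1, n_cens + 1) - 0.375) / (n + 0.25)
    imputed = np.exp(a + b * stats.norm.ppf(pp_cens))

    # Summary statistics come from the combined imputed + detected sample.
    mean_est = np.concatenate([imputed, detected]).mean()
    ```

    The method is "robust" in the sense that detected values enter the summary statistics as observed; only the censored portion comes from the fitted distribution.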

  9. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists of learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each involving a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains compared to several shallow kernels on the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
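    The shallow baseline the paper improves on, a convex combination of elementary kernels plugged into an SVM, can be sketched with scikit-learn. The weights here are hand-set (learning them is what MKL itself adds, and the deep variant stacks such combinations through nonlinearities), and the data are a toy dataset.

    ```python
    # Hedged sketch of the shallow MKL baseline: a fixed convex combination of
    # two elementary kernels fed to an SVM via a precomputed Gram matrix.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

    weights = np.array([0.6, 0.4])   # convex: non-negative, sums to 1
    K = (weights[0] * rbf_kernel(X, X, gamma=1.0)
         + weights[1] * polynomial_kernel(X, X, degree=2))

    clf = SVC(kernel="precomputed").fit(K, y)
    acc = clf.score(K, y)            # training accuracy on the toy data
    ```

    A convex combination of positive semi-definite kernels is itself positive semi-definite, which is the property the paper's recursive deep construction is designed to preserve.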

  10. All set! Evidence of simultaneous attentional control settings for multiple target colors.

    PubMed

    Irons, Jessica L; Folk, Charles L; Remington, Roger W

    2012-06-01

    Although models of visual search have often assumed that attention can only be set for a single feature or property at a time, recent studies have suggested that it may be possible to maintain more than one attentional control setting. The aim of the present study was to investigate whether spatial attention could be guided by multiple attentional control settings for color. In a standard spatial cueing task, participants searched for either of two colored targets accompanied by an irrelevantly colored distractor. Across five experiments, results consistently showed that nonpredictive cues matching either target color produced a significant spatial cueing effect, while irrelevantly colored cues did not. This was the case even when the target colors could not be linearly separated from the irrelevant cue colors in color space, suggesting that participants were not simply adopting one general color set that included both target colors. The results could not be explained by intertrial priming by previous targets, nor could they be explained by a single inhibitory set for the distractor color. Overall, the results are most consistent with the maintenance of multiple attentional control settings.

  11. Quality of search strategies reported in systematic reviews published in stereotactic radiosurgery.

    PubMed

    Faggion, Clovis M; Wu, Yun-Chun; Tu, Yu-Kang; Wasiak, Jason

    2016-06-01

    Systematic reviews require comprehensive literature search strategies to avoid publication bias. This study aimed to assess and evaluate the reporting quality of search strategies within systematic reviews published in the field of stereotactic radiosurgery (SRS). Three electronic databases (Ovid MEDLINE(®), Ovid EMBASE(®) and the Cochrane Library) were searched to identify systematic reviews addressing SRS interventions, with the last search performed in October 2014. Manual searches of the reference lists of included systematic reviews were conducted. The search strategies of the included systematic reviews were assessed using a standardized nine-question form based on the Cochrane Collaboration guidelines and Assessment of Multiple Systematic Reviews checklist. Multiple linear regression analyses were performed to identify the important predictors of search quality. A total of 85 systematic reviews were included. The median quality score of search strategies was 2 (interquartile range = 2). Whilst 89% of systematic reviews reported the use of search terms, only 14% of systematic reviews reported searching the grey literature. Multiple linear regression analyses identified publication year (continuous variable), meta-analysis performance and journal impact factor (continuous variable) as predictors of higher mean quality scores. This study identified the urgent need to improve the quality of search strategies within systematic reviews published in the field of SRS. This study is the first to address how authors performed searches to select clinical studies for inclusion in their systematic reviews. Comprehensive and well-implemented search strategies are pivotal to reduce the chance of publication bias and consequently generate more reliable systematic review findings.

  12. Multiple monolithic fiber solid-phase microextraction based on a polymeric ionic liquid with high-performance liquid chromatography for the determination of steroid sex hormones in water and urine.

    PubMed

    Liao, Keren; Mei, Meng; Li, Haonan; Huang, Xiaojia; Wu, Cuiqin

    2016-02-01

    The development of a simple and sensitive analytical approach that combines multiple monolithic fiber solid-phase microextraction with liquid desorption followed by high-performance liquid chromatography with diode array detection is proposed for the determination of trace levels of seven steroid sex hormones (estriol, 17β-estradiol, testosterone, ethinylestradiol, estrone, progesterone and mestranol) in water and urine matrices. To extract the target analytes effectively, multiple monolithic fiber solid-phase microextraction based on a polymeric ionic liquid was used to concentrate the hormones. Several key extraction parameters, including desorption solvent, extraction and desorption time, and pH value and ionic strength of the sample matrix, were investigated in detail. Under the optimal experimental conditions, the limits of detection were found to be in the range of 0.027-0.12 μg/L. The linear range was 0.10-200 μg/L for 17β-estradiol, 0.25-200 μg/L for estriol, ethinylestradiol and estrone, and 0.50-200 μg/L for the other hormones. Satisfactory linearities were achieved for the analytes, with correlation coefficients above 0.99. Acceptable method reproducibility was achieved by evaluating the repeatability and intermediate precision, with relative standard deviations both less than 8%. The enrichment factors ranged from 54- to 74-fold. Finally, the proposed method was successfully applied to the analysis of steroid sex hormones in environmental water samples and human urine, with spiking recoveries ranging from 75.6% to 116%. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
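    The three-step procedure (transform the repeated measures, fit a linear model in the transformed domain, test the leading coefficients) can be sketched as follows. The data are synthetic, a fixed cutoff stands in for adaptive Neyman truncation, and per-coefficient two-group t-tests stand in for the full multivariate model.

    ```python
    # Hedged sketch of transform-then-test for curves: move repeated measures
    # to the Fourier domain, then compare groups on the leading coefficient
    # magnitudes. Synthetic data; truncation is a fixed 5-coefficient cutoff.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, t = 40, 64
    group = np.repeat([0, 1], n // 2)
    grid = np.linspace(0.0, 2.0 * np.pi, t, endpoint=False)
    curves = np.sin(grid) * (1.0 + 0.5 * group[:, None])  # group 1: larger amplitude
    curves += rng.normal(0.0, 0.3, (n, t))

    coef = np.abs(np.fft.rfft(curves, axis=1))[:, :5]     # leading Fourier magnitudes

    # Test the group effect coefficient-by-coefficient in the transformed domain.
    tstats = np.array([
        stats.ttest_ind(coef[group == 0, k], coef[group == 1, k]).statistic
        for k in range(5)
    ])
    stat = float(np.sum(tstats ** 2))                     # truncated test statistic
    ```

    Because the group difference is concentrated in a few low-order coefficients, truncating before summing the squared statistics is what gives the adaptive Neyman approach its power against smooth alternatives.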

  14. Multiple solution of linear algebraic systems by an iterative method with recomputed preconditioner in the analysis of microstrip structures

    NASA Astrophysics Data System (ADS)

    Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.

    2016-06-01

    The multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change in the arithmetic mean of the solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, a speedup of up to 1.6 times compared to the approach without recomputing is obtained.
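    The recomputing idea can be sketched with SciPy's sparse tools. The paper works with dense method-of-moments matrices and its own a priori condition, so the sparse test matrix, the slow drift, and the simplified trigger (last solve slower than the running mean solve time) below are illustrative assumptions.

    ```python
    # Hedged sketch: solve a sequence of slowly changing linear systems with
    # BiCGStab, recomputing the ILU preconditioner when the last solve time
    # exceeds the running mean. Illustrative only; not the paper's dense setup.
    import time
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 500
    A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    ilu = spla.spilu(A)                                   # initial preconditioner
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    times = []
    for k in range(20):
        A = A + 1e-3 * sp.eye(n, format="csc")            # slowly changing system
        b = np.random.default_rng(k).standard_normal(n)
        t0 = time.perf_counter()
        x, info = spla.bicgstab(A, b, M=M)                # info == 0 on convergence
        dt = time.perf_counter() - t0
        # A priori recomputing condition: solve time exceeds the running mean.
        if times and dt > np.mean(times):
            ilu = spla.spilu(A)
            M = spla.LinearOperator((n, n), matvec=ilu.solve)
        times.append(dt)
    ```

    Reusing a stale preconditioner is cheap but gradually degrades convergence as the matrix drifts; triggering on solve time rather than on a fixed schedule is what makes the condition adaptive.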

  15. A partially penalty immersed Crouzeix-Raviart finite element method for interface problems.

    PubMed

    An, Na; Yu, Xijun; Chen, Huanzhen; Huang, Chaobao; Liu, Zhongyan

    2017-01-01

    Elliptic equations with discontinuous coefficients are often used to describe problems involving multiple materials or fluids with different densities, conductivities, or diffusivities. In this paper we develop a partially penalty immersed finite element (PIFE) method on triangular grids for anisotropic flow models, in which the diffusion coefficient is a piecewise positive-definite matrix. The standard linear Crouzeix-Raviart type finite element space is used on non-interface elements and the piecewise linear Crouzeix-Raviart type immersed finite element (IFE) space is constructed on interface elements. The piecewise linear functions satisfying the interface jump conditions are uniquely determined by the integral averages on the edges as degrees of freedom. The PIFE scheme is given based on the symmetric, nonsymmetric or incomplete interior penalty discontinuous Galerkin formulation. The solvability of the method is proved and optimal error estimates in the energy norm are obtained. Numerical experiments are presented to confirm our theoretical analysis and show that the newly developed PIFE method has optimal-order convergence in the [Formula: see text] norm as well. In addition, numerical examples also indicate that this method is valid for both the isotropic and the anisotropic elliptic interface problems.

  16. Developing a multipoint titration method with a variable dose implementation for anaerobic digestion monitoring.

    PubMed

    Salonen, K; Leisola, M; Eerikäinen, T

    2009-01-01

    Determination of metabolites from an anaerobic digester by acid-base titration is considered a superior method for many reasons. This paper describes a practical at-line-compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear PI-controller-like algorithm does not require any preliminary information about the sample. Its performance is superior to that of traditional linear PI-controllers. In addition, a simplification for presenting polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for including the ionic strength effect by stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.

  17. How does non-linear dynamics affect the baryon acoustic oscillation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu

    2014-02-01

    We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbation undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.

  18. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    ERIC Educational Resources Information Center

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…

  19. Conjoint Analysis: A Study of the Effects of Using Person Variables.

    ERIC Educational Resources Information Center

    Fraas, John W.; Newman, Isadore

    Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…

  20. Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials

    PubMed Central

    Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.

    2013-01-01

    Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. 
Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different from those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
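
    A minimal sketch of the moment-based idea behind such corrections: when a covariate is recorded with error, its regression slope is attenuated, and an audit subsample in which the true values are re-abstracted lets one estimate the error variance and undo the attenuation. All numbers and the single-covariate setup are illustrative, not taken from the study.

    ```python
    import random

    random.seed(42)

    # Simulate a trial-like dataset: true covariate x, error-prone recorded w,
    # outcome y generated from the true x (true slope = 2.0).
    n, beta = 20000, 2.0
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    w = [xi + random.gauss(0.0, 1.0) for xi in x]          # recorded with error
    y = [beta * xi + random.gauss(0.0, 0.5) for xi in x]

    def slope(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        sxx = sum((a - mu) ** 2 for a in u)
        return sxy / sxx

    naive = slope(w, y)                    # attenuated toward zero

    # Audit: the true x is re-abstracted for a random subsample.
    audit = random.sample(range(n), 2000)
    err = [w[i] - x[i] for i in audit]
    me = sum(err) / len(err)
    var_u = sum((e - me) ** 2 for e in err) / (len(err) - 1)
    mw = sum(w) / n
    var_w = sum((wi - mw) ** 2 for wi in w) / (n - 1)
    lam = 1.0 - var_u / var_w              # estimated reliability ratio
    corrected = naive / lam                # moment-corrected slope estimate
    ```

    Here the naive slope lands near 1.0 (half the truth, since the error variance equals the covariate variance), while the audit-corrected estimate recovers roughly 2.0.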

  1. Anomalous diffusion with linear reaction dynamics: from continuous time random walks to fractional reaction-diffusion equations.

    PubMed

    Henry, B I; Langlands, T A M; Wearne, S L

    2006-09-01

    We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
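
    Schematically, with u the walker density, K_α a generalized diffusion coefficient, k the linear rate, and a Riemann-Liouville derivative of order 1−α (our notation, not taken from the paper), the two contrasted forms can be written as:

    ```latex
    % Instantaneous addition/removal at the start of each step:
    \frac{\partial u}{\partial t}
      = {}_{0}D_{t}^{1-\alpha}\!\left[K_\alpha \nabla^{2} u - k\,u\right]

    % Phenomenological model: standard reaction kinetics, fractional diffusion:
    \frac{\partial u}{\partial t}
      = {}_{0}D_{t}^{1-\alpha}\!\left[K_\alpha \nabla^{2} u\right] - k\,u
    ```

    The difference is only whether the fractional operator acts on the reaction term as well as the diffusion term, which is exactly the distinction drawn in the abstract.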

  2. Rapid detection and identification of N-acetyl-L-cysteine thioethers using constant neutral loss and theoretical multiple reaction monitoring combined with enhanced product-ion scans on a linear ion trap mass spectrometer.

    PubMed

    Scholz, Karoline; Dekant, Wolfgang; Völkel, Wolfgang; Pähler, Axel

    2005-12-01

A sensitive and specific liquid chromatography-mass spectrometry (LC-MS) method based on the combination of constant neutral loss scans (CNL) with product ion scans was developed on a linear ion trap. The method is applicable for the detection and identification of analytes with identical chemical substructures (such as conjugates of xenobiotics formed in biological systems) which give common CNLs. A specific CNL was observed for thioethers of N-acetyl-L-cysteine (mercapturic acids, MA) by LC-MS/MS. MS and HPLC parameters were optimized with 16 MAs available as reference compounds. All of these provided a CNL of 129 Da in the negative-ion mode. To assess sensitivity, a multiple reaction monitoring (MRM) mode with 251 theoretical transitions using the CNL of 129 Da combined with a product ion scan (IDA thMRM) was compared with CNL combined with a product ion scan (IDA CNL). An information-dependent acquisition (IDA) uses a survey scan, such as MRM, to generate information and then starts a second acquisition experiment, such as a product ion scan, using that information. Th-MRM denotes calculated transitions rather than transitions generated from an available standard in the tuning mode. The product ion spectra provide additional information on the chemical structure of the unknown analytes. All MA standards were spiked at low concentrations into rat urine and were detected with both methods, with LODs ranging from 60 pmol/mL to 1.63 nmol/mL with IDA thMRM. The expected product ion spectra were observed in urine. Application of this screening method to biological samples indicated the presence of a number of MAs in urine of unexposed rats, and resulted in the identification of 1,4-dihydroxynonene mercapturic acid as one of these MAs by negative and positive product ion spectra.
These results show that the developed methods have high potential to serve both as a prescreen to detect unknown MAs and as a means to identify these analytes in complex matrices.

  3. Linear versus non-linear measures of temporal variability in finger tapping and their relation to performance on open- versus closed-loop motor tasks: comparing standard deviations to Lyapunov exponents.

    PubMed

    Christman, Stephen D; Weaver, Ryan

    2008-05-01

    The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
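
    The two kinds of variability measure can be illustrated on a synthetic chaotic series rather than tapping data (a sketch under our own assumptions): the standard deviation is the linear measure, while for a map with known dynamics the largest Lyapunov exponent is the orbit average of ln|f′(x)|.

    ```python
    import math, statistics

    # Chaotic series from the fully developed logistic map x -> 4x(1-x).
    x, series = 0.1234, []
    for _ in range(200000):
        x = 4.0 * x * (1.0 - x)
        series.append(x)

    # Linear measure of variability: the ordinary standard deviation.
    sd = statistics.pstdev(series)

    # Non-linear measure: largest Lyapunov exponent via the orbit average of
    # ln|f'(x)|, with f'(x) = 4(1 - 2x); the known value here is ln 2.
    lyap = sum(math.log(abs(4.0 * (1.0 - 2.0 * xi))) for xi in series) / len(series)
    ```

    For tapping data, where the map is unknown, the exponent would instead be estimated from the measured inter-tap intervals (e.g. by a nearest-neighbor divergence method), but the interpretation is the same: sd quantifies spread, lyap quantifies sensitivity to initial conditions.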

  4. Analyzing Multilevel Data: An Empirical Comparison of Parameter Estimates of Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2011-01-01

    Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
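
    The misestimated-standard-error point can be seen in a small simulation (synthetic clustered data of our own design): when a predictor is constant within clusters and errors share a cluster component, the textbook OLS standard error is far smaller than the true sampling variability of the slope.

    ```python
    import random, math

    random.seed(7)

    def ols_slope_and_se(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        a = my - b * mx
        rss = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
        se = math.sqrt(rss / (n - 2) / sxx)   # textbook OLS standard error
        return b, se

    slopes, naive_ses = [], []
    for _ in range(300):
        xs, ys = [], []
        for _c in range(20):                  # 20 clusters (e.g. schools)
            xc = random.gauss(0, 1)           # cluster-level predictor
            uc = random.gauss(0, 1)           # shared cluster random effect
            for _i in range(20):              # 20 units per cluster
                xs.append(xc)
                ys.append(2.0 * xc + uc + random.gauss(0, 1))
        b, se = ols_slope_and_se(xs, ys)
        slopes.append(b)
        naive_ses.append(se)

    mean_slope = sum(slopes) / len(slopes)
    emp_sd = (sum((b - mean_slope) ** 2 for b in slopes) / (len(slopes) - 1)) ** 0.5
    mean_naive_se = sum(naive_ses) / len(naive_ses)
    ```

    The slope itself is unbiased, but the average naive SE is roughly a third of the empirical sampling SD, which is the inferential failure HLM is designed to repair by modeling the cluster level explicitly.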

  5. Simultaneous identification and quantification of tetrodotoxin in fresh pufferfish and pufferfish-based products using immunoaffinity columns and liquid chromatography/quadrupole-linear ion trap mass spectrometry

    NASA Astrophysics Data System (ADS)

    Guo, Mengmeng; Wu, Haiyan; Jiang, Tao; Tan, Zhijun; Zhao, Chunxia; Zheng, Guanchao; Li, Zhaoxin; Zhai, Yuxiu

    2017-07-01

In this study, we established a comprehensive method for simultaneous identification and quantification of tetrodotoxin (TTX) in fresh pufferfish tissues and pufferfish-based products using liquid chromatography/quadrupole-linear ion trap mass spectrometry (LC-QqLIT-MS). TTX was extracted with 1% acetic acid-methanol, and most of the lipids were then removed by freezing lipid precipitation, followed by purification and concentration using immunoaffinity columns (IACs). Matrix effects were substantially reduced due to the high specificity of the IACs, and thus, background interference was avoided. Quantitation analysis was therefore performed using an external calibration curve with standards prepared in mobile phase. The method was evaluated by fortifying samples at 1, 10, and 100 ng/g; the recoveries ranged from 75.8% to 107%, with a relative standard deviation of less than 15%. The TTX calibration curves were linear over the range of 1-1000 μg/L, with a detection limit of 0.3 ng/g and a quantification limit of 1 ng/g. Using this method, samples can be further analyzed using an information-dependent acquisition (IDA) experiment, in the positive mode, from a single liquid chromatography-tandem mass spectrometry injection, which can provide an extra level of confirmation by matching the full product ion spectra acquired for a standard sample with those from an enhanced product ion (EPI) library. The scheduled multiple reaction monitoring method enabled TTX to be screened for, and TTX was positively identified using the IDA and EPI spectra. This method was successfully applied to analyze a total of 206 samples of fresh pufferfish tissues and pufferfish-based products. The results from this study show that the proposed method can be used to quantify and identify TTX in a single run with excellent sensitivity and reproducibility, and is suitable for the analysis of complex matrix pufferfish samples.
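
    External calibration of the kind described reduces to a least-squares line of instrument response against standard concentration, then back-calculation of unknowns. The concentrations below match the stated linear range; the peak areas are invented for illustration.

    ```python
    # Fit an external calibration line (area vs concentration) and back-calculate.
    levels = [1, 5, 10, 50, 100, 500, 1000]                  # standards, ug/L
    areas  = [212, 1015, 2040, 10110, 20010, 99800, 200500]  # illustrative responses

    n = len(levels)
    mx, my = sum(levels) / n, sum(areas) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(levels, areas))
             / sum((x - mx) ** 2 for x in levels))
    intercept = my - slope * mx

    def quantify(area):
        # Concentration of an unknown from its measured area.
        return (area - intercept) / slope

    conc = quantify(20010)   # back-calculate the 100 ug/L standard
    ```

    Back-calculating each calibration level this way (and comparing with the nominal value) is also how the 75.8%-107% recovery figures in such validations are typically computed.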

  6. Non-linear molecular pattern classification using molecular beacons with multiple targets.

    PubMed

    Lee, In-Hee; Lee, Seung Hwan; Park, Tai Hyun; Zhang, Byoung-Tak

    2013-12-01

    In vitro pattern classification has been highlighted as an important future application of DNA computing. Previous work has demonstrated the feasibility of linear classifiers using DNA-based molecular computing. However, complex tasks require non-linear classification capability. Here we design a molecular beacon that can interact with multiple targets and experimentally shows that its fluorescent signals form a complex radial-basis function, enabling it to be used as a building block for non-linear molecular classification in vitro. The proposed method was successfully applied to solving artificial and real-world classification problems: XOR and microRNA expression patterns. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
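
    The computational idea, stripped of the wet-lab chemistry, is that radial-basis responses make XOR linearly separable: with Gaussian bumps centered on the two same-class points, a single threshold on the summed response classifies all four XOR inputs. This is a software analogue of the beacon signal, with centers and threshold chosen by us.

    ```python
    import math

    # XOR truth table: (0,0) and (1,1) are class 0; (0,1) and (1,0) are class 1.
    points = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    def rbf(x, c):
        # Gaussian radial-basis response to input x around center c.
        return math.exp(-((x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2))

    centers = [(0, 0), (1, 1)]   # one "beacon" per class-0 point

    def classify(x):
        s = rbf(x, centers[0]) + rbf(x, centers[1])
        return 0 if s > 0.9 else 1   # a single threshold now separates XOR

    predictions = {p: classify(p) for p in points}
    ```

    No linear function of the raw coordinates can separate XOR, which is why the non-linear (radial-basis) response of the multi-target beacon is the essential ingredient.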

  7. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

Given the increasing need for complete rainfall data networks, diverse methods, progressively more advanced than traditional approaches, have been proposed in recent years for filling gaps in observed precipitation series. The present study validated 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data of multiple incomplete series at the same time in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), which is characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, highlighting that the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far from NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.
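
    The quantile-mapping post-processing mentioned here has a simple empirical form: each filled value is replaced by the value at the same rank-quantile of a reference (observed) distribution. A minimal sketch with invented rainfall values:

    ```python
    import bisect

    def quantile_map(values, reference):
        # Map each value to the reference value at the same empirical quantile.
        src = sorted(values)
        ref = sorted(reference)
        n = len(src)
        out = []
        for v in values:
            q = bisect.bisect_left(src, v) / (n - 1)   # quantile within source
            idx = min(int(round(q * (len(ref) - 1))), len(ref) - 1)
            out.append(ref[idx])
        return out

    imputed  = [3.1, 0.0, 12.5, 1.2, 7.8, 0.4]   # gap-filled daily rainfall, mm
    observed = [0.0, 0.0, 2.0, 5.0, 9.0, 20.0]   # reference distribution, mm
    adjusted = quantile_map(imputed, observed)
    ```

    After mapping, the adjusted series has exactly the reference distribution (here, the same multiset of values), which is the point of the correction: imputation methods tend to smooth rainfall, and quantile mapping restores the observed distribution's shape and extremes.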

  8. A Procedure to Simultaneously Determine the Calcium, Chromium, and Titanium Isotopic Compositions of Astromaterials

    NASA Technical Reports Server (NTRS)

    Tappa, M. J.; Simon, J. I; Jordan, M. K.; Young, E. D.

    2015-01-01

Many elements display both linear (mass-dependent) and non-linear (mass-independent) isotope anomalies (relative to a common reservoir). In early Solar System objects, with the exception of oxygen, mass-dependent isotope anomalies are most commonly thought to result from phase separation processes such as evaporation and condensation, whereas many mass-independent isotope anomalies likely reflect radiogenic ingrowth or incomplete mixing of presolar components in the proto-planetary disk. Coupling the isotopic characterization of multiple elements with differing volatilities in single objects may provide information regarding the location, source material, and/or processes involved in the formation of early Solar System solids. Here, we follow up on previously presented work and detail new procedures developed to make high-precision multi-isotope measurements of calcium, chromium, and titanium with small or limited amounts of sample using thermal ionization mass spectrometry and multi-collector ICP-MS, and characterize a suite of chondritic and terrestrial standards.

  9. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
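
    The core trick of avoiding the WHAM iteration can be sketched in one dimension: each window's biased density gives the free energy up to an unknown per-window constant, and those constants can be fixed by making neighboring windows agree on their overlap (a simplified mean-difference stand-in for the paper's multivariate regression; the quadratic free energy, harmonic biases, and grid are all our own synthetic setup).

    ```python
    import math

    # Grid and "true" free energy, used only to generate synthetic window data.
    xs = [i * 0.05 - 2.0 for i in range(81)]           # x in [-2, 2]
    F  = [x * x for x in xs]                           # true PMF (unknown in practice)

    # Harmonic umbrella windows: w_i(x) = (k/2)(x - x0_i)^2
    k = 10.0
    centers = [-1.5, -0.75, 0.0, 0.75, 1.5]

    def window_density(x0):
        raw = [math.exp(-(f + 0.5 * k * (x - x0) ** 2)) for x, f in zip(xs, F)]
        z = sum(raw)
        return [r / z for r in raw]

    # Per-window estimate of F up to an unknown additive constant C_i:
    #   -ln p_i(x) - w_i(x) = F(x) - C_i
    locals_ = []
    for x0 in centers:
        p = window_density(x0)
        locals_.append([-math.log(pi) - 0.5 * k * (x - x0) ** 2
                        for x, pi in zip(xs, p)])

    # Stitch: choose offsets so consecutive windows agree (here via the mean
    # difference over the grid; the paper does this with linear regression).
    offsets = [0.0]
    for i in range(len(locals_) - 1):
        diffs = [b - a for a, b in zip(locals_[i], locals_[i + 1])]
        offsets.append(offsets[-1] + sum(diffs) / len(diffs))

    stitched = [sum(li[j] - off for li, off in zip(locals_, offsets)) / len(locals_)
                for j in range(len(xs))]
    # Remove the remaining global constant by anchoring at x = 0 (index 40).
    Fest = [s - stitched[40] for s in stitched]
    ```

    With analytic (noise-free) window densities the stitched profile reproduces the true PMF to floating-point precision; with sampled histograms the same offsets would be obtained by least squares over the overlapping bins, with no self-consistent iteration.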

  10. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  11. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-10-07

We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  12. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  13. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide a much higher accuracy. The line of sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear system can be solved with iterative solvers or direct solvers. However, the runtime of direct methods, or that of iterative solvers without a suitable preconditioner, increases tremendously. This is the reason why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method which splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step, the multiplicative variant of the Schwarz alternating algorithm successively solves the linear systems associated with the matrices obtained from the splitting. It reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations in terms of both accuracy and runtime.
Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
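
    The multiplicative Schwarz iteration is easiest to see on a toy system: split the unknowns into overlapping blocks, solve each block's subsystem in turn using the latest values of the other block at the artificial boundary, and sweep until convergence. A minimal sketch on a 1D Poisson-type tridiagonal system (our own toy problem, far smaller than the 14641-unknown normal equations):

    ```python
    def thomas(sub, dia, sup, rhs):
        # Direct solve of a tridiagonal system (Thomas algorithm).
        m = len(dia)
        cp, dp = [0.0] * m, [0.0] * m
        cp[0] = sup[0] / dia[0]
        dp[0] = rhs[0] / dia[0]
        for i in range(1, m):
            denom = dia[i] - sub[i] * cp[i - 1]
            cp[i] = sup[i] / denom if i < m - 1 else 0.0
            dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
        x = [0.0] * m
        x[-1] = dp[-1]
        for i in range(m - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # System: 2u_i - u_{i-1} - u_{i+1} = 1, u_0 = u_{n+1} = 0.
    n = 40
    b = [1.0] * n
    u = [0.0] * n
    blocks = [(0, 25), (15, 40)]          # two overlapping index ranges

    for _sweep in range(60):
        for (s, e) in blocks:             # multiplicative: reuse latest values
            m = e - s
            rhs = b[s:e]
            if s > 0:
                rhs[0] += u[s - 1]        # Dirichlet data from current iterate
            if e < n:
                rhs[-1] += u[e]
            u[s:e] = thomas([-1.0] * m, [2.0] * m, [-1.0] * m, rhs)

    # Exact solution for comparison: u_i = (i+1)(n-i)/2 (0-indexed).
    err = max(abs(u[i] - (i + 1) * (n - i) / 2) for i in range(n))
    ```

    Only block-sized systems are ever factored, which is the memory and runtime advantage the abstract describes; the overlap between blocks is what drives the geometric convergence of the sweeps.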

  14. Identifying maternal and infant factors associated with newborn size in rural Bangladesh by partial least squares (PLS) regression analysis

    PubMed Central

    Rahman, Md. Jahanur; Shamim, Abu Ahmed; Klemm, Rolf D. W.; Labrique, Alain B.; Rashid, Mahbubur; Christian, Parul; West, Keith P.

    2017-01-01

Birth weight, length and circumferences of the head, chest and arm are key measures of newborn size and health in developing countries. We assessed maternal socio-demographic factors associated with multiple measures of newborn size in a large rural population in Bangladesh using the partial least squares (PLS) regression method. PLS regression, combining features from principal component analysis and multiple linear regression, is a multivariate technique with the ability to handle multicollinearity while simultaneously handling multiple dependent variables. We analyzed maternal and infant data from singletons (n = 14,506) born during a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural northwest Bangladesh. PLS regression results identified numerous maternal factors (parity, age, early pregnancy MUAC, living standard index, years of education, number of antenatal care visits, preterm delivery and infant sex) significantly (p<0.001) associated with newborn size. Among them, preterm delivery had the largest negative influence on newborn size (Standardized β = −0.29 to −0.19; p<0.001). Scatter plots of the scores of the first two PLS components also revealed an interaction between newborn sex and preterm delivery on birth size. PLS regression was found to be more parsimonious than both ordinary least squares regression and principal component regression. It also provided more stable estimates than ordinary least squares regression and provided the effect measures of the covariates with greater accuracy, as it accounts for the correlation among the covariates and outcomes. Therefore, PLS regression is recommended when there are multiple outcome measurements in the same study, when the covariates are correlated, or when both situations exist in a dataset. PMID:29261760

  15. Identifying maternal and infant factors associated with newborn size in rural Bangladesh by partial least squares (PLS) regression analysis.

    PubMed

    Kabir, Alamgir; Rahman, Md Jahanur; Shamim, Abu Ahmed; Klemm, Rolf D W; Labrique, Alain B; Rashid, Mahbubur; Christian, Parul; West, Keith P

    2017-01-01

Birth weight, length and circumferences of the head, chest and arm are key measures of newborn size and health in developing countries. We assessed maternal socio-demographic factors associated with multiple measures of newborn size in a large rural population in Bangladesh using the partial least squares (PLS) regression method. PLS regression, combining features from principal component analysis and multiple linear regression, is a multivariate technique with the ability to handle multicollinearity while simultaneously handling multiple dependent variables. We analyzed maternal and infant data from singletons (n = 14,506) born during a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural northwest Bangladesh. PLS regression results identified numerous maternal factors (parity, age, early pregnancy MUAC, living standard index, years of education, number of antenatal care visits, preterm delivery and infant sex) significantly (p<0.001) associated with newborn size. Among them, preterm delivery had the largest negative influence on newborn size (Standardized β = −0.29 to −0.19; p<0.001). Scatter plots of the scores of the first two PLS components also revealed an interaction between newborn sex and preterm delivery on birth size. PLS regression was found to be more parsimonious than both ordinary least squares regression and principal component regression. It also provided more stable estimates than ordinary least squares regression and provided the effect measures of the covariates with greater accuracy, as it accounts for the correlation among the covariates and outcomes. Therefore, PLS regression is recommended when there are multiple outcome measurements in the same study, when the covariates are correlated, or when both situations exist in a dataset.
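
    The single-response NIPALS variant of PLS conveys the mechanics: each component extracts the X-direction of maximal covariance with y, regresses y on the resulting score, and deflates X, so collinear predictors (here a column that is the exact sum of two others) cause no breakdown. A minimal sketch on tiny invented data, not the study's:

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def pls1_fit(X, y, ncomp):
        # PLS1 via NIPALS; returns fitted values on the training data.
        n, p = len(X), len(X[0])
        xm = [sum(r[j] for r in X) / n for j in range(p)]
        ym = sum(y) / n
        E = [[r[j] - xm[j] for j in range(p)] for r in X]   # centered X
        f = [v - ym for v in y]                             # centered y
        fitted = [ym] * n
        for _ in range(ncomp):
            w = [sum(E[i][j] * f[i] for i in range(n)) for j in range(p)]
            norm = sum(v * v for v in w) ** 0.5
            w = [v / norm for v in w]                       # weight vector
            t = [dot(E[i], w) for i in range(n)]            # scores
            tt = dot(t, t)
            q = dot(t, f) / tt                              # y-loading
            pl = [sum(E[i][j] * t[i] for i in range(n)) / tt for j in range(p)]
            for i in range(n):
                for j in range(p):
                    E[i][j] -= t[i] * pl[j]                 # deflate X
                f[i] -= q * t[i]                            # deflate y
                fitted[i] += q * t[i]
        return fitted

    # Collinear design: third column is exactly col1 + col2; y = x1 + 2*x2.
    X = [[1, 0, 1], [0, 1, 1], [2, 1, 3], [1, 2, 3], [3, 0, 3], [0, 3, 3]]
    y = [1, 2, 4, 5, 3, 6]
    fitted = pls1_fit(X, y, 2)   # rank of X is 2, so 2 components fit exactly
    ```

    OLS on this design would face a singular normal matrix; PLS simply never inverts it, which is the multicollinearity robustness the abstract emphasizes.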

  16. Solving a mixture of many random linear equations by tensor decomposition and alternating minimization.

    DOT National Transportation Integrated Search

    2016-09-01

    We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...

  17. Multiple pass laser amplifier system

    DOEpatents

    Brueckner, Keith A.; Jorna, Siebe; Moncur, N. Kent

    1977-01-01

A laser amplification method for increasing the energy extraction efficiency of laser amplifiers while reducing the energy flux that passes through a flux-limited system. The system includes apparatus for decomposing a linearly polarized light beam into multiple components, passing the components through an amplifier in delayed time sequence, and recombining the amplified components into an in-phase linearly polarized beam.

  18. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random input-describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
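
    For a scalar memoryless nonlinearity with zero-mean Gaussian input, the optimal mean-square linear approximation has gain N = E[x f(x)]/E[x^2], and for a unit-slope saturation with limit a this evaluates to erf(a/(σ√2)). The quadrature check below (our own example nonlinearity, not the paper's MIMO case) illustrates the single-integral evaluations that the paper generalizes to multiple integrals:

    ```python
    import math

    # Random-input describing function of a unit-slope saturation with limit a,
    # for zero-mean Gaussian input of standard deviation sigma.
    a, sigma = 1.0, 1.5

    def f(x):
        # Saturation nonlinearity.
        return max(-a, min(a, x))

    # N = E[x f(x)] / E[x^2], evaluated by quadrature over the Gaussian density.
    dx = 0.001
    num = sum(x * f(x) * math.exp(-x * x / (2 * sigma * sigma))
              for x in (i * dx for i in range(-8000, 8001))) * dx
    N = num / (sigma * math.sqrt(2 * math.pi)) / (sigma * sigma)

    N_analytic = math.erf(a / (sigma * math.sqrt(2.0)))   # closed form
    ```

    For MIMO nonlinearities the expectation becomes a multiple integral over a jointly Gaussian input vector, which is exactly the computation the paper shows can sometimes be obviated.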

  19. What Is Wrong with ANOVA and Multiple Regression? Analyzing Sentence Reading Times with Hierarchical Linear Models

    ERIC Educational Resources Information Center

    Richter, Tobias

    2006-01-01

    Most reading time studies using naturalistic texts yield data sets characterized by a multilevel structure: Sentences (sentence level) are nested within persons (person level). In contrast to analysis of variance and multiple regression techniques, hierarchical linear models take the multilevel structure of reading time data into account. They…

  20. Some Applied Research Concerns Using Multiple Linear Regression Analysis.

    ERIC Educational Resources Information Center

    Newman, Isadore; Fraas, John W.

    The intention of this paper is to provide an overall reference on how a researcher can apply multiple linear regression in order to utilize the advantages that it has to offer. The advantages and some concerns expressed about the technique are examined. A number of practical ways by which researchers can deal with such concerns as…

  1. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR in SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using the first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the T1 estimates from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
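
    The SPGR signal model and the widely used fixed-TR linearization the abstract refers to can be written (in our notation) as:

    ```latex
    S(\alpha) = M_0 \sin\alpha \,\frac{1 - E_1}{1 - E_1\cos\alpha},
    \qquad E_1 = e^{-TR/T_1}

    % Fixed-TR linearization: a straight line in (S/\tan\alpha,\; S/\sin\alpha)
    % with slope E_1, so T_1 follows from the fitted slope:
    \frac{S}{\sin\alpha} = E_1\,\frac{S}{\tan\alpha} + M_0\,(1 - E_1),
    \qquad T_1 = -\frac{TR}{\ln E_1}
    ```

    Because E_1 depends on TR, this line only exists when all images share one TR, which is the constraint the proposed Taylor-expansion method removes.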

  2. New methods and results for quantification of lightning-aircraft electrodynamics

    NASA Technical Reports Server (NTRS)

    Pitts, Felix L.; Lee, Larry D.; Perala, Rodney A.; Rudolph, Terence H.

    1987-01-01

The NASA F-106 collected data on the rates of change of electromagnetic parameters on the aircraft surface during over 700 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 ft (4,570 to 12,190 m). These in situ measurements provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining indirect lightning effects on aircraft. These data are used to update previous lightning criteria and standards developed over the years from ground-based measurements. The proposed standards will be the first to reflect actual aircraft responses measured at flight altitudes. Nonparametric maximum likelihood estimates of the distribution of the peak electromagnetic rates of change for consideration in the new standards are obtained based on peak recorder data for multiple-strike flights. The linear and nonlinear modeling techniques developed provide means to interpret and understand the direct-strike electromagnetic data acquired on the F-106. The reasonable results obtained with the models, compared with measured responses, provide increased confidence that the models may be credibly applied to other aircraft.

  3. Determination of tiropramide in human plasma by liquid chromatography-tandem mass spectrometry.

    PubMed

    Lee, Hye Won; Ji, Hye Young; Kim, Hee Hyun; Cho, Hea-Young; Lee, Yong-Bok; Lee, Hye Suk

    2003-11-05

A rapid, sensitive and selective liquid chromatography-tandem mass spectrometric (LC/MS/MS) method for the determination of tiropramide in human plasma was developed. Tiropramide and the internal standard, cisapride, were extracted from human plasma by liquid-liquid extraction and analyzed on a Luna C8 column with a mobile phase of acetonitrile-ammonium formate (10 mM, pH 4.5) (50:50, v/v). The analytes were detected using electrospray ionization tandem mass spectrometry in the multiple-reaction-monitoring mode. The standard curve was linear (r=0.998) over the concentration range of 2.0-200 ng/ml. The intra- and inter-assay coefficients of variation ranged from 2.8 to 7.8% and 6.7 to 8.9%, respectively. The recoveries of tiropramide ranged from 50.2 to 53.1%, with that of cisapride (internal standard) being 60.9+/-5.3%. The lower limit of quantification for tiropramide was 2.0 ng/ml using a 100 microl plasma sample. This method was applied to a pharmacokinetic study of tiropramide in humans.

  4. Sex Differences in Diabetes Mellitus Mortality Trends in Brazil, 1980-2012.

    PubMed

    Malhão, Thainá Alves; Brito, Alexandre Dos Santos; Pinheiro, Rejane Sobrino; Cabral, Cristiane da Silva; Camargo, Thais Medina Coeli Rochel de; Coeli, Claudia Medina

    2016-01-01

To investigate the hypothesis that the change from the female predominance of diabetes mellitus to a pattern of equality or even male preponderance can already be observed in Brazilian mortality statistics. Data on deaths for which diabetes mellitus was listed as the underlying cause were obtained from the Brazilian Mortality Information System for the years 1980 to 2012. The mortality data were also analyzed according to the multiple causes of death approach from 2001 to 2012. The population data came from the Brazilian Institute of Geography and Statistics. The mortality rates were standardized to the world population. We used log-linear joinpoint regression to evaluate trends in age-standardized mortality rates (ASMR). From 1980 to 2012, we found a marked increment in the diabetes ASMR among Brazilian men and a less sharp increase among women, with the latter period (2003-2012) showing a slight, though not statistically significant, decrease among women. The results of this study suggest that diabetes mellitus in Brazil has changed from a pattern of higher mortality among women compared to men to equality or even male predominance.
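
    Within each segment of a joinpoint analysis, the trend is summarized by a log-linear fit whose slope converts to an annual percent change (APC). A minimal sketch with synthetic rates (a noise-free 3%-per-year rise of our own construction, not the Brazilian data):

    ```python
    import math

    # Log-linear trend: ln(rate) = a + b*year, so APC = 100*(e^b - 1).
    years = list(range(2003, 2013))
    rates = [30.0 * 1.03 ** (y - years[0]) for y in years]  # per 100,000, synthetic

    n = len(years)
    t = [y - years[0] for y in years]
    ly = [math.log(r) for r in rates]
    mt, ml = sum(t) / n, sum(ly) / n
    b = (sum((ti - mt) * (li - ml) for ti, li in zip(t, ly))
         / sum((ti - mt) ** 2 for ti in t))
    apc = 100.0 * (math.exp(b) - 1.0)    # annual percent change, in %
    ```

    A full joinpoint model additionally searches for the change-points where the slope b shifts (such as the 2003 break among women noted in the abstract), fitting one such log-linear segment on each side.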

  5. Serum Folate Shows an Inverse Association with Blood Pressure in a Cohort of Chinese Women of Childbearing Age: A Cross-Sectional Study

    PubMed Central

    Shen, Minxue; Tan, Hongzhuan; Zhou, Shujin; Retnakaran, Ravi; Smith, Graeme N.; Davidge, Sandra T.; Trasler, Jacquetta; Walker, Mark C.; Wen, Shi Wu

    2016-01-01

    Background It has been reported that higher folate intake from food and supplementation is associated with decreased blood pressure (BP). The association between serum folate concentration and BP has been examined in only a few studies. We aimed to examine the association between serum folate and BP levels in a cohort of young Chinese women. Methods We used the baseline data from a pre-conception cohort of women of childbearing age in Liuyang, China, for this study. Demographic data were collected by structured interview. Serum folate concentration was measured by immunoassay, and homocysteine, blood glucose, triglyceride and total cholesterol were measured through standardized clinical procedures. Multiple linear regression and principal component regression models were applied in the analysis. Results A total of 1,532 healthy normotensive non-pregnant women were included in the final analysis. The mean concentration of serum folate was 7.5 ± 5.4 nmol/L, and 55% of the women presented with folate deficiency (< 6.8 nmol/L). Multiple linear regression and principal component regression showed that serum folate levels were inversely associated with systolic and diastolic BP, after adjusting for demographic, anthropometric, and biochemical factors. Conclusions Serum folate is inversely associated with BP in non-pregnant women of childbearing age with a high prevalence of folate deficiency. PMID:27182603

  6. Determining CDOM Absorption Spectra in Diverse Aquatic Environments Using a Multiple Pathlength, Liquid Core Waveguide System

    NASA Technical Reports Server (NTRS)

    Miller, Richard L.; Belz, Mathias; DelCastillo, Carlos; Trzaska, Rick

    2001-01-01

    We evaluated the accuracy, sensitivity and precision of a multiple pathlength, liquid core waveguide (MPLCW) system for measuring colored dissolved organic matter (CDOM) absorption in the UV-visible spectral range (370-700 nm). The MPLCW has four optical paths (2.0, 9.8, 49.3, and 204 cm) coupled to a single Teflon AF sample cell. Water samples were obtained from inland, coastal and ocean waters ranging in salinity from 0 to 36 PSU. Reference solutions for the MPLCW were made having a refractive index of the sample. CDOM absorption coefficients, aCDOM, and the slope of the log-linearized absorption spectra, S, were compared with values obtained using a dual-beam spectrophotometer. Absorption of phenol red secondary standards measured by the MPLCW at 558 nm was highly correlated with spectrophotometer values and showed a linear response across all four pathlengths. Values of aCDOM measured using the MPLCW were virtually identical to spectrophotometer values over a wide range of concentrations. The dynamic range of aCDOM for MPLCW measurements was 0.002 - 231.5 m-1. At low CDOM concentrations spectrophotometric aCDOM were slightly greater than MPLCW values and showed larger fluctuations at longer wavelengths due to limitations in instrument precision. In contrast, MPLCW spectra followed an exponential to 600 nm for all samples.

  7. Chemical profiling and quantification of Chinese medicinal formula Huang-Lian-Jie-Du decoction, a systematic quality control strategy using ultra high performance liquid chromatography combined with hybrid quadrupole-orbitrap and triple quadrupole mass spectrometers.

    PubMed

    Yang, Yang; Wang, Hong-Jie; Yang, Jian; Brantner, Adelheid H; Lower-Nedza, Agnieszka D; Si, Nan; Song, Jian-Fang; Bai, Bing; Zhao, Hai-Yu; Bian, Bao-Lin

    2013-12-20

    To clarify and quantify the chemical profile of Huang-Lian-Jie-Du decoction (HLJDD) rapidly, a feasible and accurate strategy was developed by applying high-speed LC combined with a hybrid quadrupole-orbitrap mass spectrometer (Q-Exactive) and a UHPLC-triple quadrupole mass spectrometer (UHPLC-QqQ MS). In total, 69 compounds, including iridoids, alkaloids, flavonoids, triterpenoids, monoterpenes and phenolic acids, were identified by their characteristic high resolution mass data. Among them, 18 major compounds were unambiguously identified by comparison with reference standards. In the subsequent quantitative analysis, 17 representative compounds, selected as quality control markers, were simultaneously determined in 10 batches of HLJDD samples by UHPLC-QqQ MS. These samples were collected from four different countries (regions). Icariin, swertiamarin and corynoline were employed as internal standards for flavonoids, iridoids and alkaloids, respectively. All the analytes were detected within 12 min. Polarity switching mode was used in the optimization of multiple reaction monitoring (MRM) conditions. Satisfactory linearity was achieved, with wide linear ranges and good determination coefficients (r2 > 0.9990). The relative standard deviations (RSD) of inter- and intra-day precisions were less than 5.0%. The method was also validated by repeatability, stability (8 h) and recovery, with respective RSDs less than 4.6%, 5.0% and 6.3%. This research established a highly sensitive and efficient method for integrated quality control, including identification and quantification, of Chinese medicinal formulas. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Factors that influence standard automated perimetry test results in glaucoma: test reliability, technician experience, time of day, and season.

    PubMed

    Junoy Montolio, Francisco G; Wesselink, Christiaan; Gordijn, Marijke; Jansonius, Nomdo M

    2012-10-09

    To determine the influence of several factors on standard automated perimetry test results in glaucoma. Longitudinal Humphrey field analyzer 30-2 Swedish interactive threshold algorithm data from 160 eyes of 160 glaucoma patients were used. The influence of technician experience, time of day, and season on the mean deviation (MD) was determined by performing linear regression analysis of MD against time on a series of visual fields and subsequently performing a multiple linear regression analysis with the MD residuals as dependent variable and the factors mentioned above as independent variables. Analyses were performed with and without adjustment for the test reliability (fixation losses and false-positive and false-negative answers) and with and without stratification according to disease stage (baseline MD). Mean follow-up was 9.4 years, with on average 10.8 tests per patient. Technician experience, time of day, and season were associated with the MD. Approximately 0.2 dB lower MD values were found for inexperienced technicians (P < 0.001), tests performed after lunch (P < 0.001), and tests performed in the summer or autumn (P < 0.001). The effects of time of day and season appeared to depend on disease stage. Independent of these effects, the percentage of false-positive answers strongly influenced the MD with a 1 dB increase in MD per 10% increase in false-positive answers. Technician experience, time of day, season, and the percentage of false-positive answers have a significant influence on the MD of standard automated perimetry.
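
The two-stage analysis described above can be sketched as follows, with a hypothetical single-eye series and a made-up technician factor; `ols` is a bare least-squares helper, not the authors' software, and the factor is deliberately balanced against time so the two stages separate cleanly.

```python
def ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical follow-up for one eye: true decline of 0.3 dB/year,
# with tests taken by an inexperienced technician reading 0.2 dB lower.
times = list(range(8))
inexperienced = [1, 0, 1, 0, 0, 1, 0, 1]   # balanced against time
md = [-3.0 - 0.3 * t - 0.2 * e for t, e in zip(times, inexperienced)]

# Stage 1: regress MD on time and keep the residuals.
a, b = ols(times, md)
resid = [y - (a + b * t) for t, y in zip(times, md)]

# Stage 2: regress the MD residuals on the factor of interest.
_, effect = ols(inexperienced, resid)
print(round(effect, 2))  # → -0.2
```

The real study uses a multiple regression of the residuals on several factors at once (technician, time of day, season); this sketch shows only the single-factor case.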

  9. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  10. Stochastic Swift-Hohenberg Equation with Degenerate Linear Multiplicative Noise

    NASA Astrophysics Data System (ADS)

    Hernández, Marco; Ong, Kiah Wah

    2018-03-01

    We study the dynamic transition of the Swift-Hohenberg equation (SHE) when linear multiplicative noise acting on a finite set of modes of the dominant linear flow is introduced. Existence of a stochastic flow and a local stochastic invariant manifold for this stochastic form of SHE are both addressed in this work. We show that the approximate reduced system corresponding to the invariant manifold undergoes a stochastic pitchfork bifurcation, and obtain numerical evidence suggesting that this picture is a good approximation for the full system as well.

  11. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a linear system under a minimum mean-square error criterion, incorporating a tapped delay line (TDL) in which all the full-precision multiplications are constrained to be powers of two. A linear equalizer for a dispersive channel with additive noise is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
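
A minimal sketch of the power-of-two constraint on TDL coefficients (hypothetical taps; the paper's optimization procedure is not reproduced): each full-precision multiply is replaced by a coefficient of the form ±2^k, which a microprocessor can implement as a sign flip and a bit shift.

```python
import math

def nearest_power_of_two(c):
    """Quantize a coefficient to ±2^k (k integer, possibly negative),
    so each tap multiply becomes a sign flip and a bit shift."""
    if c == 0:
        return 0.0
    k = round(math.log2(abs(c)))
    return math.copysign(2.0 ** k, c)

def fir(taps, x):
    """Tapped-delay-line (FIR) filtering with the given taps."""
    return [sum(t * x[n - i] for i, t in enumerate(taps) if n - i >= 0)
            for n in range(len(x))]

taps = [0.9, -0.45, 0.12]            # hypothetical full-precision taps
q_taps = [nearest_power_of_two(t) for t in taps]
print(q_taps)  # → [1.0, -0.5, 0.125]
```

The paper optimizes the constrained taps jointly rather than rounding each independently as above; rounding is only the simplest illustration of the constraint set.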

  12. Verification of spectrophotometric method for nitrate analysis in water samples

    NASA Astrophysics Data System (ADS)

    Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu

    2017-12-01

    The aim of this research was to verify the spectrophotometric method for nitrate analysis in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters were: linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the calibration linear regression equation was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a 99% confidence level. The accuracy, determined through the recovery value, was 109.1907%. The precision, expressed as the percent relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the method was verified under the laboratory conditions.
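
The verification quantities above can be illustrated with hypothetical data. Note the replicate-based MDL below (Student's t times the standard deviation of n = 7 low-level replicates, t = 3.143 at 99% confidence, the usual EPA/APHA convention, with LOQ taken as 10s) is a common convention and may differ in detail from the authors' exact procedure.

```python
import math
import statistics

def pearson_r(x, y):
    """Correlation coefficient of a calibration line."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical calibration: absorbance vs nitrate standards (mg/L).
conc = [0, 10, 20, 30, 40, 50]
absorb = [0.002, 0.101, 0.199, 0.304, 0.398, 0.502]
print(round(pearson_r(conc, absorb), 4))  # close to 1

# MDL from replicate low-level spikes (mg/L): t(n-1, 0.99) * s,
# with t = 3.143 for n = 7 replicates; LOQ taken as 10 * s.
reps = [0.12, 0.14, 0.11, 0.13, 0.15, 0.12, 0.13]
s = statistics.stdev(reps)
print(round(3.143 * s, 4), round(10 * s, 4))
```
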

  13. Modern methods and systems for precise control of the quality of agricultural and food production

    NASA Astrophysics Data System (ADS)

    Bednarjevsky, Sergey S.; Veryasov, Yuri V.; Akinina, Evgeniya V.; Smirnov, Gennady I.

    1999-01-01

    The results on the modeling of non-linear dynamics of strong continuous and impulse radiation in the laser nephelometry of polydisperse biological systems, important from the viewpoint of applications in biotechnologies, are presented. The processes of nonlinear self-action of the laser radiation under multiple scattering in disperse biological agro-media are considered. Simplified algorithms for calculating the parameters of the biological media under investigation are given, and estimates of the errors of the laser-nephelometric measurements are provided. Universal, highly informative optical analyzers and standard etalon specimens of agro-objects form the technological foundation of the considered methods and systems.

  14. Determination of virginiamycin M1 residue in tissues of swine and chicken by ultra-performance liquid chromatography tandem mass spectrometry.

    PubMed

    Wang, Xiaoyang; Wang, Mi; Zhang, Keyu; Hou, Ting; Zhang, Lifang; Fei, Chenzong; Xue, Feiqun; Hang, Taijun

    2018-06-01

    A reliable UPLC-MS/MS method with high sensitivity was developed and validated for the determination of virginiamycin M1 in muscle, fat, liver, and kidney samples of chicken and swine. Analytes were extracted using acetonitrile and the extracts were defatted with n-hexane. Chromatographic separation was performed on a BEH C18 liquid chromatography column. The analytes were then detected using triple quadrupole mass spectrometry in positive electrospray ionization and multiple reaction monitoring mode. Calibration plots were constructed using standard working solutions and showed good linearity. Limits of quantification ranged from 2 to 60 ng/mL. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Generation of Single Photons and Entangled Photon Pairs from a Quantum Dot

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Pelton, M.; Santori, C.; Solomon, G. S.

    2002-10-01

    Current quantum cryptography systems are limited by the Poissonian photon statistics of a standard light source: a security loophole is opened up by the possibility of multiple-photon pulses. By replacing the source with a single-photon emitter, transmission rates of secure information can be improved. A single-photon source is also essential to implement a linear optics quantum computer. We have investigated the use of single self-assembled InAs/GaAs quantum dots as such single-photon sources, and have seen a hundred-fold reduction in the multi-photon probability as compared to Poissonian pulses. An extension of our experiment should also allow for the generation of triggered, polarization-entangled photon pairs.

  16. Improved Linear-Ion-Trap Frequency Standard

    NASA Technical Reports Server (NTRS)

    Prestage, John D.

    1995-01-01

    Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.

  17. A Quantitative and Combinatorial Approach to Non-Linear Meanings of Multiplication

    ERIC Educational Resources Information Center

    Tillema, Erik; Gatza, Andrew

    2016-01-01

    We provide a conceptual analysis of how combinatorics problems have the potential to support students to establish non-linear meanings of multiplication (NLMM). The problems we analyze we have used in a series of studies with 6th, 8th, and 10th grade students. We situate the analysis in prior work on students' quantitative and multiplicative…

  18. On the linear relation between the mean and the standard deviation of a response time distribution.

    PubMed

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-07-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
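
The coefficient of variation the authors recommend is simply SD/mean; under the linear law it stays (approximately) constant when a manipulation rescales the whole RT distribution, which is what makes it useful for comparing variability across baseline speeds. A sketch with hypothetical RTs:

```python
import statistics

def coefficient_of_variation(rts):
    """SD divided by mean; under the linear law SD ≈ a + b*mean,
    the CV is roughly constant across conditions of one process."""
    return statistics.stdev(rts) / statistics.mean(rts)

# Hypothetical RTs (ms): a slower condition that purely rescales the
# same distribution (quantitative speedup/slowdown, not reorganization).
fast = [400, 450, 500, 550, 600]
slow = [rt * 1.5 for rt in fast]
print(round(coefficient_of_variation(fast), 3),
      round(coefficient_of_variation(slow), 3))  # equal CVs
```

A change in CV between conditions, by contrast, would point toward the qualitative reorganization the abstract mentions.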

  19. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
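
A minimal sketch of the parallel-reservoir idea (hypothetical stores and time constants, not the PMRW calibration): each reservoir alone is linear, draining as Q_i(t) = Q_i(0) exp(-t/tau_i), but their sum makes the watershed-scale -dQ/dt versus Q relation drift, reproducing the deviation from linearity described above.

```python
import math

def recession(q0s, taus, t):
    """Total discharge from parallel linear reservoirs, each draining
    as Q_i(t) = Q_i(0) * exp(-t / tau_i)."""
    return sum(q0 * math.exp(-t / tau) for q0, tau in zip(q0s, taus))

# Two hypothetical landscape units: a fast hillslope store and a slow
# riparian/bedrock store.
q0s, taus = [5.0, 1.0], [2.0, 20.0]
for t in (0, 2, 5, 10, 20):
    q = recession(q0s, taus, t)
    dqdt = -sum(q0 * math.exp(-t / tau) / tau
                for q0, tau in zip(q0s, taus))
    # For a single linear reservoir, -dQ/dt / Q would be constant (1/tau);
    # here it drifts from the fast store's rate toward the slow store's.
    print(round(q, 3), round(-dqdt / q, 3))
```
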

  20. Specification for Teaching Machines and Programmes (Interchangeability of Programmes). Part 1, Linear Machines and Programmes.

    ERIC Educational Resources Information Center

    British Standards Institution, London (England).

    To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…

  1. [Determination of biurea in flour and its products by liquid chromatography-tandem mass spectrometry].

    PubMed

    Wang, Ya; Wang, Junsu; Xiang, Lu; Xi, Cunxian; Chen, Dongdong; Peng, Tao; Wang, Guomin; Mu, Zhaode

    2014-05-01

    A novel method was established for the determination and identification of biurea in flour and its products using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The biurea was extracted with water and oxidized to azodicarbonamide by potassium permanganate. The azodicarbonamide was then derivatized using sodium p-toluene sulfinate solution. The separation was performed on a Shimpak XR-ODS II column (150 mm x 2.0 mm, 2.2 microm) using a mobile phase composed of acetonitrile and 2 mmol/L ammonium acetate aqueous solution (containing 0.2% (v/v) formic acid) with a gradient elution program. Tandem mass spectrometric detection was performed in multiple reaction monitoring (MRM) scan mode with a positive electrospray ionization (ESI(+)) source. The method used stable isotope internal standard quantitation. The calibration curve showed good linearity over the range of 1-20 000 microg/kg (R2 = 0.9999). The limit of quantification was 5 microg/kg for biurea spiked in flour and its products. At spiking levels of 5.0, 10.0 and 50.0 microg/kg in different matrices, the average recovery of biurea was 78.3%-108.0%, with relative standard deviations (RSDs) of 5.73% or less. The method is novel, reliable and sensitive, with a wide linear range, and can be used to determine biurea in flour and its products.
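
The spiked-recovery statistics reported above (average recovery and RSD at each spiking level) can be computed as follows; the replicate values are hypothetical, not the paper's data.

```python
import statistics

def recovery_and_rsd(measured, spiked):
    """Average recovery (%) and relative standard deviation (%) for
    replicate spiked samples at one spiking level."""
    recs = [100.0 * m / spiked for m in measured]
    mean = statistics.mean(recs)
    rsd = 100.0 * statistics.stdev(recs) / mean
    return round(mean, 1), round(rsd, 2)

# Hypothetical replicates (microg/kg) spiked at 10.0 microg/kg biurea.
measured = [9.1, 8.7, 9.4, 8.9, 9.2]
print(recovery_and_rsd(measured, 10.0))  # → (90.6, 2.98)
```
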

  2. Validated method for determination of bromopride in human plasma by liquid chromatography--electrospray tandem mass spectrometry: application to the bioequivalence study.

    PubMed

    Nazare, P; Massaroti, P; Duarte, L F; Campos, D R; Marchioretto, M A M; Bernasconi, G; Calafatti, S; Barros, F A P; Meurer, E C; Pedrazzoli, J; Moraes, L A B

    2005-09-01

    A simple, sensitive and specific liquid chromatography-tandem mass spectrometry method for the quantification of bromopride I in human plasma is presented. Sample preparation consisted of the addition of procainamide II as the internal standard, liquid-liquid extraction in alkaline conditions using hexane-ethyl acetate (1 : 1, v/v) as the extracting solvent, followed by centrifugation, evaporation of the solvent and sample reconstitution in acetonitrile. Both I and II (internal standard, IS) were analyzed using a C18 column and a mobile phase of acetonitrile-water (0.1% formic acid). The eluted compounds were monitored using electrospray tandem mass spectrometry. The analyses were carried out by multiple reaction monitoring (MRM) using the parent-to-daughter combinations of m/z 344.20 > 271.00 and m/z 236.30 > 163.10. The areas of peaks from analyte and IS were used for quantification of I. The achieved limit of quantification was 1.0 ng/ml and the assay exhibited a linear dynamic range of 1-100.0 ng/ml with a correlation coefficient (r) of 0.995 or better. Validation results on linearity, specificity, accuracy, precision and stability, as well as application to the analysis of samples taken up to 24 h after oral administration of 10 mg of I in healthy volunteers, demonstrated the applicability to bioequivalence studies.

  3. Detector Outline Document for the Fourth Concept Detector ("4th") at the International Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbareschi, Daniele; et al.

    We describe a general purpose detector ("Fourth Concept") at the International Linear Collider (ILC) that can measure with high precision all the fundamental fermions and bosons of the standard model, and thereby access all known physics processes. The 4th concept consists of four basic subsystems: a pixel vertex detector for high precision vertex definitions, impact parameter tagging and near-beam occupancy reduction; a Time Projection Chamber for robust pattern recognition augmented with three high-precision pad rows for precision momentum measurement; a high precision multiple-readout fiber calorimeter, complemented with an EM dual-readout crystal calorimeter, for the energy measurement of hadrons, jets, electrons, photons, missing momentum, and the tagging of muons; and an iron-free dual-solenoid muon system for the inverse direction bending of muons in a gas volume to achieve high acceptance and good muon momentum resolution. The pixel vertex chamber, TPC and calorimeter are inside the solenoidal magnetic field. All four subsystems separately achieve the important scientific goal of being 2-to-10 times better than the already excellent LEP detectors, ALEPH, DELPHI, L3 and OPAL. All four basic subsystems contribute to the identification of standard model partons, some in unique ways, such that consequent physics studies are cogent. As an integrated detector concept, we achieve comprehensive physics capabilities that put all conceivable physics at the ILC within reach.

  4. Method development towards qualitative and semi-quantitative analysis of multiple pesticides from food surfaces and extracts by desorption electrospray ionization mass spectrometry as a preselective tool for food control.

    PubMed

    Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine

    2017-03-01

    Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. Best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.
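
The normalization step described above, dividing each analyte signal by the co-applied internal standard before fitting the calibration line, can be sketched as follows; the intensities, concentrations, and function name are all hypothetical.

```python
def response_ratios(analyte_counts, istd_counts):
    """Normalize analyte signal to the co-applied internal standard;
    the calibration line is fit to these ratios, not the raw counts."""
    return [a / i for a, i in zip(analyte_counts, istd_counts)]

# Hypothetical DESI-MS intensities at increasing pesticide levels with
# a constant ISTD amount; shot-to-shot signal drift largely cancels
# in the ratio.
concs = [0.1, 0.2, 0.5, 1.0]          # microg/mL, assumed
analyte = [2100, 4000, 10500, 20300]  # arbitrary counts
istd = [5000, 4900, 5100, 5050]
print([round(r, 2) for r in response_ratios(analyte, istd)])
```
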

  5. Determining CDOM Absorption Spectra in Diverse Coastal Environments Using a Multiple Pathlength, Liquid Core Waveguide System. Measuring the Absorption of CDOM in the Field Using a Multiple Pathlength Liquid Waveguide System

    NASA Technical Reports Server (NTRS)

    Miller, Richard L.; Belz, Mathias; DelCastillo, Carlos; Trzaska, Rick

    2000-01-01

    We evaluated the accuracy, sensitivity and precision of a multiple pathlength, liquid core waveguide (MPLCW) system for measuring colored dissolved organic matter (CDOM) absorption in the UV-visible spectral range (370-700 nm). The MPLCW has four optical paths (2.0, 9.8, 49.3, and 204 cm) coupled to a single Teflon AF sample cell. Water samples were obtained from inland, coastal and ocean waters ranging in salinity from 0 to 36 PSU. Reference solutions for the MPLCW were made having a refractive index of the sample. CDOM absorption coefficients, aCDOM, and the slope of the log-linearized absorption spectra, S, were compared with values obtained using a dual-beam spectrophotometer. Absorption of phenol red secondary standards measured by the MPLCW at 558 nm was highly correlated with spectrophotometer values (r > 0.99) and showed a linear response across all four pathlengths. Values of aCDOM measured using the MPLCW were virtually identical to spectrophotometer values over a wide range of concentrations. The dynamic range of aCDOM for MPLCW measurements was 0.002 - 231.5 m-1. At low CDOM concentrations (a370 < 0.1 m-1) spectrophotometric aCDOM were slightly greater than MPLCW values and showed larger fluctuations at longer wavelengths due to limitations in instrument precision. In contrast, MPLCW spectra followed an exponential to 600 nm for all samples. The maximum deviation in replicate MPLCW spectra was less than 0.001 absorbance units. The portability, sampling, and optical characteristics of a MPLCW system provide significant enhancements for routine CDOM absorption measurements in a broad range of natural waters.

  6. Stature estimation from the lengths of the growing foot-a study on North Indian adolescents.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam; DiMaggio, John A

    2012-12-01

    Stature estimation is considered as one of the basic parameters of the investigation process in unknown and commingled human remains in medico-legal case work. Race, age and sex are the other parameters which help in this process. Stature estimation is of the utmost importance as it completes the biological profile of a person along with the other three parameters of identification. The present research is intended to formulate standards for stature estimation from foot dimensions in adolescent males from North India and study the pattern of foot growth during the growing years. 154 male adolescents from the Northern part of India were included in the study. Besides stature, five anthropometric measurements that included the length of the foot from each toe (T1, T2, T3, T4, and T5 respectively) to pternion were measured on each foot. The data was analyzed statistically using Student's t-test, Pearson's correlation, linear and multiple regression analysis for estimation of stature and growth of foot during ages 13-18 years. Correlation coefficients between stature and all the foot measurements were found to be highly significant and positively correlated. Linear regression models and multiple regression models (with age as a co-variable) were derived for estimation of stature from the different measurements of the foot. Multiple regression models (with age as a co-variable) estimate stature with greater accuracy than the regression models for 13-18 years age group. The study shows the growth pattern of feet in North Indian adolescents and indicates that anthropometric measurements of the foot and its segments are valuable in estimation of stature in growing individuals of that population. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. [Stature estimation for Sichuan Han nationality female based on X-ray technology with measurement of lumbar vertebrae].

    PubMed

    Qing, Si-han; Chang, Yun-feng; Dong, Xiao-ai; Li, Yuan; Chen, Xiao-gang; Shu, Yong-kang; Deng, Zhen-hua

    2013-10-01

    To establish mathematical models of stature estimation for Sichuan Han females from measurements of the lumbar vertebrae by X-ray, to provide essential data for forensic anthropology research. The samples, 206 Sichuan Han females, were divided into three groups (A, B and C) according to age. Group A (206 samples) consisted of all ages, group B (116 samples) were 20-45 years old, and the 90 samples over 45 years old formed group C. In all the samples the lumbar vertebrae were examined through CR technology, recording, for the five centrums (L1-L5), the anterior border, posterior border and central heights (x1-x15), the total central height of the lumbar spine (x16), and the real height of every sample. Linear regression analysis was performed using these parameters to establish the mathematical models of stature estimation. Sixty-two trained subjects were tested to verify the accuracy of the mathematical models. By hypothesis testing, the established linear regression models were statistically significant (P < 0.05). The standard errors of the equations were 2.982-5.004 cm, the correlation coefficients were 0.370-0.779, and the multiple correlation coefficients were 0.533-0.834. Back-substitution tests using the equations with the highest correlation and multiple correlation coefficients in each group showed that the multiple regression equation for group A, y = 100.33 + 1.489 x3 - 0.548 x6 + 0.772 x9 + 0.058 x12 + 0.645 x15, achieved the highest accuracy: 80.6% (+/- 1 SE) and 100% (+/- 2 SE). The established mathematical models in this study can be applied to stature estimation for Sichuan Han females.
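
As a worked illustration, the group-A equation quoted above can be evaluated directly. The input centrum heights below are hypothetical (assumed to be in mm, which yields a plausible stature in cm), and `estimated_stature` is a name of our own, not from the paper.

```python
def estimated_stature(x3, x6, x9, x12, x15):
    """The paper's group-A multiple regression model (stature in cm):
    y = 100.33 + 1.489*x3 - 0.548*x6 + 0.772*x9 + 0.058*x12 + 0.645*x15."""
    return (100.33 + 1.489 * x3 - 0.548 * x6
            + 0.772 * x9 + 0.058 * x12 + 0.645 * x15)

# Hypothetical lumbar centrum heights (assumed mm) for one subject; the
# paper scores an estimate as accurate when it falls within +/- 1 or
# +/- 2 standard errors (SE up to ~5 cm) of the true stature.
y = estimated_stature(25, 26, 27, 28, 29)
print(round(y, 2))  # → 164.48
```
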

  8. Shared Dosimetry Error in Epidemiological Dose-Response Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. We derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. Use of these methods for several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
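
The paper's central recommendation, plugging each subject's mean over the Monte Carlo dose realizations into the linear ERR model, can be sketched with hypothetical numbers (the realizations, β, and background rate below are made up; the information-matrix correction is not reproduced).

```python
import statistics

def excess_relative_risk_rate(background, beta, dose):
    """Linear ERR model: rate = background * (1 + beta * dose)."""
    return background * (1.0 + beta * dose)

# Hypothetical dose realizations (Gy) from a dosimetry system that
# emits many possible dose histories per worker.
realizations = {
    "worker_1": [0.10, 0.14, 0.12, 0.08],
    "worker_2": [0.50, 0.65, 0.41, 0.60],
}

# Per the paper, the per-subject mean dose gives an unbiased score
# (though naive standard errors for beta != 0 still need correction).
mean_dose = {k: statistics.mean(v) for k, v in realizations.items()}
for worker, d in mean_dose.items():
    print(worker, round(excess_relative_risk_rate(0.001, 2.0, d), 6))
```
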

  9. Multi-site assessment of the precision and reproducibility of multiple reaction monitoring–based measurements of proteins in plasma

    PubMed Central

    Addona, Terri A; Abbatiello, Susan E; Schilling, Birgit; Skates, Steven J; Mani, D R; Bunk, David M; Spiegelman, Clifford H; Zimmerman, Lisa J; Ham, Amy-Joan L; Keshishian, Hasmik; Hall, Steven C; Allen, Simon; Blackman, Ronald K; Borchers, Christoph H; Buck, Charles; Cardasis, Helene L; Cusack, Michael P; Dodder, Nathan G; Gibson, Bradford W; Held, Jason M; Hiltke, Tara; Jackson, Angela; Johansen, Eric B; Kinsinger, Christopher R; Li, Jing; Mesri, Mehdi; Neubert, Thomas A; Niles, Richard K; Pulsipher, Trenton C; Ransohoff, David; Rodriguez, Henry; Rudnick, Paul A; Smith, Derek; Tabb, David L; Tegeler, Tony J; Variyath, Asokan M; Vega-Montoto, Lorenzo J; Wahlander, Åsa; Waldemarson, Sofia; Wang, Mu; Whiteaker, Jeffrey R; Zhao, Lei; Anderson, N Leigh; Fisher, Susan J; Liebler, Daniel C; Paulovich, Amanda G; Regnier, Fred E; Tempst, Paul; Carr, Steven A

    2010-01-01

    Verification of candidate biomarkers relies upon specific, quantitative assays optimized for selective detection of target proteins, and is increasingly viewed as a critical step in the discovery pipeline that bridges unbiased biomarker discovery to preclinical validation. Although individual laboratories have demonstrated that multiple reaction monitoring (MRM) coupled with isotope dilution mass spectrometry can quantify candidate protein biomarkers in plasma, reproducibility and transferability of these assays between laboratories have not been demonstrated. We describe a multilaboratory study to assess reproducibility, recovery, linear dynamic range and limits of detection and quantification of multiplexed, MRM-based assays, conducted by NCI-CPTAC. Using common materials and standardized protocols, we demonstrate that these assays can be highly reproducible within and across laboratories and instrument platforms, and are sensitive to low µg/ml protein concentrations in unfractionated plasma. We provide data and benchmarks against which individual laboratories can compare their performance and evaluate new technologies for biomarker verification in plasma. PMID:19561596
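The precision summaries central to such a multi-site study can be sketched as intra- and inter-laboratory coefficients of variation (CV). The replicate values below are invented for illustration, not data from the study.

```python
# Minimal sketch (made-up numbers): intra-lab and inter-lab coefficients of
# variation for replicate MRM measurements of one peptide concentration.
from statistics import mean, stdev

# hypothetical replicate measurements (fmol/uL) from three labs
labs = {
    "lab_A": [10.1, 9.8, 10.3, 10.0],
    "lab_B": [11.0, 10.7, 11.2, 10.9],
    "lab_C": [9.5, 9.9, 9.7, 9.6],
}

def cv(xs):
    """Coefficient of variation, in percent."""
    return 100.0 * stdev(xs) / mean(xs)

intra = {lab: cv(xs) for lab, xs in labs.items()}   # per-lab repeatability
inter = cv([mean(xs) for xs in labs.values()])      # reproducibility across labs

print(intra)
print(inter)
```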

  10. Writing and compiling code into biochemistry.

    PubMed

    Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab

    2010-01-01

    This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis first is performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.

  11. A photonic circuit for complementary frequency shifting, in-phase quadrature/single sideband modulation and frequency multiplication: analysis and integration feasibility

    NASA Astrophysics Data System (ADS)

    Hasan, Mehedi; Hu, Jianqi; Nikkhah, Hamdam; Hall, Trevor

    2017-08-01

    A novel photonic integrated circuit architecture for implementing orthogonal frequency division multiplexing by means of photonic generation of phase-correlated sub-carriers is proposed. The circuit can also be used for implementing complex modulation, frequency up-conversion of the electrical signal to the optical domain and frequency multiplication. The principles of operation of the circuit are expounded using transmission matrices and the predictions of the analysis are verified by computer simulation using an industry-standard software tool. Non-ideal scenarios that may affect the correct function of the circuit are taken into consideration and quantified. The discussion of integration feasibility is illustrated by a photonic integrated circuit that has been fabricated using 'library' components and which features most of the elements of the proposed circuit architecture. The circuit is found to be practical and may be fabricated in any material platform that offers a linear electro-optic modulator such as organic or ferroelectric thin films hybridized with silicon photonics.

  12. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
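The quantity MKL optimises can be sketched directly: a weighted sum of base kernel matrices K = Σₘ βₘ Kₘ with βₘ ≥ 0. The features and weights below are invented; a real MKL solver would learn β under an ℓp-norm constraint (p = 1 giving the sparse variant, p > 1 the non-sparse variant discussed above).

```python
# Sketch of a weighted kernel mixture. Weights are illustrative, not learned.
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=(6, 3))        # feature set 1 (e.g. a colour descriptor)
X2 = rng.normal(size=(6, 5))        # feature set 2 (e.g. a texture descriptor)

K1 = X1 @ X1.T                      # linear kernel on feature set 1
K2 = X2 @ X2.T                      # linear kernel on feature set 2

beta = np.array([0.7, 0.3])         # hypothetical non-sparse mixture weights
K = beta[0] * K1 + beta[1] * K2

# any non-negative combination of valid kernels is again a valid kernel:
# symmetric and positive semi-definite
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-9)       # True
```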

  13. A consensus embedding approach for segmentation of high resolution in vivo prostate magnetic resonance imagery

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contribute to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. 
Quantitative evaluation on 18 1.5 T prostate MR data against corresponding histology obtained from the multi-site ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is successfully able to detect suspicious regions in the prostate.

  14. Simulated bi-SQUID Arrays Performing Direction Finding

    DTIC Science & Technology

    2015-09-01

    First, we applied the multiple signal classification (MUSIC) algorithm on linearly polarized signals. We included multiple signals in the output...both of the same frequency and different frequencies. Next, we explored a modified MUSIC algorithm called dimensionality reduction MUSIC (DR-MUSIC ... MUSIC algorithm is able to determine the AoA from the simulated SQUID data for linearly polarized signals. The MUSIC algorithm could accurately find

  15. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis

    PubMed Central

    Khalil, Mohamed H.; Shebl, Mostafa K.; Kosba, Mohamed A.; El-Sabrout, Karim; Zaki, Nesma

    2016-01-01

    Aim: This research was conducted to determine the most affecting parameters on hatchability of indigenous and improved local chickens’ eggs. Materials and Methods: Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) on four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the most influencing one on hatchability. Results: The results showed significant differences in commercial and scientific hatchability among strains. Alexandria strain has the highest significant commercial hatchability (80.70%). Regarding the studied strains, highly significant differences in hatching chick weight among strains were observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. Conclusion: A prediction of hatchability using multiple regression analysis could be a good tool to improve hatchability percentage in chickens. PMID:27651666
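The idea of ranking predictors by their percent contribution can be illustrated with a small regression sketch. The data below are synthetic, not the study's poultry records, and the drop-in-R² measure used here is one common contribution measure, not necessarily the authors' exact decomposition.

```python
# Sketch: fit a multiple linear regression by ordinary least squares and rank
# predictors by the drop in R^2 when each one is removed (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
n = 200
fertility = rng.normal(90, 5, n)
egg_weight = rng.normal(55, 3, n)
shape_index = rng.normal(74, 2, n)
# hatchability driven mostly by fertility, by construction
hatch = 0.8 * fertility + 0.1 * egg_weight + rng.normal(0, 1, n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])           # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # OLS fit
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

names = ["fertility", "egg_weight", "shape_index"]
full = np.column_stack([fertility, egg_weight, shape_index])
r2_full = r_squared(full, hatch)

drops = {name: r2_full - r_squared(np.delete(full, i, axis=1), hatch)
         for i, name in enumerate(names)}
print(drops)    # fertility shows by far the largest contribution
```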

  16. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the difference between the sum of the squares of the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
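The objective SOLVER minimises, the sum of squared residuals between the data and a sum of Gaussian functions, can be sketched outside Excel. The data are synthetic, the component widths are fixed at assumed values, and a crude grid search over centres with a linear solve for amplitudes stands in for SOLVER's generalized reduced gradient iteration.

```python
# Sketch: fit a sum of two Gaussians by minimising the SSE, using a grid
# search over centres (widths assumed known) and least squares for amplitudes.
import numpy as np

t = np.linspace(0, 10, 200)

def gauss(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# synthetic two-component waveform plus noise (illustrative, not optic-nerve data)
rng = np.random.default_rng(7)
y = 2.0 * gauss(t, 3.0, 0.6) + 1.2 * gauss(t, 5.5, 0.9) + rng.normal(0, 0.02, 200)

best = None
for mu1 in np.arange(2.0, 4.01, 0.1):
    for mu2 in np.arange(4.5, 6.51, 0.1):
        G = np.column_stack([gauss(t, mu1, 0.6), gauss(t, mu2, 0.9)])
        amps, *_ = np.linalg.lstsq(G, y, rcond=None)   # amplitudes are linear
        sse = ((y - G @ amps) ** 2).sum()              # SOLVER-style objective
        if best is None or sse < best[0]:
            best = (sse, mu1, mu2, amps)

sse, mu1, mu2, amps = best
print(round(mu1, 1), round(mu2, 1))    # near the true centres 3.0 and 5.5
```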

  17. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
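The RLS update applied to each selected linear sub-model can be sketched in a few lines. The system, noise level, and forgetting factor below are assumed values for illustration only.

```python
# Minimal recursive least squares (RLS) sketch on a synthetic linear system.
import numpy as np

rng = np.random.default_rng(3)
d = 3
w_true = np.array([0.5, -1.0, 2.0])   # unknown weights to identify

lam = 0.99                 # assumed forgetting factor (weights recent data)
w = np.zeros(d)            # weight estimate
P = np.eye(d) * 1e3        # inverse correlation matrix, large initial value

for _ in range(500):
    x = rng.normal(size=d)
    y = w_true @ x + rng.normal(scale=0.05)   # noisy observation
    k = P @ x / (lam + x @ P @ x)             # gain vector
    w = w + k * (y - w @ x)                   # correct with a-priori error
    P = (P - np.outer(k, x @ P)) / lam        # update inverse correlation

print(np.round(w, 2))   # close to w_true
```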

  18. Ensemble Clustering using Semidefinite Programming with Applications

    PubMed Central

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2011-01-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0–1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain. PMID:21927539

  19. Ensemble Clustering using Semidefinite Programming with Applications.

    PubMed

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2010-05-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0-1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain.

  20. Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.

    PubMed

    Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan

    2017-03-31

    Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has been shown to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature and the other the expansion loading from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor would possess a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variational norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm using the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively for simulated and real biological data.
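The basic factorisation the paper builds on can be sketched with unconstrained NMF via Lee-Seung multiplicative updates; the paper's model adds the sparsity and total-variation penalties on top of this. The block-structured data below are synthetic.

```python
# Sketch: plain NMF (multiplicative updates) on data sampled from two
# independent non-negative subspaces, i.e. with a hidden diagonal block
# structure. The paper's constrained model refines this baseline.
import numpy as np

rng = np.random.default_rng(0)
# samples 0-9 use features 0-4, samples 10-19 use features 5-9
W_true = np.zeros((20, 2)); W_true[:10, 0] = 1; W_true[10:, 1] = 1
H_true = np.zeros((2, 10)); H_true[0, :5] = 1; H_true[1, 5:] = 1
V = W_true @ H_true + 0.01 * rng.random((20, 10))   # observed data + noise

k = 2
W = rng.random((20, k)) + 0.1   # strictly positive initialisation
H = rng.random((k, 10)) + 0.1
eps = 1e-9
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err < 0.05)   # True: the rank-2 factorisation recovers the blocks
```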

  1. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476

  2. Robust control of a parallel hybrid drivetrain with a CVT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, T.; Schroeder, D.

    1996-09-01

    In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input-multiple-output system (MIMO-System). Input-Output-Linearization offers the possibility of linearizing and of decoupling the system. Since for example the vehicle mass varies with the load and the efficiency of the gearbox depends strongly on the actual working point, an exact linearization of the plant will mostly fail. Therefore a robust control algorithm based on sliding mode is used to control the drivetrain.

  3. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. 
Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
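The coverage question the simulation study asks can be sketched directly: with bias assumed zero and precision estimated from a small performance study, how often does a nominal 95% interval cover a new patient's true value? All numbers below are hypothetical.

```python
# Monte Carlo sketch of confidence-interval coverage for a new patient's QIB
# measurement, with precision estimated from a test-retest study of 35 subjects
# and fixed bias assumed to be zero (all values illustrative).
import random
import statistics

random.seed(0)
true_value = 100.0
sigma = 5.0          # true measurement SD of the biomarker
n_precision = 35     # test-retest subjects in the performance study
n_sims = 2000
covered = 0

for _ in range(n_sims):
    # precision study: estimate sigma from n_precision replicate errors
    reps = [random.gauss(0, sigma) for _ in range(n_precision)]
    sigma_hat = statistics.stdev(reps)
    # new patient's measurement, unbiased here (fixed bias = 0)
    y = random.gauss(true_value, sigma)
    lo, hi = y - 1.96 * sigma_hat, y + 1.96 * sigma_hat
    covered += lo <= true_value <= hi

print(covered / n_sims)    # near the nominal 0.95
```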

  4. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
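The dual-gain merge can be sketched in a simplified form: use the high-gain readout where it is unsaturated, otherwise fall back to the low-gain readout rescaled by the gain ratio. The gain ratio and ADC levels below are assumed values, not the sensor's actual parameters.

```python
# Simplified dual-gain HDR merge for a single pixel (illustrative constants).
GAIN_RATIO = 16          # assumed high-gain / low-gain ratio
SAT_LEVEL = 4095         # assumed 12-bit ADC saturation level

def merge_hdr(high_gain, low_gain):
    """Merge one pixel's dual-gain readouts into a linear HDR value."""
    if high_gain < SAT_LEVEL:
        return high_gain                 # low light: keep the low-noise path
    return low_gain * GAIN_RATIO         # bright: extend range with low gain

print(merge_hdr(1000, 62))    # dim pixel    -> 1000
print(merge_hdr(4095, 800))   # bright pixel -> 12800
```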

  5. A spatial domain decomposition approach to distributed H ∞ observer design of a linear unstable parabolic distributed parameter system with spatially discrete sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Wei; Liu, Ya-Qiang; Hu, Yan-Yan; Sun, Chang-Yin

    2017-12-01

    This paper discusses the design problem of distributed H∞ Luenberger-type partial differential equation (PDE) observer for state estimation of a linear unstable parabolic distributed parameter system (DPS) with external disturbance and measurement disturbance. Both pointwise measurement in space and local piecewise uniform measurement in space are considered; that is, sensors are only active at some specified points or applied at part thereof of the spatial domain. The spatial domain is decomposed into multiple subdomains according to the location of the sensors such that only one sensor is located at each subdomain. By using Lyapunov technique, Wirtinger's inequality at each subdomain, and integration by parts, a Lyapunov-based design of Luenberger-type PDE observer is developed such that the resulting estimation error system is exponentially stable with an H∞ performance constraint, and presented in terms of standard linear matrix inequalities (LMIs). For the case of local piecewise uniform measurement in space, the first mean value theorem for integrals is utilised in the observer design development. Moreover, the problem of optimal H∞ observer design is also addressed in the sense of minimising the attenuation level. Numerical simulation results are presented to show the satisfactory performance of the proposed design method.

  6. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210

  7. Unscented Kalman Filter for Brain-Machine Interfaces

    PubMed Central

    Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.

    2009-01-01

    Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. PMID:19603074

  8. Monitoring design for assessing compliance with numeric nutrient standards for rivers and streams using geospatial variables.

    PubMed

    Williams, Rachel E; Arabi, Mazdak; Loftis, Jim; Elmund, G Keith

    2014-09-01

    Implementation of numeric nutrient standards in Colorado has prompted a need for greater understanding of human impacts on ambient nutrient levels. This study explored the variability of annual nutrient concentrations due to upstream anthropogenic influences and developed a mathematical expression for the number of samples required to estimate median concentrations for standard compliance. A procedure grounded in statistical hypothesis testing was developed to estimate the number of annual samples required at monitoring locations while taking into account the difference between the median concentrations and the water quality standard for a lognormal population. For the Cache La Poudre River in northern Colorado, the relationship between the median and standard deviation of total N (TN) and total P (TP) concentrations and the upstream point and nonpoint concentrations and general hydrologic descriptors was explored using multiple linear regression models. Very strong relationships were evident between the upstream anthropogenic influences and annual medians for TN and TP (R² > 0.85, p < 0.001) and corresponding standard deviations (R² > 0.7, p < 0.001). Sample sizes required to demonstrate (non)compliance with the standard depend on the measured water quality conditions. When the median concentration differs from the standard by >20%, few samples are needed to reach a 95% confidence level. When the median is within 20% of the corresponding water quality standard, however, the required sample size increases rapidly, and hundreds of samples may be required. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
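The sample-size behaviour described above can be sketched with a textbook normal-approximation formula: for lognormal concentrations, testing whether the median differs from the standard reduces to a mean test on the log scale, giving roughly n ≈ ((z_α + z_β)·σ_log / ln(median/standard))². This generic formula is illustrative, not necessarily the paper's exact expression, and σ_log below is an assumed value.

```python
# Sketch: approximate samples needed to show a lognormal median differs from
# a numeric standard (generic normal-theory formula; illustrative only).
import math

def required_n(median, standard, sigma_log, z_alpha=1.645, z_beta=0.84):
    """One-sided test at alpha = 0.05 with 80% power (hypothetical defaults)."""
    effect = abs(math.log(median / standard))   # effect size on the log scale
    return math.ceil(((z_alpha + z_beta) * sigma_log / effect) ** 2)

sigma_log = 0.6    # assumed log-scale SD of nutrient concentrations
for ratio in (0.5, 0.8, 0.9, 0.95):
    print(ratio, required_n(ratio, 1.0, sigma_log))
# sample size grows rapidly as the median approaches the standard
```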

  9. Automating linear accelerator quality assurance.

    PubMed

    Eckhause, Tobias; Al-Hallaq, Hania; Ritter, Timothy; DeMarco, John; Farrey, Karl; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Perez, Mario; Park, SungYong; Booth, Jeremy T; Thorwarth, Ryan; Moran, Jean M

    2015-10-01

    The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish as a reference for other centers. The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. 
The standard deviation in MLC position, as determined by EPID measurements, across the consortium was 0.33 mm for IMRT fields. With respect to the log files, the deviations between expected and actual positions for parameters were small (<0.12 mm) for all Linacs. Considering both log files and EPID measurements, all parameters were well within published tolerance values. Variations in collimator angle, MLC position, and gantry sag were also evaluated for all Linacs. The performance of the TrueBeam Linac model was shown to be consistent based on automated analysis of trajectory log files and EPID images acquired during delivery of a standardized test suite. The results can be compared directly to tolerance thresholds. In addition, sharing of results from standard tests across institutions can facilitate the identification of QA process and Linac changes. These reference values are presented along with the standard deviation for common tests so that the test suite can be used by other centers to evaluate their Linac performance against those in this consortium.
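The core of the log-file tool described above is a threshold check on expected-versus-actual component positions. A minimal sketch of that idea, with invented field names and an illustrative (not study-specific) tolerance:

```python
# Hypothetical sketch of a log-file deviation check: compare expected vs.
# actual component positions and flag records exceeding a tolerance.
# Field names and the 0.5 mm tolerance are illustrative assumptions.

def flag_deviations(records, tolerance_mm=0.5):
    """Return records whose |expected - actual| error exceeds the tolerance."""
    flagged = []
    for rec in records:
        deviation = abs(rec["expected_mm"] - rec["actual_mm"])
        if deviation > tolerance_mm:
            flagged.append({**rec, "deviation_mm": deviation})
    return flagged

log = [
    {"component": "MLC leaf 12", "expected_mm": 34.0, "actual_mm": 34.05},
    {"component": "jaw X1", "expected_mm": 50.0, "actual_mm": 50.9},
]
print(flag_deviations(log))  # only the jaw record exceeds the 0.5 mm tolerance
```

A production tool would additionally parse the binary trajectory-log format and aggregate deviations per delivery, but the flagging logic reduces to this comparison.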

  10. Generating Linear Equations Based on Quantitative Reasoning

    ERIC Educational Resources Information Center

    Lee, Mi Yeon

    2017-01-01

    The Common Core's Standards for Mathematical Practice encourage teachers to develop their students' ability to reason abstractly and quantitatively by helping students make sense of quantities and their relationships within problem situations. The seventh-grade content standards include objectives pertaining to developing linear equations in…

  11. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    PubMed Central

    Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.

    2014-01-01

We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves in order to replace the multidimensional interpolation and fine tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157
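Because the homotopy path of a PWL circuit is itself piecewise linear, a traced trajectory can be searched exactly for operating points with the parametric straight-line equation p(t) = p0 + t(p1 - p0), no interpolation or fine tuning needed. A minimal sketch under that assumption, with an invented one-state path:

```python
import numpy as np

# Minimal sketch (not the paper's implementation): locate all points where the
# homotopy parameter lam reaches 1 along a piecewise linear traced path,
# solving the parametric straight-line equation exactly on each segment.

def crossings(path, target=1.0):
    """path: array of (lam, x) points along the traced homotopy curve."""
    sols = []
    for (l0, x0), (l1, x1) in zip(path[:-1], path[1:]):
        if (l0 - target) * (l1 - target) <= 0 and l0 != l1:
            t = (target - l0) / (l1 - l0)
            sols.append(x0 + t * (x1 - x0))
    return sols

path = np.array([(0.0, 0.0), (0.8, 1.6), (1.2, 2.4), (0.9, 3.0), (1.5, 4.2)])
print(crossings(path))  # multiple operating points found on one trajectory
```

The invented path folds back through lam = 1 more than once, mirroring the paper's observation that a single homotopy trajectory can contain several solutions.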

  12. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
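The co-activation model in which features combine linearly can be sketched as a least-squares fit: the reciprocal reaction time for a multiple-feature search is modelled as a weighted sum of single-feature reciprocal reaction times. All data values below are invented stand-ins, not the study's measurements:

```python
import numpy as np

# Hedged sketch of the linear co-activation idea: 1/RT for a multi-feature
# search is fit as a weighted sum of the single-feature 1/RTs.
# Reaction times here are invented for illustration.

rt_intensity = np.array([0.8, 1.2, 1.0, 1.5])   # seconds, single-feature searches
rt_length    = np.array([1.1, 0.9, 1.3, 1.0])
rt_combined  = np.array([0.48, 0.53, 0.57, 0.61])

X = np.column_stack([1.0 / rt_intensity, 1.0 / rt_length])
y = 1.0 / rt_combined
weights, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear combination weights
pred = X @ weights
r = np.corrcoef(pred, y)[0, 1]                    # model-data correlation
print(weights, r)
```

The race-model alternative tested in the paper would instead operate on the raw reaction times; only the co-activation form sums reciprocals linearly.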

  13. GKS. Minimal Graphical Kernel System C Binding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simons, R.W.

    1985-10-01

GKS (the Graphical Kernel System) is both an American National Standard (ANS) and an ISO international standard graphics package. It conforms to ANS X3.124-1985 and to the May 1985 draft proposal for the GKS C Language Binding standard under development by the X3H3 Technical Committee. This implementation includes level ma (the lowest level of the ANS) and some routines from level mb. The following graphics capabilities are supported: two-dimensional lines, markers, text, and filled areas; control over color, line type, and character height and alignment; multiple simultaneous workstations and multiple transformations; and locator and choice input. Tektronix 4014 and 4115 terminals are supported, and support for other devices may be added. Since this implementation was developed under UNIX, it uses makefiles, C shell scripts, the ar library maintainer, editor scripts, and other UNIX utilities. Therefore, implementing it under another operating system may require considerable effort. Also included with GKS is the small plot package (SPP), a direct descendant of the WEASEL plot package developed at Sandia. SPP is built on the GKS; therefore, all of the capabilities of GKS are available. It is not necessary to use GKS functions, since entire plots can be produced using only SPP functions, but the addition of GKS will give the programmer added power and flexibility. SPP provides single-call plot commands, linear and logarithmic axis commands, control for optional plotting of tick marks and tick mark labels, and permits plotting of data with or without markers and connecting lines.

  14. Automated Assessment of Child Vocalization Development Using LENA.

    PubMed

    Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-07-12

    To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
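The modelling pipeline described above — phone/biphone frequencies reduced to principal components, then fed to a multiple linear regression predicting a criterion language score — can be sketched in a few lines. All data here are random stand-ins, not LENA data:

```python
import numpy as np

# Illustrative sketch of the AVA pipeline: PCA (via SVD) on phone-frequency
# vectors, then multiple linear regression on the leading components.
# The data are random placeholders, not actual child recordings.

rng = np.random.default_rng(0)
freqs = rng.random((40, 12))            # 40 children x 12 phone frequencies
scores = rng.random(40) * 100           # criterion expressive-language scores

centered = freqs - freqs.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ Vt[:3].T               # first 3 principal components

X = np.column_stack([np.ones(len(pcs)), pcs])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
predicted = X @ beta                    # model-based score estimate
print(beta.shape, predicted.shape)
```

The study's age-based models fit separate regressions per age band and then standardize the predictions; that bookkeeping is omitted here.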

  15. Liquid chromatography tandem mass spectrometry assay to determine the pharmacokinetics of aildenafil in human plasma.

    PubMed

    Wang, Jiang; Jiang, Yao; Wang, Yingwu; Zhao, Xia; Cui, Yimin; Gu, Jingkai

    2007-05-09

    A simple, sensitive and specific liquid chromatography/tandem mass spectrometry method for the quantitation of aildenafil, a new phosphodiesterase V inhibitor, in human plasma is presented. The analyte and internal standard, sildenafil, were extracted by a one-step liquid-liquid extraction in alkaline conditions and separated on a C(18) column using ammonia:10mM ammonium acetate buffer:methanol (0.1:15:85, v/v/v) as the mobile phase. The detection by an API 4000 triple quadrupole mass spectrometer in multiple-reaction monitoring mode was completed within 2.5 min. The calibration curve exhibited a linear dynamic range of 0.05-100 ng/ml with a 10 pg/ml limit of detection. The intra- and inter-day precisions measured as relative standard deviation were within 8.04% and 5.72%, respectively. This method has been used in a pharmacokinetic study of aildenafil in healthy male volunteers each given an oral administration of one of the three dosages.

  16. Comparing methods of analysing datasets with small clusters: case studies using four paediatric datasets.

    PubMed

    Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil

    2009-07-01

    Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.

  17. Multiple imputation of missing fMRI data in whole brain analysis

    PubMed Central

    Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.

    2012-01-01

    Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. 
Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
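The neighbor-replacement baseline compared above can be sketched directly: a missing voxel is filled with the mean of its available 6-neighbours in the 3-D volume. (The recommended approach, multiple imputation, instead draws repeatedly from an informed sampling distribution across subjects; this single-replacement sketch only illustrates the simpler comparator.)

```python
import numpy as np

# Simplified sketch of neighbor replacement for missing voxels (NaN):
# fill each missing voxel with the mean of its available 6-neighbours.

def neighbor_replace(vol):
    filled = vol.copy()
    for i, j, k in np.argwhere(np.isnan(vol)):
        vals = []
        for di, dj, dk in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            ni, nj, nk = i + di, j + dj, k + dk
            if (0 <= ni < vol.shape[0] and 0 <= nj < vol.shape[1]
                    and 0 <= nk < vol.shape[2]):
                v = vol[ni, nj, nk]
                if not np.isnan(v):
                    vals.append(v)
        if vals:
            filled[i, j, k] = np.mean(vals)
    return filled

vol = np.ones((3, 3, 3))
vol[1, 1, 1] = np.nan
print(neighbor_replace(vol)[1, 1, 1])  # 1.0, the mean of its six neighbours
```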

  18. Single-Photon-Sensitive HgCdTe Avalanche Photodiode Detector

    NASA Technical Reports Server (NTRS)

    Huntington, Andrew

    2013-01-01

    The purpose of this program was to develop single-photon-sensitive short-wavelength infrared (SWIR) and mid-wavelength infrared (MWIR) avalanche photodiode (APD) receivers based on linear-mode HgCdTe APDs, for application by NASA in light detection and ranging (lidar) sensors. Linear-mode photon-counting APDs are desired for lidar because they have a shorter pixel dead time than Geiger APDs, and can detect sequential pulse returns from multiple objects that are closely spaced in range. Linear-mode APDs can also measure photon number, which Geiger APDs cannot, adding an extra dimension to lidar scene data for multi-photon returns. High-gain APDs with low multiplication noise are required for efficient linear-mode detection of single photons because of APD gain statistics -- a low-excess-noise APD will generate detectible current pulses from single photon input at a much higher rate of occurrence than will a noisy APD operated at the same average gain. MWIR and LWIR electron-avalanche HgCdTe APDs have been shown to operate in linear mode at high average avalanche gain (M > 1000) without excess multiplication noise (F = 1), and are therefore very good candidates for linear-mode photon counting. However, detectors fashioned from these narrow-bandgap alloys require aggressive cooling to control thermal dark current. Wider-bandgap SWIR HgCdTe APDs were investigated in this program as a strategy to reduce detector cooling requirements.

  19. Minimizing energy dissipation of matrix multiplication kernel on Virtex-II

    NASA Astrophysics Data System (ADS)

    Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook

    2002-07-01

In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total energy (system-wide energy) dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.

  20. Linear and Nonlinear Thinking: A Multidimensional Model and Measure

    ERIC Educational Resources Information Center

    Groves, Kevin S.; Vance, Charles M.

    2015-01-01

    Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…

  1. Computer Program For Linear Algebra

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

A collection of routines for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
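Although the library described is FORTRAN-callable, the same classic BLAS operations can be exercised from Python through SciPy's low-level wrappers (an assumption of this sketch, not part of the original distribution):

```python
import numpy as np
from scipy.linalg import blas

# Two classic BLAS calls via SciPy's wrappers:
# daxpy computes y <- a*x + y; dgemm computes alpha * A @ B.

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 10.0, 10.0])
y_new = blas.daxpy(x, y, a=2.0)          # 2*x + y -> [12, 14, 16]

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = blas.dgemm(alpha=1.0, a=A, b=B)      # matrix product A @ B
print(y_new, C)
```

The routine names (`daxpy`, `dgemm`) follow the standard BLAS naming scheme: precision prefix, then operation.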

  2. Multiple Intelligence Scores of Science Stream Students and Their Relation with Reading Competency in Malaysian University English Test (MUET)

    ERIC Educational Resources Information Center

    Razak, Norizan Abdul; Zaini, Nuramirah

    2014-01-01

Many studies have shown that different approaches are needed when analysing linear and non-linear reading comprehension texts and that different cognitive skills are required. This research attempts to discover the relationship between Science Stream students' reading competency on linear and non-linear texts in the Malaysian University English Test (MUET) with…

  3. The role of stress sensitization in progression of posttraumatic distress following deployment.

    PubMed

    Smid, Geert E; Kleber, Rolf J; Rademaker, Arthur R; van Zuiden, Mirjam; Vermetten, Eric

    2013-11-01

    Military personnel exposed to combat are at risk for experiencing post-traumatic distress that can progress over time following deployment. We hypothesized that progression of post-traumatic distress may be related to enhanced susceptibility to post-deployment stressors. This study aimed at examining the concept of stress sensitization prospectively in a sample of Dutch military personnel deployed in support of the conflicts in Afghanistan. In a cohort of soldiers (N = 814), symptoms of post-traumatic stress disorder (PTSD) were assessed before deployment as well as 2, 7, 14, and 26 months (N = 433; 53 %) after their return. Data were analyzed using latent growth modeling. Using multiple group analysis, we examined whether high combat stress exposure during deployment moderated the relation between post-deployment stressors and linear change in post-traumatic distress after deployment. A higher baseline level of post-traumatic distress was associated with more early life stressors (standardized regression coefficient = 0.30, p < 0.001). In addition, a stronger increase in posttraumatic distress during deployment was associated with more deployment stressors (standardized coefficient = 0.21, p < 0.001). A steeper linear increase in posttraumatic distress post-deployment (from 2 to 26 months) was predicted by more post-deployment stressors (standardized coefficient = 0.29, p < 0.001) in high combat stress exposed soldiers, but not in a less combat stress exposed group. The group difference in the predictive effect of post-deployment stressors on progression of post-traumatic distress was significant (χ²(1) = 7.85, p = 0.005). Progression of post-traumatic distress following combat exposure may be related to sensitization to the effects of post-deployment stressors during the first year following return from deployment.

  4. DYGABCD: A program for calculating linear A, B, C, and D matrices from a nonlinear dynamic engine simulation

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.

    1978-01-01

    A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a users manual, FORTRAN listings, and a sample case.
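The core operation DYGABCD performs — extracting a linear state-space description from a nonlinear simulation — is typically done by numerically perturbing the model about an operating point. An illustrative sketch (not DYGABCD itself, and with a toy dynamics function in place of an engine simulation):

```python
import numpy as np

# Illustrative finite-difference linearization: recover the A and B matrices
# of x' = A x + B u from a nonlinear model f(x, u) at an operating point.
# f is a toy stand-in, not an engine simulation.

def f(x, u):
    return np.array([-x[0]**2 + u[0], x[0] - 0.5 * x[1]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    f0 = f(x0, u0)
    for j in range(n):                      # perturb each state
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):                      # perturb each input
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

A, B = linearize(f, np.array([1.0, 2.0]), np.array([0.5]))
print(A)  # approximately [[-2, 0], [1, -0.5]]
print(B)  # approximately [[1], [0]]
```

The C and D matrices follow the same pattern, with the output equation perturbed instead of the state derivative.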

  5. Transmit Designs for the MIMO Broadcast Channel With Statistical CSI

    NASA Astrophysics Data System (ADS)

    Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan

    2014-09-01

We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound on the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low-complexity transmission scheme is provided. Numerical examples illustrate the strong performance of the proposed low-complexity scheme.

  6. Non-linear wave-particle interactions and fast ion loss induced by multiple Alfvén eigenmodes in the DIII-D tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Kramer, Gerrit J.; Heidbrink, William W.

    2014-05-21

A new non-linear feature has been observed in fast-ion loss from tokamak plasmas in the form of oscillations at the sum, difference and second harmonic frequencies of two independent Alfvén eigenmodes (AEs). Full orbit calculations and analytic theory indicate this non-linearity is due to coupling of the fast-ion orbital response as it passes through each AE: a change in wave-particle phase k·r by one mode alters the force exerted by the next. Furthermore, the loss measurement is of barely confined, non-resonant particles, while similar non-linear interactions can occur between well-confined particles and multiple AEs, leading to enhanced fast-ion transport.

  7. Quantitation of low molecular weight sugars by chemical derivatization-liquid chromatography/multiple reaction monitoring/mass spectrometry.

    PubMed

    Han, Jun; Lin, Karen; Sequria, Carita; Yang, Juncong; Borchers, Christoph H

    2016-07-01

A new method for the separation and quantitation of 13 mono- and disaccharides has been developed by chemical derivatization/ultra-HPLC/negative-ion ESI-multiple-reaction monitoring MS. 3-Nitrophenylhydrazine (at 50°C for 60 min) was shown to be able to quantitatively derivatize low-molecular-weight (LMW) reducing sugars. The nonreducing sugar, sucrose, was not derivatized. A pentafluorophenyl-bonded phase column was used for the chromatographic separation of the derivatized sugars. This method exhibits femtomole-level sensitivity, high precision (CVs of ≤ 4.6%) and high accuracy for the quantitation of LMW sugars in wine. Excellent linearity (R(2) ≥ 0.9993) and linear ranges of ∼500-fold for disaccharides and ∼1000-4000-fold for monosaccharides were achieved. With internal calibration ((13)C-labeled internal standards), recoveries were between 93.6% ± 1.6% (xylose) and 104.8% ± 5.2% (glucose). With external calibration, recoveries ranged from 82.5% ± 0.8% (ribulose) to 105.2% ± 2.1% (xylulose). Quantitation of sugars in two red wines and two white wines was performed using this method; quantitation of the central carbon metabolism-related carboxylic acids and tartaric acid was carried out using a previously established derivatization procedure with 3-nitrophenylhydrazine as well. The results showed that these two classes of compounds, both of which have important organoleptic properties, had different compositions in red and white wines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
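The internal-calibration and recovery figures above come from a standard computation: regress the analyte-to-internal-standard response ratio on concentration, back-calculate a spiked QC sample, and express it as a percentage of the known spike. A sketch with invented numbers (not the wine data):

```python
import numpy as np

# Hedged sketch of internal calibration and recovery. With internal
# calibration the response is the ratio of analyte peak area to that of a
# co-spiked isotope-labelled standard. All values are invented.

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])        # standard concentrations
area = np.array([2.1, 10.4, 20.6, 103.0, 207.0])      # analyte peak areas
is_area = np.full_like(area, 100.0)                   # internal-standard areas

slope, intercept = np.polyfit(conc, area / is_area, 1)  # calibration line

qc_known = 20.0                                       # spiked QC concentration
qc_ratio = 0.414                                      # measured QC response ratio
qc_measured = (qc_ratio - intercept) / slope          # back-calculated amount
recovery = 100.0 * qc_measured / qc_known             # percent recovery
print(round(recovery, 1))
```

External calibration follows the same arithmetic but regresses raw analyte areas on concentration, which is why it is more sensitive to matrix effects.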

  8. Simultaneous Determination of Soyasaponins and Isoflavones in Soy (Glycine max L.) Products by HPTLC-densitometry-Multiple Detection.

    PubMed

    Shawky, Eman; Sallam, Shaimaa M

    2017-11-01

A new high-throughput method was developed for the simultaneous analysis of isoflavones and soyasaponins in soy (Glycine max L.) products by high-performance thin-layer chromatography with densitometry and multiple detection. Silica gel was used as the stationary phase and ethyl acetate:methanol:water:acetic acid (100:20:16:1, v/v/v/v) as the mobile phase. After chromatographic development, multi-wavelength scanning was carried out by: (i) UV-absorbance measurement at 265 nm for genistin, daidzin and glycitin, (ii) Vis-absorbance measurement at 650 nm for soyasaponins I and III, after post-chromatographic derivatization with anisaldehyde/sulfuric acid reagent. Validation of the developed method was found to meet the acceptance criteria delineated by ICH guidelines with respect to linearity, accuracy, precision, specificity and robustness. Calibrations were linear with correlation coefficients of >0.994. Intra-day precisions (relative standard deviation, RSD%) of all substances in matrix were determined to be between 0.7 and 0.9%, while inter-day precisions (RSD%) ranged between 1.2 and 1.8%. The validated method was successfully applied for determination of the studied analytes in soy-based infant formula and soybean products. The new method compares favorably to other reported methods in being as accurate and precise and at the same time more feasible and cost-effective. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Determination of grain-size characteristics from electromagnetic seabed mapping data: A NW Iberian shelf study

    NASA Astrophysics Data System (ADS)

    Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.

    2017-05-01

The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R = -0.79) as well as between mean grain size and susceptibility (R = -0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that the multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in worldwide continental shelf systems.
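The simple-versus-multiple regression comparison described above can be sketched with synthetic stand-in data: predict mean grain size from conductivity alone, then from conductivity plus susceptibility, and compare the explained variance:

```python
import numpy as np

# Sketch of the regression comparison with synthetic (not NW Iberian) data:
# a multiple linear model using both predictors versus a simple model.

rng = np.random.default_rng(2)
cond = rng.random(33)                       # electric conductivity (scaled)
susc = rng.random(33)                       # magnetic susceptibility (scaled)
grain = 5.0 - 2.0 * cond - 1.5 * susc + 0.1 * rng.normal(size=33)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones_like(cond)
r2_simple = r_squared(np.column_stack([ones, cond]), grain)
r2_multiple = r_squared(np.column_stack([ones, cond, susc]), grain)
print(r2_simple, r2_multiple)  # the multiple model explains more variance
```

As in the study, adding the second electromagnetic property improves the fit whenever the target depends on both predictors.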

  10. Pharmacokinetics and Tissue Distribution Study of Chlorogenic Acid from Lonicerae Japonicae Flos Following Oral Administrations in Rats

    PubMed Central

    Zhou, Yulu; Zhou, Ting; Pei, Qi; Liu, Shikun; Yuan, Hong

    2014-01-01

Chlorogenic acid (ChA) is proposed as the major bioactive compound of Lonicerae Japonicae Flos (LJF). Forty-two Wistar rats were randomly divided into seven groups to investigate the pharmacokinetics and tissue distribution of ChA via oral administration of LJF extract, using ibuprofen as internal standard and employing high performance liquid chromatography in conjunction with tandem mass spectrometry. Analytes were extracted from plasma samples and tissue homogenate by liquid-liquid extraction with acetonitrile, separated on a C18 column by linear gradient elution, and detected by electrospray ionization mass spectrometry in negative selected multiple reaction monitoring mode. Our results demonstrate that the method has satisfactory selectivity, linearity, extraction recovery, matrix effect, precision, accuracy, and stability. A noncompartmental pharmacokinetic analysis revealed that ChA was rapidly absorbed and eliminated. The tissue study indicated that the highest level was observed in liver, followed by kidney, lung, heart, and spleen. In conclusion, this method was suitable for studying the pharmacokinetics and tissue distribution of ChA after oral administration. PMID:25140190
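The noncompartmental analysis used above reduces to a few standard computations: AUC by the linear trapezoidal rule, terminal half-life from a log-linear fit of the last time points, and Cmax/Tmax read directly from the profile. A sketch with invented concentration data:

```python
import numpy as np

# Minimal noncompartmental analysis sketch. Time-concentration values are
# invented for illustration, not ChA data.

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])         # h
c = np.array([120.0, 180.0, 150.0, 80.0, 30.0, 5.0])  # ng/mL

auc = np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0)  # trapezoidal AUC
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)         # terminal log-linear slope
half_life = np.log(2) / -slope                           # elimination half-life
cmax, tmax = c.max(), t[np.argmax(c)]
print(auc, half_life, cmax, tmax)
```

Rapid absorption and elimination, as reported for ChA, show up in such a profile as an early Tmax and a short terminal half-life.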

  11. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
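The standard (non-approximate) route the article starts from — solving an algebraic Riccati equation for the steady-state Kalman gain — can be shown on a small toy system. The point of the paper is precisely that this solve scales poorly as the state dimension grows with aperture size; the system below is an invented 2-state example, not an AO model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Steady-state Kalman gain from the discrete algebraic Riccati equation,
# for a toy 2-state system (illustrative, not an adaptive-optics model).

A = np.array([[0.99, 0.1], [0.0, 0.95]])    # state transition
C = np.array([[1.0, 0.0]])                  # measurement matrix
Q = 0.01 * np.eye(2)                        # process noise covariance
R = np.array([[0.1]])                       # measurement noise covariance

P = solve_discrete_are(A.T, C.T, Q, R)      # steady-state prediction covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # Kalman gain
print(K.shape)  # (2, 1)
```

Standard dense Riccati solvers cost roughly cubic time in the state dimension, which is the "curse of dimensionality" the cropped-screen approximation is designed to avoid.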

  12. An in-situ Raman study on pristane at high pressure and ambient temperature

    NASA Astrophysics Data System (ADS)

    Wu, Jia; Ni, Zhiyong; Wang, Shixia; Zheng, Haifei

    2018-01-01

The C-H Raman spectroscopic band (2800-3000 cm-1) of pristane was measured in a diamond anvil cell at 1.1-1532 MPa and ambient temperature. Three models are used for peak-fitting of this C-H Raman band, and the linear correlations between pressure and the corresponding peak positions are calculated as well. The results demonstrate that 1) the number of peaks chosen to fit the spectrum affects the results, which indicates that applying this spectroscopic barometry to a functional group of organic matter suffers significant limitations; and 2) the linear correlation between pressure and the fitted peak position from the one-peak model is superior to that from the multiple-peak models, while the standard error of the latter is much higher than that of the former. This indicates that the Raman shift of the C-H band fitted with a one-peak model, which could be treated as a spectroscopic barometry, is more realistic in mixture systems than the traditional strategy which uses the Raman characteristic shift of one functional group.
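A one-peak fit of a Raman band is a nonlinear least-squares problem; the fitted peak position is then what gets regressed linearly against pressure. A sketch with a synthetic Gaussian band (an assumed line shape, not the pristane data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative one-peak Gaussian fit of a synthetic Raman band; the fitted
# center is the quantity one would correlate linearly with pressure.

def gaussian(x, amp, center, width):
    return amp * np.exp(-(x - center)**2 / (2 * width**2))

x = np.linspace(2800, 3000, 201)                        # Raman shift, cm^-1
y = gaussian(x, 1.0, 2900.0, 15.0) + 0.01 * np.sin(x)   # band + small ripple

popt, _ = curve_fit(gaussian, x, y, p0=[0.8, 2890.0, 10.0])
print(round(popt[1], 1))  # fitted peak position near 2900 cm^-1
```

A multiple-peak model adds further `gaussian` terms with their own parameters, which is exactly where the paper finds the fitted positions become less stable.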

  13. Determination of chlorpyrifos and its metabolites in cells and culture media by liquid chromatography-electrospray ionization tandem mass spectrometry.

    PubMed

    Yang, Xiangkun; Wu, Xian; Brown, Kyle A; Le, Thao; Stice, Steven L; Bartlett, Michael G

    2017-09-15

    A sensitive method to simultaneously quantitate chlorpyrifos (CPF), chlorpyrifos oxon (CPO) and the detoxified product 3,5,6-trichloro-2-pyridinol (TCP) was developed using either liquid-liquid extraction for culture media samples, or protein precipitation for cell samples. Multiple reaction monitoring in positive ion mode was applied for the detection of chlorpyrifos and chlorpyrifos oxon, and selected ion recording in negative mode was applied to detect TCP. The method provided linear ranges of 5-500, 0.2-20 and 20-2000 ng/mL for media samples and 0.5-50, 0.02-2 and 2-200 ng/million cells for CPF, CPO and TCP, respectively. The method was validated using selectivity, linearity, precision, accuracy, recovery, stability and dilution tests. All relative standard deviations (RSDs) and relative errors (REs) for QC samples were within 15% (except at the lower limit of quantification, LLOQ, within 20%). This method has been successfully applied to study the neurotoxicity and metabolism of chlorpyrifos in a human neuronal model. Copyright © 2017 Elsevier B.V. All rights reserved.
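    The acceptance rule quoted above (RSD and RE within 15%, or 20% at the LLOQ) can be expressed as a small helper. The replicate values below are invented for illustration:

```python
# QC acceptance check: relative standard deviation (precision) and relative
# error (accuracy/bias) of replicate measurements against the nominal value.
import numpy as np

def qc_passes(measured, nominal, limit=0.15):
    measured = np.asarray(measured, dtype=float)
    rsd = measured.std(ddof=1) / measured.mean()      # precision
    re = abs(measured.mean() - nominal) / nominal     # accuracy (bias)
    return bool(rsd <= limit and re <= limit)

print(qc_passes([4.8, 5.1, 5.3, 4.9, 5.0], nominal=5.0))               # 15% rule
print(qc_passes([4.8, 5.1, 5.3, 4.9, 5.0], nominal=5.0, limit=0.20))   # LLOQ rule
```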

  14. Development and validation of a liquid chromatography-isotope dilution tandem mass spectrometry for determination of olanzapine in human plasma and its application to bioavailability study.

    PubMed

    Zhang, Meng-Qi; Jia, Jing-Ying; Lu, Chuan; Liu, Gang-Yi; Yu, Cheng-Yin; Gui, Yu-Zhou; Liu, Yun; Liu, Yan-Mei; Wang, Wei; Li, Shui-Jun; Yu, Chen

    2010-06-01

    A simple, reliable and sensitive liquid chromatography-isotope dilution mass spectrometry (LC-ID/MS) method was developed and validated for quantification of olanzapine in human plasma. Plasma samples (50 microL) were extracted with tert-butyl methyl ether, and an isotope-labeled internal standard (olanzapine-D3) was used. The chromatographic separation was performed on an XBridge Shield RP 18 column (100 mm x 2.1 mm, 3.5 microm, Waters). An isocratic program was used at a flow rate of 0.4 mL x min(-1) with a mobile phase consisting of acetonitrile and ammonium buffer (pH 8). The protonated ions of the analytes were detected in positive ionization mode by multiple reaction monitoring (MRM). The plasma method, with a lower limit of quantification (LLOQ) of 0.1 ng x mL(-1), demonstrated good linearity over a range of 0.1-30 ng x mL(-1) of olanzapine. Specificity, linearity, accuracy, precision, recovery, matrix effect and stability were evaluated during method validation. The validated method was successfully applied to the analysis of human plasma samples in a bioavailability study.

  15. Broadband external cavity quantum cascade laser based sensor for gasoline detection

    NASA Astrophysics Data System (ADS)

    Ding, Junya; He, Tianbo; Zhou, Sheng; Li, Jinsong

    2018-02-01

    A new type of tunable diode spectroscopy sensor based on an external cavity quantum cascade laser (ECQCL) and a quartz crystal tuning fork (QCTF) was used for quantitative analysis of volatile organic compounds. In this work, the sensor system was tested on the analysis of different gasoline samples. For signal processing, a self-established interpolation algorithm and a multiple linear regression model were used for quantitative analysis of the major volatile organic compounds in the gasoline samples. The results were very consistent with the standard spectra taken from the Pacific Northwest National Laboratory (PNNL) database. In the future, the ECQCL sensor will be used for trace explosive, chemical warfare agent, and toxic industrial chemical detection and spectroscopic analysis.
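    The multiple-linear-regression step of such spectroscopic quantification can be sketched as linear unmixing of a measured spectrum against reference component spectra. The Gaussian bands below are hypothetical stand-ins for library spectra, not PNNL data:

```python
# Spectral unmixing by least squares: model a measured absorbance spectrum as
# a linear combination of reference spectra; the coefficients estimate the
# component amounts. Everything here is synthetic for illustration.
import numpy as np

wavenumber = np.linspace(900, 1100, 200)

def band(center, width):  # simple Gaussian reference band
    return np.exp(-((wavenumber - center) / width) ** 2)

# Reference spectra of two hypothetical components (columns of the design matrix).
S = np.column_stack([band(950, 15), band(1050, 20)])
true_conc = np.array([0.7, 0.3])
measured = S @ true_conc + 0.001 * np.random.default_rng(0).standard_normal(200)

conc, *_ = np.linalg.lstsq(S, measured, rcond=None)
print(conc)
```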

  16. Improving membrane based multiplex immunoassays for semi-quantitative detection of multiple cytokines in a single sample

    PubMed Central

    2014-01-01

    Background Inflammatory mediators can serve as biomarkers for the monitoring of the disease progression or prognosis in many conditions. In the present study we introduce an adaptation of a membrane-based technique in which the level of up to 40 cytokines and chemokines can be determined in both human and rodent blood in a semi-quantitative way. The planar assay was modified using the LI-COR (R) detection system (fluorescence based) rather than chemiluminescence and semi-quantitative outcomes were achieved by normalizing the outcomes using the automated exposure settings of the Odyssey readout device. The results were compared to the gold standard assay, namely ELISA. Results The improved planar assay allowed the detection of a considerably higher number of analytes (n = 30 and n = 5 for fluorescent and chemiluminescent detection, respectively). The improved planar method showed high sensitivity up to 17 pg/ml and a linear correlation of the normalized fluorescence intensity with the results from the ELISA (r = 0.91). Conclusions The results show that the membrane-based technique is a semi-quantitative assay that correlates satisfactorily to the gold standard when enhanced by the use of fluorescence and subsequent semi-quantitative analysis. This promising technique can be used to investigate inflammatory profiles in multiple conditions, particularly in studies with constraints in sample sizes and/or budget. PMID:25022797

  17. Quantification of methionine and selenomethionine in biological samples using multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS).

    PubMed

    Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel

    2018-05-01

    Quantification of selenated amino acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino acids, hampering the comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of the selenium/sulfur substitution rate in Met. Moreover, the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.

  18. Implied alignment: a synapomorphy-based multiple-sequence alignment method and its use in cladogram search

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendent states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may be different from that of other alignment procedures. As with all alignment methods, results are dependent on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable, since the homologies derived from the implied alignment depend on its natal cladogram, and any variance between DO and IA + search is due to the heuristic approach. The utility of this procedure in heuristic cladogram searches using DO and the improvement of heuristic cladogram cost calculations are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  19. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
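    A minimal simulation of a linear TSRI estimator with a bootstrap standard error, one of the corrected alternatives the authors recommend over unadjusted second-stage standard errors. The data-generating values are arbitrary illustrations:

```python
# Two-stage residual inclusion (TSRI) in the linear case, with a bootstrap SE.
# Stage 1 regresses the exposure on the instrument; stage 2 regresses the
# outcome on the exposure plus the stage-1 residuals.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.standard_normal(n)                    # instrument (genotype proxy)
u = rng.standard_normal(n)                    # unmeasured confounder
x = 0.8 * z + u + rng.standard_normal(n)      # exposure
y = 0.5 * x + u + rng.standard_normal(n)      # outcome; true causal effect 0.5

def tsri(z, x, y):
    g1 = np.polyfit(z, x, 1)                  # stage 1
    r1 = x - np.polyval(g1, z)                # stage-1 residuals
    X2 = np.column_stack([np.ones_like(x), x, r1])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[1]                            # coefficient on the exposure

est = tsri(z, x, y)
boots = [tsri(z[i], x[i], y[i])
         for i in (rng.integers(0, n, n) for _ in range(200))]
se = np.std(boots, ddof=1)
print(est, se)
```

    The Newey- and Terza-style analytic corrections discussed in the abstract serve the same purpose as this bootstrap: accounting for the estimated stage-1 residuals when forming the second-stage standard error.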

  20. A Common Mechanism for Resistance to Oxime Reactivation of Acetylcholinesterase Inhibited by Organophosphorus Compounds

    DTIC Science & Technology

    2013-01-01

    …application of the Hammett equation with the constants rph in the chemistry of organophosphorus compounds, Russ. Chem. Rev. 38 (1969) 795–811. … of oximes and OP compounds and the ability of oximes to reactivate OP-inhibited AChE. Multiple linear regression equations were analyzed using … 21 oxime/phosphonate pairs, 21 oxime/phosphoramidate pairs and 12 oxime/phosphate pairs. The best linear regression equation resulting from multiple regression anal…

  1. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  2. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
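    The Pillai–Bartlett trace used in these tests can be computed directly from the hypothesis and error sums-of-squares-and-cross-products (SSCP) matrices of a multivariate linear model. A small simulated sketch (not the study's data, and without the approximate F conversion):

```python
# Pillai-Bartlett trace for a multivariate regression of two traits on one
# genetic covariate: trace(H (H + E)^-1), where H and E are the hypothesis
# and error SSCP matrices. All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 300
g = rng.integers(0, 3, n).astype(float)        # genotype coded 0/1/2
traits = np.column_stack([
    0.3 * g + rng.standard_normal(n),          # trait 1, associated
    0.2 * g + rng.standard_normal(n),          # trait 2, associated
])

X = np.column_stack([np.ones(n), g])           # intercept + genotype
B = np.linalg.lstsq(X, traits, rcond=None)[0]
fitted = X @ B
resid = traits - fitted
E = resid.T @ resid                            # error SSCP
centered = fitted - traits.mean(axis=0)        # valid with an intercept term
H = centered.T @ centered                      # hypothesis SSCP for the slope
pillai = np.trace(H @ np.linalg.inv(H + E))
print(pillai)
```

    Larger values indicate stronger multivariate association; the paper converts such statistics into approximate F-tests to obtain p-values.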

  3. Wavelet regression model in forecasting crude oil price

    NASA Astrophysics Data System (ADS)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with that of regular multiple linear regression (MLR), the autoregressive integrated moving average (ARIMA) model and generalized autoregressive conditional heteroscedasticity (GARCH), using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
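    The WMLR idea, decompose with a DWT and then regress on the components, can be sketched with a single-level Haar transform and ordinary least squares. The series below is synthetic, and a single Haar level is a simplification of the paper's multiscale decomposition:

```python
# One-level Haar DWT (pairwise averages = approximation, pairwise
# differences = detail), then a multiple linear regression forecasting the
# next approximation coefficient from the current components.
import numpy as np

rng = np.random.default_rng(3)
price = np.cumsum(rng.standard_normal(256)) + 50.0  # synthetic price series

a = (price[0::2] + price[1::2]) / np.sqrt(2)        # approximation
d = (price[0::2] - price[1::2]) / np.sqrt(2)        # detail

X = np.column_stack([np.ones(len(a) - 1), a[:-1], d[:-1]])
y = a[1:]
coef = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(coef, rmse)
```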

  4. New nonlinear control algorithms for multiple robot arms

    NASA Technical Reports Server (NTRS)

    Tarn, T. J.; Bejczy, A. K.; Yun, X.

    1988-01-01

    Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and a simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output-decoupled system. Linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that choosing a general output equation makes it possible to superimpose the position and velocity error feedback with the force-torque error feedback in the task space.
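    The feedback-linearization idea can be illustrated on a single pendulum rather than coordinated arms: a nonlinear feedback cancels the gravity term, leaving a linear double integrator that a simple linear controller drives to the target. All constants here are illustrative:

```python
# Feedback linearization of a pendulum: theta'' = -(g/l) sin(theta) + u/(m l^2).
# Choosing u = m l^2 (v + (g/l) sin(theta)) gives theta'' = v exactly, so a
# linear law v = -k1*theta - k2*omega stabilizes the origin.
import numpy as np

g, l, m = 9.81, 1.0, 1.0
dt, theta, omega = 0.001, 1.0, 0.0            # start 1 rad from the target

for _ in range(20000):                        # 20 s of Euler integration
    v = -4.0 * theta - 4.0 * omega            # linear control in new coordinates
    u = m * l**2 * (v + (g / l) * np.sin(theta))       # cancels the nonlinearity
    alpha = -(g / l) * np.sin(theta) + u / (m * l**2)  # equals v by construction
    omega += alpha * dt
    theta += omega * dt

print(abs(theta), abs(omega))
```

    The closed loop is the critically damped linear system s^2 + 4s + 4, so both states decay to zero; the multi-arm case in the abstract applies the same cancellation-plus-linear-design pattern with matrix-valued dynamics.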

  5. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities

    PubMed Central

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66–96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges’ Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard. PMID:29513690
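    The discretization step, Sturges' formula for the class count followed by 1-D k-means, can be sketched as follows. The measurements are simulated; only the subject count (3,162) and the twelve-class target are taken from the abstract:

```python
# Sturges' formula picks the number of classes, then Lloyd's algorithm runs
# k-means in one dimension. The values are simulated for illustration.
import numpy as np

rng = np.random.default_rng(4)
values = rng.normal(70, 10, 3162)            # e.g. a biometric measurement

k = int(1 + np.log2(len(values)))            # Sturges' formula: 12 for n = 3162
centers = np.quantile(values, (np.arange(k) + 0.5) / k)  # spread initial centers

for _ in range(50):                          # Lloyd's algorithm in 1-D
    labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
    new = np.array([values[labels == j].mean() if np.any(labels == j)
                    else centers[j] for j in range(k)])
    if np.allclose(new, centers):
        break
    centers = new

print(k, np.sort(centers))
```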

  6. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities.

    PubMed

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard.

  7. Accurate GM atrophy quantification in MS using lesion-filling with co-registered 2D lesion masks☆

    PubMed Central

    Popescu, V.; Ran, N.C.G.; Barkhof, F.; Chard, D.T.; Wheeler-Kingshott, C.A.; Vrenken, H.

    2014-01-01

    Background In multiple sclerosis (MS), brain atrophy quantification is affected by white matter lesions. LEAP and FSL-lesion_filling replace lesion voxels with white matter intensities; however, they require precise lesion identification on 3DT1 images. Aim To determine whether 2DT2 lesion masks co-registered to 3DT1 images yield grey and white matter volumes comparable to precise lesion masks. Methods 2DT2 lesion masks were linearly co-registered to 20 3DT1 images of MS patients, with nearest-neighbor (NNI) and tri-linear interpolation. As the gold standard, lesion masks were manually outlined on 3DT1 images. LEAP and FSL-lesion_filling were applied with each lesion mask. Grey (GM) and white matter (WM) volumes were quantified with FSL-FAST, and deep gray matter (DGM) volumes using FSL-FIRST. Volumes were compared between lesion mask types using paired Wilcoxon tests. Results Lesion-filling with gold-standard lesion masks compared to native images reduced GM overestimation by 1.93 mL (p < .001) for LEAP, and 1.21 mL (p = .002) for FSL-lesion_filling. Similar effects were achieved with NNI lesion masks from 2DT2. Global WM underestimation was not significantly influenced. GM and WM volumes from NNI did not differ significantly from the gold standard. GM segmentation differed between lesion masks in the lesion area, and also elsewhere. Using the gold standard, FSL-FAST quantified as GM on average 0.4% of the lesion area with LEAP and 24.5% with FSL-lesion_filling. Lesion-filling did not influence DGM volumes from FSL-FIRST. Discussion These results demonstrate that for global GM volumetry, precise lesion masks on 3DT1 images can be replaced by co-registered 2DT2 lesion masks. This makes lesion-filling a feasible method for GM atrophy measurements in MS. PMID:24567908

  8. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    PubMed

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again in different trials or assays, despite the efforts for a near-equal design, scientists often obtain quite different measurements. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
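    As a hedged illustration of the kind of correction described (the paper's exact procedure is not reproduced here), centering each assay removes assay-level offsets and shrinks per-system standard deviations when only between-system differences matter and assays are highly correlated:

```python
# Simulate 6 assays of 5 systems where each assay carries a large shared
# offset. Subtracting each assay's own mean (valid when absolute values do
# not matter) removes the offset and shrinks the per-system scatter.
import numpy as np

rng = np.random.default_rng(5)
true_effect = np.array([1.0, 1.5, 2.0, 3.0, 5.0])        # 5 systems
assay_offset = rng.normal(0, 2.0, size=(6, 1))           # 6 assays, large shifts
data = true_effect + assay_offset + rng.normal(0, 0.1, size=(6, 5))

raw_sd = data.std(axis=0, ddof=1)                        # per-system SD, raw
centered = data - data.mean(axis=1, keepdims=True)       # remove assay offsets
corr_sd = centered.std(axis=0, ddof=1)                   # per-system SD, corrected
print(raw_sd.mean(), corr_sd.mean())
```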

  9. Sex Differences in Diabetes Mellitus Mortality Trends in Brazil, 1980-2012

    PubMed Central

    Brito, Alexandre dos Santos; Pinheiro, Rejane Sobrino; Cabral, Cristiane da Silva; de Camargo, Thais Medina Coeli Rochel

    2016-01-01

    Aims To investigate the hypothesis that the change from the female predominance of diabetes mellitus mortality to a pattern of equality or even male preponderance can already be observed in Brazilian mortality statistics. Methods Data on deaths for which diabetes mellitus was listed as the underlying cause were obtained from the Brazilian Mortality Information System for the years 1980 to 2012. The mortality data were also analyzed according to the multiple causes of death approach from 2001 to 2012. The population data came from the Brazilian Institute of Geography and Statistics. The mortality rates were standardized to the world population. We used a log-linear joinpoint regression to evaluate trends in age-standardized mortality rates (ASMR). Results From 1980 to 2012, we found a marked increment in the diabetes ASMR among Brazilian men and a less sharp increase in the rate among women, with the latter period (2003–2012) showing a slight decrease among women, though it was not statistically significant. Conclusions The results of this study suggest that diabetes mellitus in Brazil has changed from a pattern of higher mortality among women compared to men to equality or even male predominance. PMID:27275600
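    The log-linear trend analysis mentioned above reduces, within a single segment, to regressing log(ASMR) on calendar year; the annual percent change (APC) is then 100*(e^slope - 1). The rates below are invented, and the joinpoint machinery (finding the segment breaks) is omitted:

```python
# Log-linear trend in an age-standardized mortality rate: fit
# log(rate) ~ year and convert the slope to an annual percent change.
import numpy as np

year = np.arange(1980, 2013)
asmr = 20.0 * 1.03 ** (year - 1980)          # a rate growing ~3% per year

slope, intercept = np.polyfit(year, np.log(asmr), 1)
apc = 100.0 * (np.exp(slope) - 1.0)
print(round(apc, 2))  # ~3.0 percent per year
```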

  10. Comparison of standardized versus individualized caloric prescriptions in the nutritional rehabilitation of inpatients with anorexia nervosa

    PubMed Central

    Haynos, Ann F.; Snipes, Cassandra; Guarda, Angela; Mayer, Laurel E.; Attia, Evelyn

    2015-01-01

    Objective Sparse research informs how caloric prescriptions should be advanced during nutritional rehabilitation of inpatients with anorexia nervosa (AN). This study compared the impact of a standardized caloric increase approach, in which increases occurred on a predetermined schedule, to an individualized approach, in which increases occurred only following insufficient weight gain, on rate, pattern, and cumulative amount of weight gain and other weight restoration outcomes. Method This study followed a natural experiment design comparing AN inpatients consecutively admitted before (n = 35) and after (n = 35) an institutional change from individualized to standardized caloric prescriptions. Authors examined the impact of prescription plan on weekly weight gain in the first treatment month using multilevel modeling. Within a subsample remaining inpatient through weight restoration (n = 40), multiple regressions examined the impact of caloric prescription plan on time to weight restoration, length of hospitalization, maximum caloric prescription, discharge BMI, and incidence of activity restriction and edema. Results There were significant interactions between prescription plan and quadratic time on average weekly weight gain (p = .03) and linear time on cumulative weekly weight gain (p < .001). Under the standardized plan, patients gained in an accelerated curvilinear pattern (p = .04) and, therefore, gained cumulatively greater amounts of weight over time (p < .001). Additionally, 30% fewer patients required activity restriction under the standardized plan. Discussion Standardized caloric prescriptions may confer advantage by facilitating accelerated early weight gain and lower incidence of bed rest without increasing the incidence of refeeding syndrome. PMID:26769581

  11. Understanding Child Stunting in India: A Comprehensive Analysis of Socio-Economic, Nutritional and Environmental Determinants Using Additive Quantile Regression

    PubMed Central

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A.

    2013-01-01

    Background Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. Objective We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Design Using cross-sectional data for children aged 0–24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. Results At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Conclusions Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role. PMID:24223839
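    Quantile regression, as used for the 35% quantile above, minimizes the pinball (check) loss. A toy linear sketch using a general-purpose optimizer rather than the study's additive models; the variables and coefficients are invented:

```python
# Quantile regression for tau = 0.35 by direct minimization of the pinball
# loss, for a single-covariate linear model on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
x = rng.uniform(0, 24, 500)                        # e.g. child age in months
y = -0.05 * x + rng.standard_normal(500)           # toy height-for-age Z-score

def pinball(params, tau=0.35):
    a, b = params
    r = y - (a + b * x)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

res = minimize(pinball, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(a_hat, b_hat)
```

    Fitting several values of tau, as the study does with four quantiles, traces out how covariate effects differ across the height-for-age distribution.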

  12. Linear and volumetric dimensional changes of injection-molded PMMA denture base resins.

    PubMed

    El Bahra, Shadi; Ludwig, Klaus; Samran, Abdulaziz; Freitag-Wolf, Sandra; Kern, Matthias

    2013-11-01

    The aim of this study was to evaluate the linear and volumetric dimensional changes of six denture base resins processed by their corresponding injection-molding systems at 3 time intervals of water storage. Two heat-curing (SR Ivocap Hi Impact and Lucitone 199) and four auto-curing (IvoBase Hybrid, IvoBase Hi Impact, PalaXpress, and Futura Gen) acrylic resins were used with their specific injection-molding technique to fabricate 6 specimens of each material. Linear and volumetric dimensional changes were determined by means of a digital caliper and an electronic hydrostatic balance, respectively, after water storage of 1, 30, or 90 days. Means and standard deviations of linear and volumetric dimensional changes were calculated as percentages (%). Statistical analysis was done using Student's and Welch's t tests with Bonferroni-Holm correction for multiple comparisons (α=0.05). Statistically significant differences in linear dimensional changes between resins were demonstrated at all three time intervals of water immersion (p≤0.05), with the exception of the following comparisons, which showed no significant difference: IvoBase Hi Impact/SR Ivocap Hi Impact and PalaXpress/Lucitone 199 after 1 day, Futura Gen/PalaXpress and PalaXpress/Lucitone 199 after 30 days, and IvoBase Hybrid/IvoBase Hi Impact after 90 days. Also, statistically significant differences in volumetric dimensional changes between resins were found at all three time intervals of water immersion (p≤0.05), with the exception of the comparison between PalaXpress and Futura Gen. Denture base resins (IvoBase Hybrid and IvoBase Hi Impact) processed by the new injection-molding system (IvoBase) revealed superior dimensional precision. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
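
    The Bonferroni-Holm (Holm step-down) correction used above can be sketched in a few lines; the p-values below are hypothetical examples, not values from the study:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Return a boolean rejection decision for each p-value (Holm step-down)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)                 # test p-values from smallest to largest
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):  # threshold relaxes at each step
            reject[idx] = True
        else:
            break                         # once one test fails, all larger p's fail
    return reject

print(holm_bonferroni([0.001, 0.01, 0.04, 0.20]))
```

    Here the first two hypotheses are rejected (0.001 ≤ 0.05/4 and 0.01 ≤ 0.05/3) while 0.04 exceeds its threshold of 0.05/2, stopping the procedure.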

  13. Understanding child stunting in India: a comprehensive analysis of socio-economic, nutritional and environmental determinants using additive quantile regression.

    PubMed

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A

    2013-01-01

    Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.

  14. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    PubMed

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on the ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved even though single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
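
    A minimal CPU-side sketch of the Jacobi-preconditioned CG idea, using SciPy on a 1-D finite-difference Poisson system (the paper's solvers run on GPU via cuSPARSE/cuBLAS/CUSP; this only illustrates the preconditioner, not the authors' implementation):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Small 1-D finite-difference Poisson system standing in for the 3-D PBE grid
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: apply the inverse of diag(A) at each iteration
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M)            # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```

    On a GPU the same structure applies, with the sparse matrix-vector products dispatched to cuSPARSE; a diagonal (DIA) storage format suits the banded finite-difference matrix, as the abstract notes.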

  15. Ranking of factors determining potassium mass balance in bicarbonate haemodialysis.

    PubMed

    Basile, Carlo; Libutti, Pasquale; Lisi, Piero; Teutonico, Annalisa; Vernaglione, Luigi; Casucci, Francesco; Lomonte, Carlo

    2015-03-01

    One of the most important pathogenetic factors involved in the onset of intradialysis arrhythmias is the alteration in electrolyte concentration, particularly potassium (K(+)). Two studies were performed. Study A was designed above all to investigate the isolated effect of session duration on intradialysis K(+) mass balance (K(+)MB): 11 stable prevalent Caucasian anuric patients underwent one standard (∼4 h) and one long-hour (∼8 h) bicarbonate haemodialysis (HD) session. The two sessions were pair-matched for dialysate and blood volume processed (90 L) and ultrafiltration volume. Study B was designed to identify and rank the other factors determining intradialysis K(+)MB: 63 stable prevalent Caucasian anuric patients underwent one 4-h standard bicarbonate HD session. Dialysate K(+) concentration was 2.0 mmol/L in both studies. Blood samples were obtained from the inlet blood tubing immediately before the onset of dialysis, at t60, t120 and t180 min, and at the end of the 4- and 8-h sessions for the measurement of plasma K(+), blood bicarbonates and blood pH. Additional blood samples were obtained at t360 min for the 8-h sessions. Direct dialysate quantification was utilized for K(+)MBs. Direct potentiometry with an ion-selective electrode was used for K(+) measurements. Study A: mean K(+)MBs were significantly higher in the 8-h sessions (4 h: -88.4 ± 23.2 SD mmol versus 8 h: -101.9 ± 32.2 mmol; P = 0.02). Bivariate linear regression analyses showed that only mean plasma K(+), the area under the curve (AUC) of the hourly inlet dialyser diffusion concentration gradient of K(+) (hcgAUCK(+)), the AUC of blood bicarbonates and mean blood bicarbonates were significantly related to K(+)MB in both 4- and 8-h sessions. A multiple linear regression output with K(+)MB as the dependent variable showed that only mean plasma K(+), hcgAUCK(+) and duration of HD sessions per se remained statistically significant. Study B: mean K(+)MBs were -86.7 ± 22.6 mmol. 
Bivariate linear regression analyses showed that only mean plasma K(+), hcgAUCK(+) and mean blood bicarbonates were significantly related to K(+)MB. Again, only mean plasma K(+) and hcgAUCK(+) predicted K(+)MB in the multiple linear regression analysis. Our studies enabled us to establish the ranking of factors determining intradialysis K(+)MB: the plasma K(+) → dialysate K(+) gradient is the main determinant; acid-base balance plays a much less important role. The duration of the HD session per se is an independent determinant of K(+)MB. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  16. Determination of osthol and its metabolites in a phase I reaction system and the Caco-2 cell model by HPLC-UV and LC-MS/MS

    PubMed Central

    Yuan, Zhenting; Xu, Haiyan; Wang, Ke; Zhao, Zhonghua; Hu, Ming

    2012-01-01

    A straightforward and sensitive reversed-phase high-performance liquid chromatography (HPLC) assay was developed and validated for the analysis of osthol and its phase I metabolites (internal standard: umbelliferone). The method was validated for the determination of osthol with respect to selectivity, precision, linearity, limit of detection, recovery, and stability. The linear response range was 0.47–60 μM, and the average recoveries ranged from 98 to 101%. The inter-day and intra-day relative standard deviations were both less than 5%. Using this method, we showed that more than 80% of osthol was metabolized in 20 min in a phase I metabolic reaction system. Transport experiments in the Caco-2 cell culture model indicated that osthol was easily absorbed, with high absorptive permeability (>10×10(-6) cm/sec). The permeability did not display concentration- or vectorial-dependence and was only mildly temperature-sensitive (activation energy less than 10 kcal/mol), indicating a passive transport mechanism. When analyzed by LC-MS/MS, five metabolites were detected in a phase I reaction system and in the receiver side of a modified Caco-2 cell model, which was supplemented with the phase I reaction system. The major metabolites appeared to be desmethyl-osthol and multiple isomers of dehydro-osthol. In conclusion, a likely cause of poor osthol bioavailability is rapid phase I metabolism via the cytochrome P-450 pathways. PMID:19304430

  17. Methodology for the development of normative data for Spanish-speaking pediatric populations.

    PubMed

    Rivera, D; Arango-Lasprilla, J C

    2017-01-01

    To describe the methodology utilized to calculate reliability and to generate norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were to be between 6 and 17 years of age, to have an Intelligence Quotient ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and to score <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and standard deviations of residual values. Age, age(2), sex, and mean level of parental education (MLPE) were included as predictors in the models by country. Non-significant variables (p > 0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.
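
    The regression-based norming method described above can be sketched as follows; the sample, predictors and coefficients are all synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic normative sample (all coefficients hypothetical)
rng = np.random.default_rng(1)
n = 400
age = rng.uniform(6, 17, n)
sex = rng.integers(0, 2, n).astype(float)
mlpe = rng.integers(1, 6, n).astype(float)        # mean level of parental education
raw = 10 + 2.0 * age + 1.5 * mlpe + rng.normal(0, 4, n)   # raw test score

# Regress raw scores on age, age^2, sex and MLPE, as in the norming models
X = np.column_stack([age, age**2, sex, mlpe])
model = LinearRegression().fit(X, raw)
resid_sd = (raw - model.predict(X)).std(ddof=X.shape[1] + 1)

def normed_z(score, age, sex, mlpe):
    """Standardized score: (observed - demographically predicted) / residual SD."""
    pred = model.predict(np.array([[age, age**2, sex, mlpe]]))[0]
    return (score - pred) / resid_sd
```

    A child's norm is then a continuous z-score relative to peers with the same demographic profile, rather than a lookup in coarse age-band tables.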

  18. Associations Between Physician Empathy, Physician Characteristics, and Standardized Measures of Patient Experience.

    PubMed

    Chaitoff, Alexander; Sun, Bob; Windover, Amy; Bokar, Daniel; Featherall, Joseph; Rothberg, Michael B; Misra-Hebert, Anita D

    2017-10-01

    To identify correlates of physician empathy and determine whether physician empathy is related to standardized measures of patient experience. Demographic, professional, and empathy data were collected during 2013-2015 from Cleveland Clinic Health System physicians prior to participation in mandatory communication skills training. Empathy was assessed using the Jefferson Scale of Empathy. Data were also collected for seven measures (six provider communication items and overall provider rating) from the visit-specific and 12-month Consumer Assessment of Healthcare Providers and Systems Clinician and Group (CG-CAHPS) surveys. Associations between empathy and provider characteristics were assessed by linear regression, ANOVA, or a nonparametric equivalent. Significant predictors were included in a multivariable linear regression model. Correlations between empathy and CG-CAHPS scores were assessed using Spearman rank correlation coefficients. In bivariable analysis (n = 847 physicians), female sex (P < .001), specialty (P < .01), outpatient practice setting (P < .05), and DO degree (P < .05) were associated with higher empathy scores. In multivariable analysis, female sex (P < .001) and four specialties (obstetrics-gynecology, pediatrics, psychiatry, and thoracic surgery; all P < .05) were significantly associated with higher empathy scores. Of the seven CG-CAHPS measures, scores on five for the 583 physicians with visit-specific data and on three for the 277 physicians with 12-month data were positively correlated with empathy. Specialty and sex were independently associated with physician empathy. Empathy was correlated with higher scores on multiple CG-CAHPS items, suggesting improving physician empathy might play a role in improving patient experience.
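
    The Spearman rank correlation used to relate empathy to CG-CAHPS scores is a one-liner with SciPy; the data below are hypothetical, chosen only to show a weak monotone association like the one reported:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: Jefferson-scale empathy vs one CG-CAHPS communication item
rng = np.random.default_rng(3)
empathy = rng.normal(110, 12, 300)                       # empathy scores
cahps_item = 0.02 * empathy + rng.normal(0, 0.5, 300)    # weakly related item score

rho, pval = spearmanr(empathy, cahps_item)               # rank correlation and p-value
```

    Spearman's rho is appropriate here because CG-CAHPS item scores are ordinal and need not relate linearly to the empathy scale.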

  19. QSAR, docking and ADMET studies of artemisinin derivatives for antimalarial activity targeting plasmepsin II, a hemoglobin-degrading enzyme from P. falciparum.

    PubMed

    Qidwai, Tabish; Yadav, Dharmendra K; Khan, Feroz; Dhawan, Sangeeta; Bhakuni, R S

    2012-01-01

    This work presents the development of a quantitative structure-activity relationship (QSAR) model to predict the antimalarial activity of artemisinin derivatives. The structures of the molecules are represented by chemical descriptors that encode topological, geometric, and electronic structure features. Screening through the QSAR model suggested that compounds A24, A24a, A53, A54, A62 and A64 possess significant antimalarial activity. A linear model was developed by the multiple linear regression method to link structures to their reported antimalarial activity. The correlation in terms of the regression coefficient (r(2)) was 0.90 and the prediction accuracy of the model in terms of the cross-validation regression coefficient (rCV(2)) was 0.82. This study indicates that chemical properties, viz. atom count (all atoms), connectivity index (order 1, standard), ring count (all rings), shape index (basic kappa, order 2), and solvent accessibility surface area, are well correlated with antimalarial activity. The docking study showed high binding affinity of the predicted active compounds against the antimalarial target plasmepsin II (Plm-II). Further studies of oral bioavailability, ADMET and toxicity risk assessment suggest that compounds A24, A24a, A53, A54, A62 and A64 exhibit marked antimalarial activity comparable to standard antimalarial drugs. Later, one of the predicted active compounds, A64, was chemically synthesized, its structure elucidated by NMR, and tested in vivo in mice infected with a multidrug-resistant strain of Plasmodium yoelii nigeriensis. The experimental results obtained agreed well with the predicted values.
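
    The r(2)/rCV(2) pair reported above corresponds to a fitted coefficient of determination and its leave-one-out cross-validated analogue. A sketch with scikit-learn, on a toy descriptor matrix rather than the paper's descriptors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Toy descriptor matrix: 40 hypothetical compounds x 5 descriptors
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))
true_w = np.array([1.2, -0.8, 0.5, 0.3, -0.4])
y = X @ true_w + rng.normal(0, 0.3, 40)          # simulated activity values

model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))               # fitted r^2
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r2_cv = r2_score(y, y_loo)                       # leave-one-out cross-validated r^2
```

    The gap between r2 and r2_cv indicates how much of the fit is overfitting; a QSAR model is usually only trusted when both remain high, as with the 0.90/0.82 pair in this record.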

  20. AITRAC: Augmented Interactive Transient Radiation Analysis by Computer. User's information manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-10-01

    AITRAC is a program designed for on-line, interactive, DC, and transient analysis of electronic circuits. The program solves the linear and nonlinear simultaneous equations which characterize the mathematical models used to predict circuit response. The program features 100-external-node, 200-branch capability; a conversational, free-format input language; built-in junction, FET, MOS, and switch models; a sparse matrix algorithm with extended-precision H matrix and T vector calculations, for fast and accurate execution; linear transconductances: beta, GM, MU, ZM; accurate and fast radiation effects analysis; a special interface for user-defined equations; selective control of multiple outputs; graphical outputs in wide and narrow formats; and on-line parameter modification capability. The user describes the problem by entering the circuit topology and part parameters. The program then automatically generates and solves the circuit equations, providing the user with printed or plotted output. The circuit topology and/or part values may then be changed by the user, and a new analysis requested. Circuit descriptions may be saved on disk files for storage and later use. The program contains built-in standard models for resistors, voltage and current sources, capacitors, inductors including mutual couplings, switches, junction diodes and transistors, FETs, and MOS devices. Nonstandard models may be constructed from standard models or by using the special equations interface. Time functions may be described by straight-line segments or by sine, damped sine, and exponential functions. 42 figures, 1 table. (RWR)

  1. Liquid chromatographic tandem mass spectrometric assay for quantification of 97/78 and its metabolite 97/63: a promising trioxane antimalarial in monkey plasma.

    PubMed

    Singh, R P; Sabarinath, S; Gautam, N; Gupta, R C; Singh, S K

    2009-07-15

    The present manuscript describes development and validation of LC-MS/MS assay for the simultaneous quantitation of 97/78 and its active in-vivo metabolite 97/63 in monkey plasma using alpha-arteether as internal standard (IS). The method involves a single step protein precipitation using acetonitrile as extraction method. The analytes were separated on a Columbus C(18) (50 mm x 2 mm i.d., 5 microm particle size) column by isocratic elution with acetonitrile:ammonium acetate buffer (pH 4, 10 mM) (80:20 v/v) at a flow rate of 0.45 mL/min, and analyzed by mass spectrometry in multiple reaction-monitoring (MRM) positive ion mode. The chromatographic run time was 4.0 min and the weighted (1/x(2)) calibration curves were linear over a range of 1.56-200 ng/mL. The method was linear for both the analytes with correlation coefficients >0.995. The intra-day and inter-day accuracy (% bias) and precisions (% RSD) of the assay were less than 6.27%. Both analytes were stable after three freeze-thaw cycles (% deviation <8.2) and also for 30 days in plasma (% deviation <6.7). The absolute recoveries of 97/78, 97/63 and internal standard (IS), from spiked plasma samples were >90%. The validated assay method, described here, was successfully applied to the pharmacokinetic study of 97/78 and its active in-vivo metabolite 97/63 in Rhesus monkeys.

  2. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    PubMed

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the −2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
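
    The OLS-versus-random-intercept comparison above can be sketched with statsmodels; the plot-clustered data and coefficients below are synthetic, and the variance functions and AR/ARMA/CS correlation structures from the study are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic plot-clustered data (all coefficients hypothetical)
rng = np.random.default_rng(4)
plot_id = np.repeat(np.arange(20), 25)               # 20 plots x 25 trees
plot_re = rng.normal(0, 0.5, 20)[plot_id]            # random plot-level intercepts
dbh = rng.uniform(5, 40, plot_id.size)               # diameter at breast height (cm)
cw = 0.8 + 0.12 * dbh + plot_re + rng.normal(0, 0.3, plot_id.size)  # crown width

df = pd.DataFrame({"cw": cw, "dbh": dbh, "plot": plot_id})
ols_fit = smf.ols("cw ~ dbh", data=df).fit()                      # ignores clustering
lme_fit = smf.mixedlm("cw ~ dbh", data=df, groups=df["plot"]).fit()  # random intercepts
```

    The mixed model's estimated group variance (lme_fit.cov_re) captures the between-plot spread that OLS folds into its residual, which is why the LME model typically yields better AMR/RMSE on clustered forestry data.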

  3. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  4. UFLC-ESI-MS/MS analysis of multiple mycotoxins in medicinal and edible Areca catechu.

    PubMed

    Liu, Hongmei; Luo, Jiaoyang; Kong, Weijun; Liu, Qiutao; Hu, Yichen; Yang, Meihua

    2016-05-01

    A robust, sensitive and reliable ultra fast liquid chromatography combined with electrospray ionization tandem mass spectrometry (UFLC-ESI-MS/MS) was optimized and validated for simultaneous identification and quantification of eleven mycotoxins in medicinal and edible Areca catechu, based on one-step extraction without any further clean-up. Separation and quantification were performed in both positive and negative modes under multiple reaction monitoring (MRM) in a single run with zearalanone (ZAN) as internal standard. The chromatographic conditions and MS/MS parameters were carefully optimized. Matrix-matched calibration was recommended to reduce matrix effects and improve accuracy, showing good linearity within wide concentration ranges. Limits of quantification (LOQ) were lower than 50 μg kg(-1), while limits of detection (LOD) were in the range of 0.1-20 μg kg(-1). The accuracy of the developed method was validated for recoveries, ranging from 85% to 115% with relative standard deviation (RSD) ≤14.87% at low level, from 75% to 119% with RSD ≤ 14.43% at medium level and from 61% to 120% with RSD ≤ 13.18% at high level, respectively. Finally, the developed multi-mycotoxin method was applied for screening of these mycotoxins in 24 commercial samples. Only aflatoxin B2 and zearalenone were found in 2 samples. This is the first report on the application of UFLC-ESI(+/-)-MS/MS for multi-class mycotoxins in A. catechu. The developed method with many advantages of simple pretreatment, rapid determination and high sensitivity is a proposed candidate for large-scale detection and quantification of multiple mycotoxins in other complex matrixes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    PubMed

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
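
    The CE-selection step reduces, in essence, to picking the stepped collision energy that maximizes the extracted-ion intensity of each transition. A toy sketch with entirely hypothetical intensities:

```python
import numpy as np

# Hypothetical sMS(All) readout: intensity of one fragment ion at stepped CE values
ce_steps = np.arange(5, 55, 5)             # collision energies, 5-50 eV in 5 eV steps
intensity = np.array([120, 480, 2100, 5600, 8900, 8200, 6100, 3300, 1500, 600])

optimal_ce = ce_steps[np.argmax(intensity)]  # CE giving the strongest fragment signal
print(optimal_ce)   # → 25
```

    In practice this argmax is evaluated per ion pair across the serial MS(All) scans of one cycle, so all transitions are optimized in a single run without authentic standards.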

  6. Linear summation of outputs in a balanced network model of motor cortex.

    PubMed

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.

  7. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of three-dimensional multiple-actuated-wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input and single-output controller and a multi-input and multi-output controller are designed based on the unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  8. Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.

    PubMed

    Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen

    2015-05-01

    Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
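
    A deliberately simplified sketch of multiple imputation for a predictor censored below a detection limit: each censored value is drawn from an assumed truncated distribution, the model is refit per imputed data set, and the estimates are pooled. This imputation ignores the outcome, so it is not the authors' proper MI estimator, only an illustration of the mechanics on synthetic data:

```python
import numpy as np
from scipy.stats import truncnorm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(0, 1, n)                              # true biomarker, assumed N(0,1)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
y = (rng.uniform(size=n) < p).astype(int)            # binary outcome

lod = -0.5
below = x < lod                                      # values below the detection limit

betas = []
for _ in range(20):                                  # 20 imputed data sets
    x_imp = x.copy()
    # draw censored values from the assumed N(0,1), truncated above at the LOD
    x_imp[below] = truncnorm.rvs(-np.inf, lod, loc=0.0, scale=1.0,
                                 size=below.sum(), random_state=rng)
    fit = LogisticRegression().fit(x_imp.reshape(-1, 1), y)
    betas.append(fit.coef_[0, 0])
beta_pooled = float(np.mean(betas))                  # pooled point estimate
```

    A proper implementation would also combine within- and between-imputation variances via Rubin's rules and condition the imputation model on the outcome, which is what distinguishes the consistent estimator proposed in the paper from naive single imputation at, say, LOD/2.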

  9. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5(n) equilibrium points located in ℝ(n), and 3(n) of them are locally μ-stable. As a direct application, some criteria are also obtained on multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with the activation functions introduced in this paper can generate greater storage capacity than ones with the Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Electrostatic turbulence in the earth's central plasma sheet produced by multiple-ring ion distributions

    NASA Technical Reports Server (NTRS)

    Huba, J. D.; Chen, J.; Anderson, R. R.

    1992-01-01

    Attention is given to a mechanism to generate a broad spectrum of electrostatic turbulence in the quiet time central plasma sheet (CPS) plasma. It is shown theoretically that multiple-ring ion distributions can generate short-wavelength (less than about 1), electrostatic turbulence with frequencies less than about kV_j, where V_j is the velocity of the jth ring. On the basis of a set of parameters from measurements made in the CPS, it is found that electrostatic turbulence can be generated with wavenumbers in the range of 0.02 to 1.0, with real frequencies in the range of 0 to 10, and with linear growth rates greater than 0.01 over a broad range of angles relative to the magnetic field (5-90 deg). These theoretical results are compared with wave data from ISEE 1 using an ion distribution function exhibiting multiple-ring structures observed at the same time. The theoretical results in the linear regime are found to be consistent with the wave data.

  11. Digital processing of array seismic recordings

    USGS Publications Warehouse

    Ryall, Alan; Birtill, John

    1962-01-01

    This technical letter contains a brief review of the operations which are involved in digital processing of array seismic recordings by the methods of velocity filtering, summation, cross-multiplication and integration, and by combinations of these operations (the "UK Method" and multiple correlation). Examples are presented of analyses by the several techniques on array recordings which were obtained by the U.S. Geological Survey during chemical and nuclear explosions in the western United States. Seismograms are synthesized using actual noise and Pn-signal recordings, such that the signal-to-noise ratio, onset time and velocity of the signal are predetermined for the synthetic record. These records are then analyzed by summation, cross-multiplication, multiple correlation and the UK technique, and the results are compared. For all of the examples presented, analyses by the non-linear techniques of multiple correlation and cross-multiplication of the traces on an array recording are preferred to analyses by the linear operations involved in summation and the UK Method.
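
    The linear (summation) and non-linear (cross-multiplication) array operations compared above can be illustrated on synthetic traces. The wavelet, moveout delays, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 4-sensor array recording: a common wavelet arrives with a known
# linear moveout across the array; each trace adds independent noise.
nt, fs = 400, 100.0
t = np.arange(nt) / fs
wavelet = np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 5 * t)
delays = [0, 3, 6, 9]                     # moveout in samples per sensor
traces = np.array([np.roll(wavelet, d) for d in delays])
traces += 0.2 * rng.normal(size=traces.shape)

# Velocity filtering: undo each trace's moveout so the signal lines up.
aligned = np.array([np.roll(tr, -d) for tr, d in zip(traces, delays)])

summed = aligned.mean(axis=0)             # linear operation: delay-and-sum
crossmult = np.prod(aligned, axis=0)      # non-linear: cross-multiplication
```

The product suppresses incoherent noise far more sharply than the mean, which is the behaviour behind the letter's preference for the non-linear techniques.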

  12. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP, and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP, and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
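
    The "equivalent standard linear program" that semilinear programming avoids constructing can be made concrete for L1 estimation. The variable-splitting reduction below is the standard textbook construction, with illustrative problem sizes; it shows the blow-up in variables and constraints that the semilinear formulation sidesteps.

```python
import numpy as np

# L1 (least absolute deviations) regression: minimize sum_i |y_i - X_i b|.
# Standard-LP equivalent via variable splitting, with made-up sizes/data.
n, p = 6, 2
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=n)

# Split the free coefficients b = b_pos - b_neg and residuals r = u - v,
# all nonnegative.  Decision vector z = [b_pos, b_neg, u, v].
#   minimize    sum(u) + sum(v)
#   subject to  X b_pos - X b_neg + u - v = y,   z >= 0
c = np.concatenate([np.zeros(p), np.zeros(p), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
b_eq = y

# The standard LP carries 2p + 2n variables and n equality constraints,
# whereas the semilinear formulation works with just the p original
# variables and sign-dependent costs.
```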

  13. BIODEGRADATION PROBABILITY PROGRAM (BIODEG)

    EPA Science Inventory

    The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...

  14. The Use of Linear Programming for Prediction.

    ERIC Educational Resources Information Center

    Schnittjer, Carl J.

    The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)

  15. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  16. The Primordial Inflation Explorer (PIXIE)

    NASA Technical Reports Server (NTRS)

    Kogut, Alan; Chuss, David T.; Dotson, Jessie; Dwek, Eli; Fixsen, Dale J.; Halpern, Mark; Hinshaw, Gary F.; Meyer, Stephan; Moseley, S. Harvey; Seiffert, Michael D.

    2014-01-01

    The Primordial Inflation Explorer is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded non-imaging optics feed a polarizing Fourier Transform Spectrometer to produce a set of interference fringes, proportional to the difference spectrum between orthogonal linear polarizations from the two input beams. Multiple levels of symmetry and signal modulation combine to reduce the instrumental signature and confusion from unpolarized sources to negligible levels. PIXIE will map the full sky in Stokes I, Q, and U parameters with angular resolution 2.6 deg and sensitivity 0.2 µK per 1 deg square pixel. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r less than 10^-3 at 5 standard deviations. In addition, PIXIE will measure the absolute frequency spectrum to constrain physical processes ranging from inflation to the nature of the first stars to the physical conditions within the interstellar medium of the Galaxy. We describe the PIXIE instrument and mission architecture with an emphasis on the expected level of systematic error suppression.

  17. Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity

    PubMed Central

    Neri, Peter

    2010-01-01

    Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing. PMID:21212835
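
    The contrast between the two observer models can be sketched numerically. The one-hot templates, cubic transform, and fixed noise sample below are illustrative stand-ins, not the paper's fitted model.

```python
import numpy as np

# Toy detection-under-uncertainty sketch: the target may appear at one of
# several locations, with one template per candidate location.
n_loc = 4
templates = np.eye(n_loc)               # one-hot location templates

def max_model(stimulus):
    """Late dynamic non-linearity: match every template, then take the MAX."""
    return np.max(templates @ stimulus)

def early_static_model(stimulus, expon=3.0):
    """Early static non-linearity: an accelerating point-wise transform is
    applied to each location's input BEFORE template matching; the pooling
    over template responses is then linear."""
    transformed = np.sign(stimulus) * np.abs(stimulus) ** expon
    return np.sum(templates @ transformed)

noise = np.array([0.1, -0.2, 0.05, 0.15])   # fixed "noise" sample
absent = noise
present = noise.copy()
present[1] += 1.0                            # signal at an uncertain location

# Both observers yield a larger decision variable when the signal is
# present; they differ only in where the non-linearity sits.
```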

  18. Multiresidue analysis of 36 pesticides in soil using a modified quick, easy, cheap, effective, rugged, and safe method by liquid chromatography with tandem quadrupole linear ion trap mass spectrometry.

    PubMed

    Feng, Xue; He, Zeying; Wang, Lu; Peng, Yi; Luo, Ming; Liu, Xiaowei

    2015-09-01

    A new method for the simultaneous determination of 36 pesticides, including 15 organophosphorus, six carbamate, and some other pesticides in soil was developed by liquid chromatography with tandem quadrupole linear ion trap mass spectrometry. The extraction and clean-up steps were optimized based on the quick, easy, cheap, effective, rugged, and safe method. The data were acquired in multiple reaction monitoring mode combined with enhanced product ion scans to increase confidence in the analytical results. Validation experiments were performed in soil samples. The average recoveries of pesticides at four spiking levels (1, 5, 50, and 100 μg/kg) ranged from 63 to 126% with relative standard deviation below 20%. The limits of detection of pesticides were 0.04-0.8 μg/kg, and the limits of quantification were 0.1-2.6 μg/kg. The correlation coefficients (r²) were higher than 0.990 in the linearity range of 0.5-200 μg/L for most of the pesticides. The method allowed for the analysis of the target pesticides in the lower μg/kg concentration range. The optimized method was then applied to real soil samples obtained from several areas in China, confirming the feasibility of the method. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Response of an Impact Test Apparatus for Fall Protective Headgear Testing Using a Hybrid-III Head/Neck Assembly

    PubMed Central

    Caccese, V.; Ferguson, J.; Lloyd, J.; Edgecomb, M.; Seidi, M.; Hajiaghamemar, M.

    2017-01-01

    A test method based upon a Hybrid-III head and neck assembly that includes measurement of both linear and angular acceleration is investigated for potential use in impact testing of protective headgear. The test apparatus is based upon a twin wire drop test system modified with the head/neck assembly and associated flyarm components. This study represents a preliminary assessment of the test apparatus for use in the development of protective headgear designed to prevent injury due to falls. By including angular acceleration in the test protocol it becomes possible to assess and intentionally reduce this component of acceleration. Comparisons of standard and reduced durometer necks, various anvils, front, rear, and side drop orientations, and response data on performance of the apparatus are provided. Injury measures summarized for an unprotected drop include maximum linear and angular acceleration, head injury criteria (HIC), rotational injury criteria (RIC), and power rotational head injury criteria (PRHIC). Coefficient of variation for multiple drops ranged from 0.4 to 6.7% for linear acceleration. Angular acceleration recorded in a side drop orientation resulted in highest coefficient of variation of 16.3%. The drop test apparatus results in a reasonably repeatable test method that has potential to be used in studies of headgear designed to reduce head impact injury. PMID:28216804

  20. The effect of human immunodeficiency virus type 1 antibody status on military applicant aptitude test scores.

    PubMed

    Arday, D R; Brundage, J F; Gardner, L I; Goldenbaum, M; Wann, F; Wright, S

    1991-06-15

    The authors conducted a population-based study to attempt to estimate the effect of human immunodeficiency virus type 1 (HIV-1) seropositivity on Armed Services Vocational Aptitude Battery test scores in otherwise healthy individuals with early HIV-1 infection. The Armed Services Vocational Aptitude Battery is a 10-test written multiple aptitude battery administered to all civilian applicants for military enlistment prior to serologic screening for HIV-1 antibodies. A total of 975,489 induction testing records containing both Armed Services Vocational Aptitude Battery and HIV-1 results from October 1985 through March 1987 were examined. An analysis data set (n = 7,698) was constructed by choosing five controls for each of the 1,283 HIV-1-positive cases, matched on five-digit ZIP code, and a multiple linear regression analysis was performed to control for demographic and other factors that might influence test scores. Years of education was the strongest predictor of test scores, raising an applicant's score on a composite test nearly 0.16 standard deviation per year. The HIV-1-positive effect on the composite score was -0.09 standard deviation (99% confidence interval -0.17 to -0.02). Separate regressions on each component test within the battery showed HIV-1 effects between -0.39 and +0.06 standard deviation. The two Armed Services Vocational Aptitude Battery component tests felt a priori to be the most sensitive to HIV-1-positive status showed the least decrease with seropositivity. Much of the variability in test scores was not predicted by either HIV-1 serostatus or the demographic and other factors included in the model. There appeared to be little evidence of a strong HIV-1 effect.

  1. Simultaneous determination of multiple angiotensin type 1 receptor antagonists and its application to high-throughput pharmacokinetic study

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoyan; Sun, Jianguo; Hao, Haiping; Wang, Guangji; Hu, Xiaoling; Lv, Hua; Gu, Shenghua; Wu, Xiaoming; Xu, Jinyi

    2008-05-01

    A rapid and sensitive high performance liquid chromatography-electrospray tandem mass spectrometry (HPLC-ESI-MS/MS) method was developed for the simultaneous determination of multiple angiotensin type 1 receptor antagonists (AT1RAs) WX472, WX581, 1b and telmisartan in rat plasma for the purpose of high-throughput pharmacokinetic screening. The method was operated under selected reaction monitoring (SRM) in the positive ion mode. The analytes and the internal standard (pitavastatin) were extracted from 100 μL rat plasma under acidic conditions by liquid-liquid extraction with ethyl acetate. The analytes and internal standard were baseline separated on a Gemini analytical column (3 μm, 150 mm × 2.0 mm) with a gradient elution using acetonitrile and 0.05% aqueous formic acid. The standard curves were linear in the concentration ranges of 4.5-900 ng/mL for WX472, 5-1000 ng/mL for WX581 and 0.5-100 ng/mL for 1b and telmisartan. Intra- and inter-batch precisions (R.S.D.%) were all within 15%, and the method showed good accuracy (R.E.%). Recoveries were found to be >65% for all the compounds and no obvious matrix effects were found. This method has been successfully applied to the high-throughput pharmacokinetic screening of both cassette dosing and cassette analysis of the four compounds in rats. Significant drug-drug interactions were observed after cassette dosing. The study suggested that cassette analysis of pooled samples would be a better choice for the high-throughput pharmacokinetic screening of angiotensin type 1 receptor antagonists.

  2. Multiple reaction monitoring assay based on conventional liquid chromatography and electrospray ionization for simultaneous monitoring of multiple cerebrospinal fluid biomarker candidates for Alzheimer's disease

    PubMed Central

    Choi, Yong Seok; Lee, Kelvin H.

    2016-01-01

    Alzheimer's disease (AD) is the most common type of dementia, but early and accurate diagnosis remains challenging. Previously, a panel of cerebrospinal fluid (CSF) biomarker candidates distinguishing AD and non-AD CSF accurately (>90%) was reported. Furthermore, a multiple reaction monitoring (MRM) assay based on nano liquid chromatography tandem mass spectrometry (nLC-MS/MS) was developed to help validate putative AD CSF biomarker candidates including proteins from the panel. Despite the good performance of the MRM assay, wide acceptance may be challenging because of the limited availability of nLC-MS/MS systems in laboratories. Thus, here, a new MRM assay based on conventional LC-MS/MS is presented. This method monitors 16 peptides representing 16 (of 23) biomarker candidates that belonged to the previous AD CSF panel. A 30-times more concentrated sample than the sample used for the previous study was loaded onto a high capacity trap column, and all 16 MRM transitions showed good linearity (average R² = 0.966), intra-day reproducibility (average coefficient of variance (CV) = 4.78%), and inter-day reproducibility (average CV = 9.85%). The present method has several advantages such as a shorter analysis time, no possibility of target variability, and no need for an internal standard. PMID:26404792

  3. Analyzing latent state-trait and multiple-indicator latent growth curve models as multilevel structural equation models

    PubMed Central

    Geiser, Christian; Bishop, Jacob; Lockhart, Ginger; Shiffman, Saul; Grenard, Jerry L.

    2013-01-01

    Latent state-trait (LST) and latent growth curve (LGC) models are frequently used in the analysis of longitudinal data. Although it is well-known that standard single-indicator LGC models can be analyzed within either the structural equation modeling (SEM) or multilevel (ML; hierarchical linear modeling) frameworks, few researchers realize that LST and multivariate LGC models, which use multiple indicators at each time point, can also be specified as ML models. In the present paper, we demonstrate that using the ML-SEM rather than the SL-SEM framework to estimate the parameters of these models can be practical when the study involves (1) a large number of time points, (2) individually-varying times of observation, (3) unequally spaced time intervals, and/or (4) incomplete data. Despite the practical advantages of the ML-SEM approach under these circumstances, there are also some limitations that researchers should consider. We present an application to an ecological momentary assessment study (N = 158 youths with an average of 23.49 observations of positive mood per person) using the software Mplus (Muthén and Muthén, 1998–2012) and discuss advantages and disadvantages of using the ML-SEM approach to estimate the parameters of LST and multiple-indicator LGC models. PMID:24416023

  4. Maximizing the sensitivity and reliability of peptide identification in large-scale proteomic experiments by harnessing multiple search engines.

    PubMed

    Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D

    2010-03-01

    Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analyses demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
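
    The normalize-then-consensus idea can be reduced to a toy sketch. The engine names, peptide strings, scores, and the simple per-engine threshold standing in for decoy-based error-rate normalization are all invented for illustration.

```python
from collections import Counter

def consensus_assignments(engine_hits, thresholds, min_agree=2):
    """Accept a (spectrum, peptide) assignment when at least `min_agree`
    engines propose it with a score passing that engine's threshold."""
    votes = Counter()
    for engine, hits in engine_hits.items():
        for spectrum, (peptide, score) in hits.items():
            if score >= thresholds[engine]:     # stand-in for normalization
                votes[(spectrum, peptide)] += 1
    return {s: p for (s, p), n in votes.items() if n >= min_agree}

# Hypothetical hits from three engines for two spectra.
engine_hits = {
    "engineA": {"spec1": ("PEPTIDEK", 0.9), "spec2": ("AAAK", 0.4)},
    "engineB": {"spec1": ("PEPTIDEK", 0.8), "spec2": ("CCCK", 0.7)},
    "engineC": {"spec1": ("PEPTIDEK", 0.7)},
}
thresholds = {"engineA": 0.5, "engineB": 0.5, "engineC": 0.5}
accepted = consensus_assignments(engine_hits, thresholds)
# spec1 -> PEPTIDEK (three engines agree); spec2 reaches no consensus
```

Relaxing the thresholds or `min_agree` plays the role of the progressively relaxed false-discovery rates in the abstract.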

  5. The prediction of intelligence in preschool children using alternative models to regression.

    PubMed

    Finch, W Holmes; Chang, Mei; Davis, Andrew S; Holden, Jocelyn E; Rothlisberg, Barbara A; McIntosh, David E

    2011-12-01

    Statistical prediction of an outcome variable using multiple independent variables is a common practice in the social and behavioral sciences. For example, neuropsychologists are sometimes called upon to provide predictions of preinjury cognitive functioning for individuals who have suffered a traumatic brain injury. Typically, these predictions are made using standard multiple linear regression models with several demographic variables (e.g., gender, ethnicity, education level) as predictors. Prior research has shown conflicting evidence regarding the ability of such models to provide accurate predictions of outcome variables such as full-scale intelligence (FSIQ) test scores. The present study had two goals: (1) to demonstrate the utility of a set of alternative prediction methods that have been applied extensively in the natural sciences and business but have not been frequently explored in the social sciences and (2) to develop models that can be used to predict premorbid cognitive functioning in preschool children. Prediction of Stanford-Binet 5 FSIQ scores for preschool-aged children is used to compare the performance of a multiple regression model with several of these alternative methods. Results demonstrate that classification and regression trees provided more accurate predictions of FSIQ scores than did the more traditional regression approach. Implications of these results are discussed.
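
    The advantage regression trees can hold over linear regression on threshold-structured data is easy to illustrate with a one-split stump, the simplest regression tree. The step-shaped data-generating process below is hypothetical, chosen only to make the contrast visible.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical outcome with a threshold structure in one predictor.
x = rng.uniform(0, 1, size=200)
y = np.where(x < 0.5, 90.0, 110.0) + rng.normal(scale=2.0, size=200)

# Linear regression via least squares.
b1, b0 = np.polyfit(x, y, 1)
lin_pred = b0 + b1 * x

def fit_stump(x, y, n_grid=50):
    """One-split regression tree: choose the split minimizing total SSE."""
    best = None
    for s in np.linspace(x.min(), x.max(), n_grid)[1:-1]:
        left, right = y[x < s], y[x >= s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1:]

s, mu_left, mu_right = fit_stump(x, y)
stump_pred = np.where(x < s, mu_left, mu_right)

mse_lin = np.mean((lin_pred - y) ** 2)
mse_stump = np.mean((stump_pred - y) ** 2)
# The stump recovers the threshold and out-predicts the straight line here.
```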

  6. Spinal cord atrophy in anterior-posterior direction reflects impairment in multiple sclerosis.

    PubMed

    Lundell, H; Svolgaard, O; Dogonowski, A-M; Romme Christensen, J; Selleberg, F; Soelberg Sørensen, P; Blinkenberg, M; Siebner, H R; Garde, E

    2017-10-01

    To investigate how atrophy is distributed over the cross section of the upper cervical spinal cord and how this relates to functional impairment in multiple sclerosis (MS). We analysed the structural brain MRI scans of 54 patients with relapsing-remitting MS (n=22), primary progressive MS (n=9), secondary progressive MS (n=23) and 23 age- and sex-matched healthy controls. We measured the cross-sectional area (CSA), left-right width (LRW) and anterior-posterior width (APW) of the spinal cord at the segmental level C2. We tested for a nonparametric linear relationship between these atrophy measures and clinical impairments as reflected by the Expanded Disability Status Scale (EDSS) and Multiple Sclerosis Impairment Scale (MSIS). In patients with MS, CSA and APW but not LRW were reduced compared to healthy controls (P<.02) and showed significant correlations with EDSS, MSIS and specific MSIS subscores. In patients with MS, atrophy of the upper cervical cord is most evident in the antero-posterior direction. As APW of the cervical cord can be readily derived from standard structural MRI of the brain, APW constitutes a clinically useful neuroimaging marker of disease-related neurodegeneration in MS. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Lepidopteran larva consumption of soybean foliage: basis for developing multiple-species economic thresholds for pest management decisions.

    PubMed

    Bueno, Regiane Cristina Oliveira de Freitas; Bueno, Adeney de Freitas; Moscardi, Flávio; Parra, José Roberto Postali; Hoffmann-Campo, Clara Beatriz

    2011-02-01

    Defoliation by Anticarsia gemmatalis (Hübner), Pseudoplusia includens (Walker), Spodoptera eridania (Cramer), S. cosmioides (Walker) and S. frugiperda (JE Smith) (Lepidoptera: Noctuidae) was evaluated in four soybean genotypes. A multiple-species economic threshold (ET), based upon the species' feeding capacity, is proposed with the aim of improving growers' management decisions on when to initiate control measures for the species complex. Consumption by A. gemmatalis, S. cosmioides or S. eridania on different genotypes was similar. The highest consumption of P. includens was 92.7 cm(2) on Codetec 219RR; that of S. frugiperda was 118 cm(2) on Codetec 219RR and 115.1 cm(2) on MSoy 8787RR. The insect injury equivalent for S. cosmioides, calculated on the basis of insect consumption, was double the standard consumption by A. gemmatalis, and statistically different from the other species tested, which were similar to each other. As S. cosmioides always defoliated nearly twice the leaf area of the other species, the injury equivalent would be 2 for this lepidopteran species and 1 for the other species. The recommended multiple-species ET to trigger the beginning of insect control would then be 20 insect equivalents per linear metre. Copyright © 2010 Society of Chemical Industry.
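
    The proposed multiple-species threshold reduces to simple arithmetic: weight each species count by its injury equivalent (2 for S. cosmioides, 1 for the others) and compare the total to 20 equivalents per linear metre. A minimal sketch, with made-up field counts:

```python
# Injury equivalents and the action threshold from the proposed ET.
EQUIVALENTS = {
    "A. gemmatalis": 1,
    "P. includens": 1,
    "S. eridania": 1,
    "S. frugiperda": 1,
    "S. cosmioides": 2,
}
THRESHOLD = 20  # insect equivalents per linear metre

def control_needed(counts_per_metre):
    """Return the equivalent total and whether control should be triggered."""
    total = sum(EQUIVALENTS[sp] * n for sp, n in counts_per_metre.items())
    return total, total >= THRESHOLD

# Hypothetical scouting count: 8 A. gemmatalis + 7 S. cosmioides per metre.
total, act = control_needed({"A. gemmatalis": 8, "S. cosmioides": 7})
# 8*1 + 7*2 = 22 equivalents -> above the 20-equivalent threshold
```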

  8. Multiple reaction monitoring assay based on conventional liquid chromatography and electrospray ionization for simultaneous monitoring of multiple cerebrospinal fluid biomarker candidates for Alzheimer's disease.

    PubMed

    Choi, Yong Seok; Lee, Kelvin H

    2016-03-01

    Alzheimer's disease (AD) is the most common type of dementia, but early and accurate diagnosis remains challenging. Previously, a panel of cerebrospinal fluid (CSF) biomarker candidates distinguishing AD and non-AD CSF accurately (>90%) was reported. Furthermore, a multiple reaction monitoring (MRM) assay based on nano liquid chromatography tandem mass spectrometry (nLC-MS/MS) was developed to help validate putative AD CSF biomarker candidates including proteins from the panel. Despite the good performance of the MRM assay, wide acceptance may be challenging because of limited availability of nLC-MS/MS systems in laboratories. Thus, here, a new MRM assay based on conventional LC-MS/MS is presented. This method monitors 16 peptides representing 16 (of 23) biomarker candidates that belonged to the previous AD CSF panel. A 30-times more concentrated sample than the sample used for the previous study was loaded onto a high capacity trap column, and all 16 MRM transitions showed good linearity (average R² = 0.966), intra-day reproducibility (average coefficient of variance (CV) = 4.78%), and inter-day reproducibility (average CV = 9.85%). The present method has several advantages such as a shorter analysis time, no possibility of target variability, and no need for an internal standard.

  9. Vanilla technicolor at linear colliders

    NASA Astrophysics Data System (ADS)

    Frandsen, Mads T.; Järvinen, Matti; Sannino, Francesco

    2011-08-01

    We analyze the reach of linear colliders for models of dynamical electroweak symmetry breaking. We show that linear colliders can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum energy in the center of mass of the colliding leptons. In particular, we analyze the Drell-Yan processes involving spin-one intermediate heavy bosons decaying either leptonically or into two standard model gauge bosons. We also analyze light Higgs production in association with a standard model gauge boson, likewise proceeding through an intermediate spin-one heavy vector.

  10. Accurate Solution of Multi-Region Continuum Biomolecule Electrostatic Problems Using the Linearized Poisson-Boltzmann Equation with Curved Boundary Elements

    PubMed Central

    Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce

    2009-01-01

    We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005

  11. [Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].

    PubMed

    He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo

    2010-01-01

    The aim was to determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (including bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse phase HPLC with an ultraviolet detector at 300 nm. Optimized conditions were applied for the pretreatment of the water samples and for the chromatographic separation. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 μg/L to 0.50 μg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements for simultaneous determination of multiple biphenyl ether herbicides in natural waters.

  12. Tomographic PIV: particles versus blobs

    NASA Astrophysics Data System (ADS)

    Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien

    2014-08-01

    We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, therefore it favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
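
    A minimal MART iteration over a generic linear forward model shows the multiplicative update the abstract refers to. The tiny weight matrix below is made up for the sketch; in the paper's approach the weights come from specific samples of the system's PSF.

```python
import numpy as np

def mart(W, i, n_iter=50, mu=1.0):
    """Multiplicative algebraic reconstruction for W @ e = i, with e the
    voxel intensities and i the pixel measurements; voxels stay nonnegative
    because every update is multiplicative."""
    e = np.ones(W.shape[1])                    # positive initial volume
    for _ in range(n_iter):
        for j in range(W.shape[0]):            # one update per pixel equation
            proj = W[j] @ e
            if proj > 0:
                # Scale the voxels seen by pixel j toward agreement with i[j].
                e *= (i[j] / proj) ** (mu * W[j])
    return e

# Illustrative 3-pixel / 3-voxel system with a consistent measurement.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
e_true = np.array([2.0, 0.5, 1.0])
i = W @ e_true
e_rec = mart(W, i)                             # converges to e_true here
```

Swapping the sequential per-pixel loop for a simultaneous update over all pixels gives the SMART variant mentioned above.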

  13. A geometrically based method for automated radiosurgery planning.

    PubMed

    Wagner, T H; Yi, T; Meeks, S L; Bova, F J; Brechner, B L; Chen, Y; Buatti, J M; Friedman, W A; Foote, K D; Bouchet, L G

    2000-12-01

    A geometrically based method of multiple isocenter linear accelerator radiosurgery treatment planning optimization was developed, based on a target's solid shape. Our method uses an edge detection process to determine the optimal sphere packing arrangement with which to cover the planning target. The sphere packing arrangement is converted into a radiosurgery treatment plan by substituting the isocenter locations and collimator sizes for the spheres. This method is demonstrated on a set of 5 irregularly shaped phantom targets, as well as a set of 10 clinical example cases ranging from simple to very complex in planning difficulty. Using a prototype implementation of the method and standard dosimetric radiosurgery treatment planning tools, feasible treatment plans were developed for each target. The treatment plans generated for the phantom targets showed excellent dose conformity and acceptable dose homogeneity within the target volume. The algorithm was able to generate a radiosurgery plan conforming to the Radiation Therapy Oncology Group (RTOG) guidelines on radiosurgery for every clinical and phantom target examined. This automated planning method can serve as a valuable tool to assist treatment planners in rapidly and consistently designing conformal multiple isocenter radiosurgery treatment plans.

  14. Exploring the Associations Among Nutrition, Science, and Mathematics Knowledge for an Integrative, Food-Based Curriculum.

    PubMed

    Stage, Virginia C; Kolasa, Kathryn M; Díaz, Sebastián R; Duffrin, Melani W

    2018-01-01

    Explore associations between nutrition, science, and mathematics knowledge to provide evidence that integrating food/nutrition education in the fourth-grade curriculum may support gains in academic knowledge. Secondary analysis of a quasi-experimental study. Sample included 438 students in 34 fourth-grade classrooms across North Carolina and Ohio; mean age 10 years old; gender (I = 53.2% female; C = 51.6% female). Dependent variable = post-test nutrition knowledge; independent variables = baseline nutrition knowledge and post-test science and mathematics knowledge. Analyses included descriptive statistics and multiple linear regression. The hypothesized model predicted post-test nutrition knowledge (F(437) = 149.4, p < .001; adjusted R² = .51). All independent variables were significant predictors with a positive association. Science and mathematics knowledge were predictive of nutrition knowledge, indicating that use of an integrative science and mathematics curriculum to improve academic knowledge may also simultaneously improve nutrition knowledge among fourth-grade students. Teachers can benefit from integration by meeting multiple academic standards, efficiently using limited classroom time, and increasing nutrition education provided in the classroom. © 2018, American School Health Association.

  15. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin's plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one developed by Intel MKL, and it is shown that by considering the properties of the sparse matrix, better algorithms can be developed.
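    A reference version of the sparse-by-dense product being optimized can be sketched in plain numpy over the standard CSR layout (the paper's actual algorithm exploits the Mindlin-plate matrix structure and the Xeon Phi vector engine, which this sketch does not):

    ```python
    import numpy as np

    def csr_matmul_dense(data, indices, indptr, B):
        """Multiply a CSR sparse matrix by a dense matrix B.

        data/indices/indptr follow the standard CSR layout: row i's nonzeros
        live in data[indptr[i]:indptr[i+1]] at columns indices[...].
        """
        n_rows = len(indptr) - 1
        C = np.zeros((n_rows, B.shape[1]))
        for i in range(n_rows):
            for k in range(indptr[i], indptr[i + 1]):
                # accumulate a scaled row of B: one fused multiply-add per nonzero
                C[i] += data[k] * B[indices[k]]
        return C

    # Sparse matrix [[2, 0, 1], [0, 3, 0]] in CSR form
    data    = np.array([2.0, 1.0, 3.0])
    indices = np.array([0, 2, 1])
    indptr  = np.array([0, 2, 3])
    B = np.arange(12.0).reshape(3, 4)
    print(csr_matmul_dense(data, indices, indptr, B))
    ```

    The inner loop touches whole rows of B, which is exactly the memory-access pattern a vectorized implementation would tile and stream.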

  16. Multiple component end-member mixing model of dilution: hydrochemical effects of construction water at Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2008-12-01

    The standard dual-component, two-member linear mixing model is often used to quantify the mixing of water from different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which are therefore diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the fracture-matrix interaction. The methodology and formulations described here are applicable to many types of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
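    The core idea, recovering both the mixing fraction and an unknown dilution from multiple components at once, can be sketched as a small least-squares problem. All concentrations below are hypothetical (not Yucca Mountain data), and the linearization m_k = a·C1_k + b·C2_k with a = d·f, b = d·(1−f) is one straightforward way to solve this class of model, not necessarily the authors' exact formulation:

    ```python
    import numpy as np

    # Hypothetical end-member concentrations (rows = chemical components)
    c_construction = np.array([120.0, 35.0, 4.0, 0.9])
    c_porewater    = np.array([10.0, 80.0, 1.5, 0.2])

    # Simulated leachate: 30% construction water, 70% pore water, diluted 5x
    f_true, d_true = 0.3, 0.2          # d = 1/dilution factor
    measured = d_true * (f_true * c_construction + (1 - f_true) * c_porewater)

    # m_k = a*C1_k + b*C2_k with a = d*f, b = d*(1-f): a plain least-squares fit
    A = np.column_stack([c_construction, c_porewater])
    (a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)
    d_est = a + b                      # recovered dilution
    f_est = a / (a + b)                # recovered mixing fraction
    print(round(f_est, 3), round(d_est, 3))   # → 0.3 0.2
    ```

    With more components than the two unknowns, the overdetermined fit also averages out measurement noise, which is the practical benefit of the multicomponent extension.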

  17. The Multiple Correspondence Analysis Method and Brain Functional Connectivity: Its Application to the Study of the Non-linear Relationships of Motor Cortex and Basal Ganglia.

    PubMed

    Rodriguez-Sabate, Clara; Morales, Ingrid; Sanchez, Alberto; Rodriguez, Manuel

    2017-01-01

    The complexity of basal ganglia (BG) interactions is often condensed into simple models, mainly based on animal data, that present the BG in closed-loop cortico-subcortical circuits of excitatory/inhibitory pathways which analyze the incoming cortical data and return the processed information to the cortex. This study was aimed at identifying functional relationships in the BG motor-loop of 24 healthy subjects who provided written, informed consent and whose BOLD activity was recorded by MRI methods. The analysis of the functional interaction between these centers by correlation techniques and multiple linear regression showed non-linear relationships which cannot be suitably addressed with these methods. The multiple correspondence analysis (MCA), an unsupervised multivariable procedure which can identify non-linear interactions, was used to study the functional connectivity of the BG when subjects were at rest. Linear methods showed the different functional interactions expected according to current BG models. MCA showed additional functional interactions which were not evident when using linear methods. Seven functional configurations of the BG were identified with MCA: two involving the primary motor and somatosensory cortex, one involving the deepest BG (external and internal globus pallidus, subthalamic nucleus and substantia nigra), one with the input-output BG centers (putamen and motor thalamus), two linking the input-output centers with other BG (external pallidum and subthalamic nucleus), and one linking the external pallidum and the substantia nigra. The results provide evidence that the non-linear MCA and linear methods are complementary and are best used in conjunction to more fully understand the nature of the functional connectivity of brain centers.
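    One common formulation of MCA is correspondence analysis applied to an indicator (one-hot) matrix. A minimal numpy sketch of that formulation on hypothetical data — the toy categories stand in for binarized activity states and are not the study's variables:

    ```python
    import numpy as np

    # Toy indicator matrix: 6 observations x one-hot codes of two
    # categorical variables (hypothetical "high/low" activity states).
    Z = np.array([[1, 0, 1, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 1, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)

    # Correspondence analysis of the indicator matrix (one formulation of MCA)
    P = Z / Z.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)

    # Column (category) coordinates on the first principal axis
    coords = Vt[0] * sv[0] / np.sqrt(c)
    print(np.round(coords, 2))
    ```

    Categories that co-occur land close together on the principal axes, which is how MCA exposes associations that a linear correlation on the raw signals would miss.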

  18. Development of a Multiple Linear Regression Model to Forecast Facility Electrical Consumption at an Air Force Base.

    DTIC Science & Technology

    1981-09-01

    corresponds to the same square footage that consumed the electrical energy. 3. The basic assumptions of multiple linear regression, as enumerated in...7. Data related to the sample of bases is assumed to be representative of bases in the population. Limitations Basic limitations on this research were... Ratemaking --Overview. Rand Report R-5894, Santa Monica CA, May 1977. Chatterjee, Samprit, and Bertram Price. Regression Analysis by Example. New York: John

  19. Impact of Learning Styles on Air Force Technical Training: Multiple and Linear Imagery in the Presentation of a Comparative Visual Location Task to Visual and Haptic Subjects. Interim Report for Period January 1977-January 1978.

    ERIC Educational Resources Information Center

    Ausburn, Floyd B.

    A U.S. Air Force study was designed to develop instruction based on the supplantation theory, in which tasks are performed (supplanted) for individuals who are unable to perform them due to their cognitive style. The study examined the effects of linear and multiple imagery in presenting a task requiring visual comparison and location to…

  20. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga

    2006-08-01

    A quantitative structure-activity relationship was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.
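    The cross-validated statistics quoted (R²(CV), S(PRESS)) come from leave-one-out validation of a least-squares model. A generic numpy sketch on synthetic data (not the HEPT dataset, and without the ES-SWR descriptor-selection step), using the hat-matrix shortcut so no refitting is needed:

    ```python
    import numpy as np

    def loo_press(X, y):
        """Leave-one-out PRESS and cross-validated R^2 for ordinary least squares.

        Uses the hat-matrix identity: the LOO residual is e_i / (1 - h_ii),
        so the model never has to be refitted n times.
        """
        Xd = np.column_stack([np.ones(len(y)), X])      # add intercept
        H = Xd @ np.linalg.pinv(Xd)                     # hat matrix (n x n)
        e = y - H @ y                                   # ordinary residuals
        e_loo = e / (1 - np.diag(H))                    # LOO residuals
        press = float(e_loo @ e_loo)
        r2_cv = 1 - press / float(((y - y.mean()) ** 2).sum())
        return press, r2_cv

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3))                        # 3 synthetic descriptors
    y = X @ np.array([1.5, -0.7, 0.2]) + rng.normal(scale=0.3, size=40)
    press, r2_cv = loo_press(X, y)
    print(round(r2_cv, 2))
    ```

    PRESS-based statistics penalize overfitting in a way the plain training R² does not, which is why QSAR papers report them alongside the fit.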

  1. Assignment of the Stereochemistry and Anomeric Configuration of Sugars within Oligosaccharides Via Overlapping Disaccharide Ladders Using MSn

    NASA Astrophysics Data System (ADS)

    Konda, Chiharu; Londry, Frank A.; Bendiak, Brad; Xia, Yu

    2014-08-01

    A systematic approach is described that can pinpoint the stereo-structures (sugar identity, anomeric configuration, and location) of individual sugar units within linear oligosaccharides. Using a highly modified mass spectrometer, dissociation of linear oligosaccharides in the gas phase was optimized along multiple-stage tandem dissociation pathways (MSn, n = 4 or 5). The instrument was a hybrid triple quadrupole/linear ion trap mass spectrometer capable of high-efficiency bidirectional ion transfer between quadrupole arrays. Different types of collision-induced dissociation (CID), either on-resonance ion trap or beam-type CID, could be utilized at any given stage of dissociation, enabling either glycosidic bond cleavages or cross-ring cleavages to be maximized when wanted. The approach first involves optimizing the isolation of disaccharide units as an ordered set of overlapping substructures via glycosidic bond cleavages during early stages of MSn, with explicit intent to minimize cross-ring cleavages. Subsequently, cross-ring cleavages were optimized for individual disaccharides to yield key diagnostic product ions (m/z 221). Finally, fingerprint patterns that establish stereochemistry and anomeric configuration were obtained from the diagnostic ions via CID. Model linear oligosaccharides were derivatized at the reducing end, allowing overlapping ladders of disaccharides to be isolated from MSn. High-confidence stereo-structural determination was achieved by matching MSn CID of the diagnostic ions to synthetic standards via a spectral matching algorithm. Using this MSn (n = 4 or 5) approach, the stereo-structures, anomeric configurations, and locations of three individual sugar units within two pentasaccharides were successfully determined.

  2. Analysis and prediction of flow from local source in a river basin using a Neuro-fuzzy modeling tool.

    PubMed

    Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi

    2007-10-01

    Traditionally, the multiple linear regression technique has been one of the most widely used models in simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression will fail to develop an appropriate predictive model. Recently, neuro-fuzzy systems have gained much popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for the purpose of predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, in order to provide the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, a multiple linear regression analysis that was being used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flow, while the low and medium flow magnitudes were estimated closer to the observed data. The comparison of the prediction accuracy of the neuro-fuzzy and linear regression methods indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics. The neuro-fuzzy model was able to improve the root mean square error (RMSE) and mean absolute percentage error (MAPE) values of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling flow dynamics in the study area.
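    The two error metrics used for the comparison are standard; a minimal sketch with hypothetical flow values (not the Citarum data) shows how a lower-error forecast beats a baseline on both:

    ```python
    import numpy as np

    def rmse(obs, pred):
        """Root mean square error."""
        return float(np.sqrt(np.mean((obs - pred) ** 2)))

    def mape(obs, pred):
        """Mean absolute percentage error; observations must be nonzero."""
        return float(np.mean(np.abs((obs - pred) / obs)) * 100)

    # Hypothetical daily flows (m^3/s) and two competing forecasts
    observed    = np.array([120.0, 80.0, 95.0, 150.0, 60.0])
    regression  = np.array([100.0, 90.0, 85.0, 120.0, 70.0])
    neuro_fuzzy = np.array([110.0, 84.0, 91.0, 135.0, 63.0])

    for name, pred in [("regression", regression), ("neuro-fuzzy", neuro_fuzzy)]:
        print(f"{name}: RMSE={rmse(observed, pred):.1f}, MAPE={mape(observed, pred):.1f}%")
    ```

    RMSE weights large misses (the high-flow underestimates) more heavily, while MAPE weights relative error, which is why papers typically report both.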

  3. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.

  4. Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions

    NASA Astrophysics Data System (ADS)

    Huang, Zhi; Fan, Baozheng; Song, Xiaolin

    2018-03-01

    As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.

  5. Multiband selection with linear array detectors

    NASA Technical Reports Server (NTRS)

    Richard, H. L.; Barnes, W. L.

    1985-01-01

    Several techniques that can be used in an earth-imaging system to separate the linear image formed after the collecting optics into the desired spectral band are examined. The advantages and disadvantages of the Multispectral Linear Array (MLA) multiple optics, the MLA adjacent arrays, the imaging spectrometer, and the MLA beam splitter are discussed. The beam-splitter design approach utilizes, in addition to relatively broad spectral region separation, a movable Multiband Selection Device (MSD), placed between the exit ports of the beam splitter and a linear array detector, permitting many bands to be selected. The successful development and test of the MSD is described. The device demonstrated the capacity to provide a wide field of view, visible-to-near IR/short-wave IR and thermal IR capability, and a multiplicity of spectral bands and polarization measuring means, as well as a reasonable size and weight at minimal cost and risk compared to a spectrometer design approach.

  6. A High-Linearity Low-Noise Amplifier with Variable Bandwidth for Neural Recoding Systems

    NASA Astrophysics Data System (ADS)

    Yoshida, Takeshi; Sueishi, Katsuya; Iwata, Atsushi; Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi

    2011-04-01

    This paper describes a low-noise amplifier with multiple adjustable parameters for neural recording applications. An adjustable pseudo-resistor implemented with cascaded metal-oxide-semiconductor field-effect transistors (MOSFETs) is proposed to achieve low signal distortion and a wide variable bandwidth range. The amplifier has been implemented in a 0.18 µm standard complementary metal-oxide-semiconductor (CMOS) process and occupies 0.09 mm² on chip. The amplifier achieved a selectable voltage gain of 28 and 40 dB, variable bandwidth from 0.04 to 2.6 Hz, total harmonic distortion (THD) of 0.2% with 200 mV output swing, input-referred noise of 2.5 µVrms over 0.1-100 Hz, and 18.7 µW power consumption at a supply voltage of 1.8 V.

  7. Quantification of Free Phenytoin by Liquid Chromatography Tandem Mass Spectrometry (LC/MS/MS).

    PubMed

    Peat, Judy; Frazee, Clint; Garg, Uttam

    2016-01-01

    Phenytoin (diphenylhydantoin) is an anticonvulsant drug that has been used for decades for the treatment of many types of seizures. The drug is highly protein bound, and measurement of the free (active) form of the drug is warranted, particularly in patients with conditions that can affect drug protein binding. Here, we describe an LC/MS/MS method for the measurement of free phenytoin. Free drug is separated by ultrafiltration of serum or plasma. The ultrafiltrate is treated with acetonitrile containing the internal standard phenytoin-d10 to precipitate proteins. The mixture is centrifuged, and the supernatant is injected onto the LC/MS/MS and analyzed using multiple reaction monitoring. This method is linear from 0.1 to 4.0 μg/mL and does not demonstrate any significant ion suppression or enhancement.

  8. Conical diffraction as a versatile building block to implement new imaging modalities for superresolution in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Fallet, Clément; Caron, Julien; Oddos, Stephane; Tinevez, Jean-Yves; Moisan, Lionel; Sirat, Gabriel Y.; Braitbart, Philippe O.; Shorte, Spencer L.

    2014-08-01

    We present a new technology for super-resolution fluorescence imaging, based on conical diffraction. Conical diffraction is a linear, singular phenomenon that takes place when a polarized beam is diffracted through a biaxial crystal. The illumination patterns generated by conical diffraction are more compact than the classical Gaussian beam; we use them to generate a super-resolution imaging modality. Conical Diffraction Microscopy (CODIM) resolution enhancement can be achieved with any type of objective, on any kind of sample preparation, and with standard fluorophores. Conical diffraction can be used in multiple ways to create new and disruptive technologies for super-resolution microscopy. This paper focuses on the first one that has been implemented and gives a glimpse of what the future of microscopy using conical diffraction could be.

  9. MRM assay for quantitation of complement components in human blood plasma - a feasibility study on multiple sclerosis.

    PubMed

    Rezeli, Melinda; Végvári, Akos; Ottervald, Jan; Olsson, Tomas; Laurell, Thomas; Marko-Varga, György

    2011-12-10

    As a proof-of-principle study, a multiple reaction monitoring (MRM) assay was developed for quantitation of proteotypic peptides, representing seven plasma proteins associated with inflammation (complement components and C-reactive protein). The assay development and the sample analysis were performed on a linear ion trap mass spectrometer. We were able to quantify 5 of the 7 target proteins in depleted plasma digests with reasonable reproducibility (RSD ≤ 25%) over a linear range spanning 2 orders of magnitude. The assay panel was utilized for the analysis of a small multiple sclerosis sample cohort with 10 diseased and 8 control patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. A case of unilateral, systematized linear hair follicle nevi associated with epidermal nevus-like lesions.

    PubMed

    Ikeda, Shigaku; Kawada, Juri; Yaguchi, Hitoshi; Ogawa, Hideoki

    2003-01-01

    Multiple hair follicle nevi are an extremely rare condition. In 1998, a case of unilateral multiple hair follicle nevi, ipsilateral alopecia and ipsilateral leptomeningeal angiomatosis of the brain was first reported from Japan. Very recently, a hair follicle nevus in a distribution following Blaschko's lines has also been reported. In this paper, we describe a congenital case of unilateral, systematized linear hair follicle nevi associated with congenital, ipsilateral, multiple plaque lesions resembling epidermal nevi but lacking leptomeningeal angiomatosis of the brain. These cases suggest the possibility of a novel neurocutaneous syndrome. Additional cases should be sought in order to determine whether this condition is pathophysiologically distinct. Copyright 2003 S. Karger AG, Basel

  11. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC64 VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of the linear systems, we introduced a control method to eliminate the calculation of already-converged vectors.
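    The idea of operating on all right-hand-side vectors at once can be sketched with a simple Jacobi iteration (a stand-in solver, not necessarily the one benchmarked in the paper): grouping the vectors into the columns of a matrix turns each sweep into a matrix-matrix product, which reuses the common coefficient matrix far better than solving the systems one by one.

    ```python
    import numpy as np

    def jacobi_multi(A, B, n_iter=100):
        """Jacobi iteration solving A X = B for several right-hand sides at once.

        Each sweep updates all columns of X with one matrix-matrix product.
        Converges for strictly diagonally dominant A.
        """
        D = np.diag(A)
        R = A - np.diag(D)
        X = np.zeros_like(B)
        for _ in range(n_iter):
            X = (B - R @ X) / D[:, None]   # one update for all columns at once
        return X

    # Diagonally dominant test matrix; 3 right-hand sides sharing the same A
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
    B = np.arange(9.0).reshape(3, 3)
    X = jacobi_multi(A, B)
    print(np.round(A @ X - B, 6))
    ```

    A production version would add the convergence control the abstract mentions: monitor each column's residual and drop already-converged columns from the product.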

  12. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 2002). (vii) As an alternative to the above standards, heavy-duty vehicles may conform to the... standards shall utilize multiplicative factors from the California vehicle type (i.e. LEV II, ULEV II... standards shall utilize the Tier 2 Bin 4 emission standards and the CARB ULEV II multiplicative factors to...

  13. Daily sodium and potassium excretion can be estimated by scheduled spot urine collections.

    PubMed

    Doenyas-Barak, Keren; Beberashvili, Ilia; Bar-Chaim, Adina; Averbukh, Zhan; Vogel, Ofir; Efrati, Shai

    2015-01-01

    The evaluation of sodium and potassium intake is part of the optimal management of hypertension, metabolic syndrome, renal stones, and other conditions. To date, no convenient method for its evaluation exists, as the gold standard method of 24-hour urine collection is cumbersome and often incorrectly performed, and methods that use spot or shorter collections are not accurate enough to replace the gold standard. The aim of this study was to evaluate the correlation and agreement between a new method that uses multiple-scheduled spot urine collection and the gold standard method of 24-hour urine collection. The urine sodium or potassium to creatinine ratios were determined for four scheduled spot urine samples. The mean ratios of the four spot samples and the ratios of each of the single spot samples were corrected for estimated creatinine excretion and compared to the gold standard. A significant linear correlation was demonstrated between the 24-hour urinary solute excretions and estimated excretion evaluated by any of the scheduled spot urine samples. The correlation of the mean of the four spots was better than for any of the single spots. Bland-Altman plots showed that the differences between these measurements were within the limits of agreement. Four scheduled spot urine samples can be used as a convenient method for estimation of 24-hour sodium or potassium excretion. © 2015 S. Karger AG, Basel.
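    The estimation step described above reduces to simple arithmetic: average the solute-to-creatinine ratios of the scheduled spots, then scale by an estimate of daily creatinine excretion. The numbers below are hypothetical, and the weight-based formula for estimating daily creatinine is outside this sketch:

    ```python
    import numpy as np

    # Hypothetical sodium/creatinine ratios (mmol/mmol) from four scheduled
    # spot samples, and an estimated daily creatinine excretion.
    spot_na_cr = np.array([11.2, 13.5, 12.1, 14.0])
    est_creatinine_24h = 12.0     # mmol/day, e.g. from a weight-based formula

    # Mean ratio of the four spots, scaled by estimated creatinine excretion
    est_na_24h = spot_na_cr.mean() * est_creatinine_24h
    print(round(est_na_24h, 1))   # estimated mmol Na+ per 24 h
    ```

    Averaging four scheduled spots damps the diurnal variation of a single spot, which is why the study found the four-spot mean correlated better with the 24-hour collection than any single sample.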

  14. Analysis of amino acids by HPLC/electrospray negative ion tandem mass spectrometry using 9-fluorenylmethoxycarbonyl chloride (Fmoc-Cl) derivatization.

    PubMed

    Ziegler, Jörg; Abel, Steffen

    2014-12-01

    A new method for the determination of amino acids is presented. It combines established methods for the derivatization of primary and secondary amino groups with 9-fluorenylmethoxycarbonyl chloride (Fmoc-Cl) with subsequent amino acid specific detection of the derivatives by LC-ESI-MS/MS using multiple reaction monitoring (MRM). The derivatization proceeds within 5 min, and the resulting amino acid derivatives can be rapidly purified from matrix by solid-phase extraction (SPE) on HR-X resin and separated by reversed-phase HPLC. The Fmoc derivatives yield several amino acid specific fragment ions, which opened the possibility of selecting amino acid specific MRM transitions. The method was applied to all 20 proteinogenic amino acids, and quantification was performed using L-norvaline as the standard. A limit of detection as low as 1 fmol/µl with a linear range of up to 125 pmol/µl could be obtained. Intraday and interday precisions were below 10% relative standard deviation for most of the amino acids. Quantification using L-norvaline as the internal standard gave very similar results compared to quantification using deuterated amino acids as internal standards. Using this protocol, it was possible to record the amino acid profile of a single root from Arabidopsis thaliana seedlings and to compare it with the amino acid profiles of 20 dissected root meristems (200 μm).

  15. Neural network and multiple linear regression to predict school children dimensions for ergonomic school furniture design.

    PubMed

    Agha, Salah R; Alnahhal, Mohammed J

    2012-11-01

    The current study investigates the possibility of obtaining the anthropometric dimensions, critical to school furniture design, without measuring all of them. The study first selects some anthropometric dimensions that are easy to measure. Two methods are then used to check if these easy-to-measure dimensions can predict the dimensions critical to the furniture design. These methods are multiple linear regression and neural networks. Each dimension that is deemed necessary to ergonomically design school furniture is expressed as a function of some other measured anthropometric dimensions. Results show that out of the five dimensions needed for chair design, four can be related to other dimensions that can be measured while children are standing. Therefore, the method suggested here would definitely save time and effort and avoid the difficulty of dealing with students while measuring these dimensions. In general, it was found that neural networks perform better than multiple linear regression in the current study. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. Fixed Point Problems for Linear Transformations on Pythagorean Triples

    ERIC Educational Resources Information Center

    Zhan, M.-Q.; Tong, J.-C.; Braza, P.

    2006-01-01

    In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z][superscript T] with y being even) into a standard Pythagorean triple, which have [3 4 5][superscript T] as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…

  17. On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution

    ERIC Educational Resources Information Center

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-01-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…

  18. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  19. A survey of the state of the art and focused research in range systems, task 2

    NASA Technical Reports Server (NTRS)

    Yao, K.

    1986-01-01

    Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Such optimized subsystems require full-precision multiplications in the TDL. In order to reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions, which are equivalent to multiplications by powers of two. Since the obvious operation of rounding the infinite-precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered. Detailed explanations of the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions are given. A specific demonstration of this methodology in the design of a linear data equalizer, and its implementation in assembly language on an 8080 microprocessor with a 12 bit A/D converter, are reported. This simple microprocessor implementation with optimized TDL coefficients achieves a system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is duly applicable to many other microprocessor-controlled information processing systems.
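    The quantization step described above can be sketched as follows. The tap values are hypothetical, and this shows only the naive nearest-power-of-two rounding that the branch-and-bound search improves upon, not the optimum search itself:

    ```python
    import numpy as np

    def round_to_power_of_two(c):
        """Round a coefficient to the nearest signed power of two (or zero).

        A power-of-two coefficient turns each TDL multiply into a single
        shift instruction on the target microprocessor.
        """
        if c == 0:
            return 0.0
        e = np.round(np.log2(abs(c)))        # nearest exponent
        return float(np.sign(c) * 2.0 ** e)

    # Hypothetical equalizer tap weights and their shift-only approximations
    taps = np.array([0.37, -0.12, 1.6, 0.05])
    quantized = np.array([round_to_power_of_two(c) for c in taps])
    print(quantized)   # each tap now costs one shift instead of a multiply
    ```

    The per-tap rounding error this introduces is what degrades equalizer performance, and it is exactly the error the branch-and-bound search over exponent combinations minimizes jointly across all taps.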

  20. Factors affecting quality of life in patients on haemodialysis: a cross-sectional study from Palestine.

    PubMed

    Zyoud, Sa'ed H; Daraghmeh, Dala N; Mezyed, Diana O; Khdeir, Razan L; Sawafta, Mayas N; Ayaseh, Nora A; Tabeeb, Ghada H; Sweileh, Waleed M; Awang, Rahmat; Al-Jabi, Samah W

    2016-04-27

    Haemodialysis (HD) is a life-sustaining treatment for patients with end-stage renal disease (ESRD). HD can bring about significant impairment in health-related quality of life (HRQOL) and outcomes. Therefore, we sought to describe the patterns of HRQOL and determine the independent factors associated with poor HRQOL in Palestinian patients on HD. A multicenter cross-sectional study was performed from June 2014 to January 2015 using the EuroQOL-5 Dimensions instrument (EQ-5D-5L) for the assessment of HRQOL. ESRD patients undergoing HD in all dialysis centres in the West Bank of Palestine were approached and recruited for this study. Multiple linear regression was carried out to identify factors that were significantly associated with HRQOL. Two hundred and sixty-seven patients participated in the current study, giving a response rate of 96 %. Overall, 139 (52.1 %) were male, and the mean ± standard deviation age was 53.3 ± 16.2 years. The reported HRQOL as measured by mean EQ-5D-5L index value and Euro QOL visual analogue scale (EQ-VAS) score was 0.37 ± 0.44 and 59.38 ± 45.39, respectively. There was a moderate positive correlation between the EQ-VAS and the EQ-5D-5L index value (r = 0.42, p < 0.001). The results of multiple linear regression showed a significant negative association between HRQOL and age, the total number of chronic co-morbid diseases, and the total number of chronic medications. However, a significant positive association was found between HRQOL and male gender, university education, and village residence. Our results provided insight into a number of associations between patient variables and their HRQOL. Healthcare providers should be aware of low HRQOL among patients who are elderly or female, have no formal education, reside in refugee camps, or take multiple chronic medications for multiple co-morbid diseases, so as to improve their quality of life.

  1. Longitudinal change in physical activity and its correlates in relapsing-remitting multiple sclerosis.

    PubMed

    Motl, Robert W; McAuley, Edward; Sandroff, Brian M

    2013-08-01

    Physical activity is beneficial for people with multiple sclerosis (MS), but this population is largely inactive. There is minimal information on change in physical activity and its correlates for informing the development of behavioral interventions. This study examined change in physical activity and its symptomatic, social-cognitive, and ambulatory or disability correlates over a 2.5-year period of time in people with relapsing-remitting multiple sclerosis. On 6 occasions, each separated by 6 months, people (N=269) with relapsing-remitting multiple sclerosis completed assessments of symptoms, self-efficacy, walking impairment, disability, and physical activity. The participants wore an accelerometer for 7 days. The change in study variables over 6 time points was examined with unconditional latent growth curve modeling. The association among changes in study variables over time was examined using conditional latent growth curve modeling, and the associations were expressed as standardized path coefficients (β). There were significant linear changes in self-reported and objectively measured physical activity, self-efficacy, walking impairment, and disability over the 2.5-year period; there were no changes in fatigue, depression, and pain. The changes in self-reported and objective physical activity were associated with change in self-efficacy (β=.49 and β=.61, respectively), after controlling for other variables and confounders. The primary limitations of the study were the generalizability of results among those with progressive multiple sclerosis and inclusion of a single variable from social-cognitive theory. Researchers should consider designing interventions that target self-efficacy for the promotion and maintenance of physical activity in this population.

  2. Quantifying relative importance: Computing standardized effects in models with binary outcomes

    USGS Publications Warehouse

    Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.

    2018-01-01

    Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
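    As a concrete sketch of the LT (latent-theoretical) standardization discussed above: a logistic coefficient is scaled by the predictor's SD divided by the latent-response SD, where the latent variance is the variance of the linear predictor plus the logistic error variance π²/3. All coefficients and data below are illustrative, not from the study:

```python
import math
import statistics

# Hypothetical logistic-regression coefficients on two predictors
b = {"x1": 1.2, "x2": -0.4}
x1 = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
x2 = [10, 12, 9, 15, 11, 14]

# Linear predictor per observation (intercept omitted; it does not affect variances)
eta = [b["x1"] * a + b["x2"] * c for a, c in zip(x1, x2)]

# LT standardization: latent variance = var(eta) + logistic error variance pi^2/3
var_latent = statistics.pvariance(eta) + math.pi ** 2 / 3
sd_latent = math.sqrt(var_latent)

# Standardized effect = b * sd(x) / sd(latent response)
std_effects = {k: b[k] * statistics.pstdev(x) / sd_latent
               for k, x in (("x1", x1), ("x2", x2))}
print(std_effects)
```

    Standardizing both predictors onto the latent scale makes their effects directly comparable, which is the kind of coefficient comparison the abstract motivates.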

  3. Linear summation of outputs in a balanced network model of motor cortex

    PubMed Central

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis. PMID:26097452

  4. Normal reference values for bladder wall thickness on CT in a healthy population.

    PubMed

    Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T

    2018-02-01

    To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, and a parsimonious model was created and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviations above expected provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R² of 0.52. Age and gender were negligible in contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R² = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.
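    The study's upper-limit-of-normal construction (expected value plus 1.96 SD on the log scale) can be sketched as follows; the volume/thickness pairs are invented for illustration, not the study's data:

```python
import math
import statistics

# Illustrative data: bladder volume (mL) and wall thickness (mm)
volumes = [50, 100, 150, 250, 400, 600]
thickness = [4.8, 3.9, 3.4, 2.8, 2.3, 2.0]

# Work on the log scale, as in the study
x = [math.log(v) for v in volumes]
y = [math.log(t) for t in thickness]

# Ordinary least squares for log(thickness) ~ log(volume)
mx, my = statistics.fmean(x), statistics.fmean(y)
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

# Residual SD on the log scale
resid = [b - (intercept + slope * a) for a, b in zip(x, y)]
sd = statistics.pstdev(resid)

def upper_limit(volume_ml):
    """Expected thickness times exp(1.96 SD): the upper limit of normal."""
    pred = intercept + slope * math.log(volume_ml)
    return math.exp(pred + 1.96 * sd)

print(round(upper_limit(200), 2))
```

    Exponentiating the log-scale limit returns the threshold to the original millimetre scale, so a measured wall thicker than `upper_limit(volume)` would fall outside the reference range under these toy numbers.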

  5. Relation of dietary and lifestyle traits to difference in serum leptin of Japanese in Japan and Hawaii: the INTERLIPID study.

    PubMed

    Nakamura, Y; Ueshima, H; Okuda, N; Miura, K; Kita, Y; Okamura, T; Turin, T C; Okayama, A; Rodriguez, B; Curb, J D; Stamler, J

    2012-01-01

    Previously, we found significantly higher serum leptin in Japanese-Americans in Hawaii than Japanese in Japan. We investigated whether differences in dietary and other lifestyle factors explain higher serum leptin concentrations in Japanese living a Western lifestyle in Hawaii compared with Japanese in Japan. Serum leptin and nutrient intakes were examined by standardized methods in men and women ages 40-59 years from two population samples, one Japanese-American in Hawaii (88 men, 94 women), the other Japanese in central Japan (123 men, 111 women). Multiple linear regression models were used to assess the role of dietary and other lifestyle traits in accounting for the serum leptin difference between Hawaii and Japan. Mean leptin was significantly higher in Hawaii than Japan (7.2 ± 6.8 vs 3.7 ± 2.3 ng/ml in men, P < 0.0001; 12.8 ± 6.6 vs 8.5 ± 5.0 ng/ml in women, P < 0.0001). In men, higher BMI in Hawaii explained over 90% of the difference in serum leptin; in women, only 47%. In multiple linear regression analyses in women, further adjustment for physical activity and dietary factors--alcohol, dietary fiber, iron--produced a further reduction in the coefficient for the difference, total reduction 70.7%; the P-value for the Hawaii-Japan difference became 0.126. The significantly higher mean leptin concentration in Hawaii than Japan may be attributable largely to differences in BMI. Differences in nutrient intake in the two samples showed only a modest relationship to the leptin difference. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Genomic Selection in Multi-environment Crop Trials.

    PubMed

    Oakey, Helena; Cullis, Brian; Thompson, Robin; Comadran, Jordi; Halpin, Claire; Waugh, Robbie

    2016-05-03

    Genomic selection in crop breeding introduces modeling challenges not found in animal studies. These include the need to accommodate replicate plants for each line, consider spatial variation in field trials, address line by environment interactions, and capture nonadditive effects. Here, we propose a flexible single-stage genomic selection approach that resolves these issues. Our linear mixed model incorporates spatial variation through environment-specific terms, and also randomization-based design terms. It considers marker and marker-by-environment interactions using ridge regression best linear unbiased prediction to extend genomic selection to multiple environments. Since the approach uses the raw data from line replicates, the line genetic variation is partitioned into marker and nonmarker residual genetic variation (i.e., additive and nonadditive effects). This results in a more precise estimate of marker genetic effects. Using barley height data from trials, in 2 different years, of up to 477 cultivars, we demonstrate that our new genomic selection model improves predictions compared to current models. Analyzing single trials revealed improvements in predictive ability of up to 5.7%. For the multiple environment trial (MET) model, combining both year trials improved predictive ability by up to 11.4% compared to a single environment analysis. Benefits were significant even when fewer markers were used. Compared to a single-year standard model run with 3490 markers, our partitioned MET model achieved the same predictive ability using between 500 and 1000 markers depending on the trial. Our approach can be used to increase accuracy and confidence in the selection of the best lines for breeding and/or, to reduce costs by using fewer markers. Copyright © 2016 Oakey et al.
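    A toy sketch of the ridge-regression (rr-BLUP-style) marker-effect estimation underlying this kind of approach, with two hypothetical markers and four lines; a real genomic selection analysis would use thousands of markers and a mixed-model solver rather than a hand-built 2x2 system:

```python
import statistics

# Hypothetical marker scores (rows = lines, columns = markers) and phenotypes
X = [[1, 0], [0, 1], [1, 1], [0, 0]]
y = [10.2, 9.1, 11.0, 8.5]   # e.g., plant height
lam = 1.0                    # ridge shrinkage parameter

# Center phenotypes so no intercept is needed
yc = [v - statistics.fmean(y) for v in y]

# 2x2 normal equations with the ridge penalty on the diagonal: X'X + lam*I
xtx = [[sum(r[i] * r[j] for r in X) + (lam if i == j else 0.0)
        for j in range(2)] for i in range(2)]
xty = [sum(r[i] * v for r, v in zip(X, yc)) for i in range(2)]

# Solve the 2x2 system (X'X + lam*I) beta = X'y by Cramer's rule
det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
beta = [(xty[0] * xtx[1][1] - xtx[0][1] * xty[1]) / det,
        (xtx[0][0] * xty[1] - xty[0] * xtx[1][0]) / det]

# Genomic prediction for a new line carrying marker 1 only
pred = statistics.fmean(y) + beta[0] * 1 + beta[1] * 0
print([round(b, 4) for b in beta], round(pred, 3))
```

    The ridge penalty shrinks all marker effects toward zero, which is what lets many correlated markers be fitted simultaneously without overfitting.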

  7. Evolution of Quantitative Measures in NMR: Quantum Mechanical qHNMR Advances Chemical Standardization of a Red Clover (Trifolium pratense) Extract

    PubMed Central

    2017-01-01

    Chemical standardization, along with morphological and DNA analysis ensures the authenticity and advances the integrity evaluation of botanical preparations. Achievement of a more comprehensive, metabolomic standardization requires simultaneous quantitation of multiple marker compounds. Employing quantitative 1H NMR (qHNMR), this study determined the total isoflavone content (TIfCo; 34.5–36.5% w/w) via multimarker standardization and assessed the stability of a 10-year-old isoflavone-enriched red clover extract (RCE). Eleven markers (nine isoflavones, two flavonols) were targeted simultaneously, and outcomes were compared with LC-based standardization. Two advanced quantitative measures in qHNMR were applied to derive quantities from complex and/or overlapping resonances: a quantum mechanical (QM) method (QM-qHNMR) that employs 1H iterative full spin analysis, and a non-QM method that uses linear peak fitting algorithms (PF-qHNMR). A 10 min UHPLC-UV method provided auxiliary orthogonal quantitation. This is the first systematic evaluation of QM and non-QM deconvolution as qHNMR quantitation measures. It demonstrates that QM-qHNMR can account successfully for the complexity of 1H NMR spectra of individual analytes and how QM-qHNMR can be built for mixtures such as botanical extracts. The contents of the main bioactive markers were in good agreement with earlier HPLC-UV results, demonstrating the chemical stability of the RCE. QM-qHNMR advances chemical standardization by its inherent QM accuracy and the use of universal calibrants, avoiding the impractical need for identical reference materials. PMID:28067513

  8. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with an MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and an MSE of 0.32 log units.
The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
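    Because the adopted regional skew is zero, the LP3 distribution of the log-flows reduces to a normal distribution, so a quantile sketch needs only the standard normal frequency factor. The site mean below is hypothetical; the 0.91 log-space standard deviation is the regional value quoted in the abstract:

```python
from statistics import NormalDist

# Regional LP3 parameters: skew = 0 and standard deviation 0.91 (log10 units)
# per the abstract; the site mean would come from the drainage-area regression
# (the value here is purely illustrative).
regional_sd = 0.91
site_mean_log10 = 2.0

def peak_discharge(aep):
    """Flow with annual exceedance probability `aep`. With zero skew, the
    LP3 frequency factor is simply the standard normal quantile."""
    k = NormalDist().inv_cdf(1 - aep)
    return 10 ** (site_mean_log10 + k * regional_sd)

# 1%-AEP ("100-year") flood estimate, in the same units as the annual peaks
print(round(peak_discharge(0.01)))
```

    For nonzero skew the frequency factor would instead come from the Pearson Type 3 distribution, which is why the zero regional skew simplifies the calculation so much.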

  9. Precision, accuracy and linearity of radiometer EML 105 whole blood metabolite biosensors.

    PubMed

    Cobbaert, C; Morales, C; van Fessem, M; Kemperman, H

    1999-11-01

    The analytical performance of a new, whole blood glucose and lactate electrode system (EML 105 analyser, Radiometer Medical A/S, Copenhagen, Denmark) was evaluated. Between-day coefficients of variation were < or = 1.9% and < or = 3.1% for glucose and lactate, respectively. Recoveries of glucose were 100 +/- 10% using either aqueous or protein-based standards. Recoveries of lactate depended on the matrix, being underestimated in aqueous standards (approximately -10%) and 95-100% in standards containing 40 g/L albumin at lactate concentrations of 15 and 30 mmol/L. However, recoveries were high (up to 180%) at low lactate concentrations in protein-based standards. Carry-over, investigated according to National Committee for Clinical Laboratory Standards (NCCLS) guideline EP10-T2, was negligible (alpha = 0.01). Glucose and lactate biosensors equipped with new membranes were linear up to 60 and 30 mmol/L, respectively. However, linearity declined with daily use as membrane lifetime increased. We conclude that the Radiometer metabolite biosensor results are reproducible and do not suffer from specimen-related carry-over. However, lactate recovery depends on the protein content and the lactate concentration.

  10. Rigorous quantitative elemental microanalysis by scanning electron microscopy/energy dispersive x-ray spectrometry (SEM/EDS) with spectrum processing by NIST DTSA-II

    NASA Astrophysics Data System (ADS)

    Newbury, Dale E.; Ritchie, Nicholas W. M.

    2014-09-01

    Quantitative electron-excited x-ray microanalysis by scanning electron microscopy/silicon drift detector energy dispersive x-ray spectrometry (SEM/SDD-EDS) is capable of achieving high accuracy and high precision equivalent to that of the high spectral resolution wavelength dispersive x-ray spectrometer even when severe peak interference occurs. The throughput of the SDD-EDS enables high count spectra to be measured that are stable in calibration and resolution (peak shape) across the full deadtime range. With this high spectral stability, multiple linear least squares peak fitting is successful for separating overlapping peaks and spectral background. Careful specimen preparation is necessary to remove topography on unknowns and standards. The standards-based matrix correction procedure embedded in the NIST DTSA-II software engine returns quantitative results supported by a complete error budget, including estimates of the uncertainties from measurement statistics and from the physical basis of the matrix corrections. NIST DTSA-II is available free for Java platforms at: http://www.cstl.nist.gov/div837/837.02/epq/dtsa2/index.html.

  11. Behavioral modeling and digital compensation of nonlinearity in DFB lasers for multi-band directly modulated radio-over-fiber systems

    NASA Astrophysics Data System (ADS)

    Li, Jianqiang; Yin, Chunjing; Chen, Hao; Yin, Feifei; Dai, Yitang; Xu, Kun

    2014-11-01

    The envisioned C-RAN concept in the wireless communication sector relies on distributed antenna systems (DAS), which consist of a central unit (CU), multiple remote antenna units (RAUs) and the fronthaul links between them. As the legacy and emerging wireless communication standards will coexist for a long time, the fronthaul links are preferred to carry multi-band multi-standard wireless signals. Directly-modulated radio-over-fiber (ROF) links can serve as a low-cost option to make fronthaul connections conveying multi-band wireless signals. However, directly-modulated ROF systems often suffer from inherent nonlinearities of directly-modulated lasers. Unlike ROF systems working in single-band mode, the modulation nonlinearities in multi-band ROF systems can result in both in-band and cross-band nonlinear distortions. In order to address this issue, we have recently investigated the multi-band nonlinear behavior of directly-modulated DFB lasers based on a multi-dimensional memory polynomial model. Based on this model, an efficient multi-dimensional baseband digital predistortion technique was developed and experimentally demonstrated for linearization of multi-band directly-modulated ROF systems.

  12. Determination of cocaine and benzoylecgonine in guinea pig's hair after a single dose administration by LC-MS/MS.

    PubMed

    Sun, Qi-ran; Xiang, Ping; Yan, Hui; Shen, Min

    2008-12-01

    A sensitive LC-MS/MS method to determine cocaine and its major metabolite benzoylecgonine in guinea pig's hair has been established. About 20 mg of decontaminated hair sample was hydrolyzed with 0.1 mol x L(-1) HCl at 50 degrees C overnight, in the presence of cocaine-d3 and benzoylecgonine-d8 used as internal standards, and then extracted with dichloromethane. The analysis was performed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Positive electrospray ionization (ESI+) and multiple reaction monitoring (MRM) mode were used. The limit of detection (LOD) for cocaine and benzoylecgonine was 1 pg x mg(-1). The calibration curves of extracted standards were linear over the range from 5 pg x mg(-1) to 250 pg x mg(-1) (r2 > or = 0.9997). The method was validated and applied to the analysis of guinea pig's hair after a single dose administration of cocaine hydrochloride. Cocaine and benzoylecgonine were not only detected, but also quantified in guinea pig's hair.

  13. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the urgent topic of ensuring technical product quality when tolerancing component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the linear coordinating sizes that determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates form the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select from the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the maximum deviation corresponding to the limit of the element material, EI being the lower deviation for the sizes of internal elements (holes) and es the upper deviation for the sizes of external elements (shafts). It is the maximum-material sizes that participate in the mating of shafts and holes and determine the type of fit.

  14. Quantification of Inflammasome Adaptor Protein ASC in Biological Samples by Multiple-Reaction Monitoring Mass Spectrometry.

    PubMed

    Ulke-Lemée, Annegret; Lau, Arthur; Nelson, Michelle C; James, Matthew T; Muruve, Daniel A; MacDonald, Justin A

    2018-06-09

    Inflammation is an integral component of many diseases, including chronic kidney disease (CKD). ASC (apoptosis-associated speck-like protein containing CARD, also PYCARD) is the key inflammasome adaptor protein in the innate immune response. Since ASC specks, a macromolecular condensate of ASC protein, can be released by inflammasome-activated cells into the extracellular space to amplify inflammatory responses, the ASC protein could be an important biomarker in diagnostic applications. Herein, we describe the development and validation of a multiple reaction monitoring mass spectrometry (MRM-MS) assay for the accurate quantification of ASC in human biospecimens. Limits of detection and quantification for the signature DLLLQALR peptide (used as surrogate for the target ASC protein) were determined by the method of standard addition using synthetic isotope-labeled internal standard (SIS) peptide and urine matrix from a healthy donor (LOQ was 8.25 pM, with a ~ 1000-fold linear range). We further quantified ASC in the urine of CKD patients (8.4 ± 1.3 ng ASC/ml urine, n = 13). ASC was positively correlated with proteinuria and urinary IL-18 in CKD samples but not with urinary creatinine. Unfortunately, the ASC protein is susceptible to degradation, and patient urine that was thawed and refrozen lost 85% of the ASC signal. In summary, the MRM-MS assay provides a robust means to quantify ASC in biological samples, including clinical biospecimens; however, sample collection and storage conditions will have a critical impact on assay reliability.
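    The method of standard addition named in the abstract can be sketched generically: equal aliquots of a sample are spiked with increasing known amounts of the standard, a line is fitted to the responses, and the endogenous level is read from the x-intercept. All numbers below are invented for illustration:

```python
import statistics

# Illustrative standard-addition series: spiked concentration (pM) and
# hypothetical instrument signal for each aliquot
added_pM = [0, 25, 50, 100]
response = [210, 460, 710, 1210]

# Least-squares line: response = m * added + b
mx = statistics.fmean(added_pM)
my = statistics.fmean(response)
m = (sum((a - mx) * (r - my) for a, r in zip(added_pM, response))
     / sum((a - mx) ** 2 for a in added_pM))
b = my - m * mx

# The fitted line crosses zero response at added = -b/m,
# so the endogenous concentration is b/m
endogenous_pM = b / m
print(round(endogenous_pM, 1))  # → 21.0
```

    Running the calibration in the sample's own matrix is what lets standard addition compensate for matrix effects such as the urine-matrix suppression that plagues direct calibration.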

  15. Multiple Reaction Monitoring for Direct Quantitation of Intact Proteins Using a Triple Quadrupole Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Wang, Evelyn H.; Combe, Peter C.; Schug, Kevin A.

    2016-05-01

    Methods that can efficiently and effectively quantify proteins are needed to support increasing demand in many bioanalytical fields. Triple quadrupole mass spectrometry (QQQ-MS) is sensitive and specific, and it is routinely used to quantify small molecules. However, low resolution fragmentation-dependent MS detection can pose inherent difficulties for intact proteins. In this research, we investigated variables that affect protein and fragment ion signals to enable protein quantitation using QQQ-MS. Collision induced dissociation gas pressure and collision energy were found to be the most crucial variables for optimization. Multiple reaction monitoring (MRM) transitions for seven standard proteins, including lysozyme, ubiquitin, cytochrome c from both equine and bovine, lactalbumin, myoglobin, and prostate-specific antigen (PSA) were determined. Assuming the eventual goal of applying such methodology is to analyze protein in biological fluids, a liquid chromatography method was developed. Calibration curves of six standard proteins (excluding PSA) were obtained to show the feasibility of intact protein quantification using QQQ-MS. Linearity (2-3 orders), limits of detection (0.5-50 μg/mL), accuracy (<5% error), and precision (1%-12% CV) were determined for each model protein. Sensitivities for different proteins varied considerably. Biological fluids, including human urine, equine plasma, and bovine plasma were used to demonstrate the specificity of the approach. The purpose of this model study was to identify, study, and demonstrate the advantages and challenges for QQQ-MS-based intact protein quantitation, a largely underutilized approach to date.

  16. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.

  17. EpiCollect+: linking smartphones to web applications for complex data collection projects

    PubMed Central

    Aanensen, David M.; Huntley, Derek M.; Menegazzo, Mirko; Powell, Chris I.; Spratt, Brian G.

    2014-01-01

    Previously, we have described the development of the generic mobile phone data gathering tool, EpiCollect, and an associated web application, providing two-way communication between multiple data gatherers and a project database. This software only allows data collection on the phone using a single questionnaire form that is tailored to the needs of the user (including a single GPS point and photo per entry), whereas many applications require a more complex structure, allowing users to link a series of forms in a linear or branching hierarchy, along with the addition of any number of media types accessible from smartphones and/or tablet devices (e.g., GPS, photos, videos, sound clips and barcode scanning). A much enhanced version of EpiCollect has been developed (EpiCollect+). The individual data collection forms in EpiCollect+ provide more design complexity than the single form used in EpiCollect, and the software allows the generation of complex data collection projects through the ability to link many forms together in a linear (or branching) hierarchy. Furthermore, EpiCollect+ allows the collection of multiple media types as well as standard text fields, increased data validation and form logic. The entire process of setting up a complex mobile phone data collection project to the specification of a user (project and form definitions) can be undertaken at the EpiCollect+ website using a simple ‘drag and drop’ procedure, with visualisation of the data gathered using Google Maps and charts at the project website. EpiCollect+ is suitable for situations where multiple users transmit complex data by mobile phone (or other Android devices) to a single project web database and is already being used for a range of field projects, particularly public health projects in sub-Saharan Africa. However, many uses can be envisaged from education, ecology and epidemiology to citizen science. PMID:25485096

  18. Objective Structured Assessment of Technical Skills (OSATS) evaluation of hysteroscopy training: a prospective study.

    PubMed

    Alici, Ferizan; Buerkle, Bernd; Tempfer, Clemens B

    2014-07-01

    To describe the performance curve of hysteroscopy-naïve probands repeatedly working through a surgery algorithm on a hysteroscopy trainer. We prospectively recruited medical students to a 30-min demonstration session teaching a standardized surgery algorithm. Subjects subsequently performed three training courses immediately after training (T1) and after 24h (T2) and 48h (T3). Skills were recorded with a 20-item Objective Structured Assessment of Technical Skills (OSATS) at T1, T2, and T3. The presence of a sustained OSATS score improvement from T1 to T3 was the primary outcome. Performance time (PT) and self-assessment (SA) were secondary outcomes. Statistics were performed using paired t-tests and multiple linear regression analysis. 92 subjects were included. OSATS scores significantly improved over time from T1 to T2 (15.21±1.95 vs. 16.02±2.06, respectively; p<0.0001) and from T2 to T3 (16.02±2.06 vs. 16.95±1.61, respectively; p<0.0001). The secondary outcomes PT (414±119s vs. 357±88s vs. 304±91s; p<0.0001) and SA (3.02±0.85 vs. 3.80±0.76 vs. 4.41±0.67; p<0.0001) also showed an improvement over time with quicker performance and higher confidence. SA, but not PT, demonstrated construct validity. In a multiple linear regression analysis, gender (odds ratio (OR) 0.96; 95% confidence interval (CI) 0.35-2.71; p=0.9) did not independently influence the likelihood of OSATS score improvement. In a hysteroscopy-naïve population, there is a continuous and sustained improvement of surgical proficiency and confidence after multiple training courses on a hysteroscopy trainer. Serial hysteroscopy trainings may be helpful for teaching hysteroscopy skills. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
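    The paired t-test used for the repeated-measures comparisons can be sketched with hypothetical paired OSATS scores (the study reports its own data only as means ± SD, so the values below are invented):

```python
import math
import statistics

# Hypothetical paired OSATS scores for 6 trainees at T1 and T3
t1 = [14, 15, 16, 13, 15, 17]
t3 = [16, 17, 17, 15, 17, 18]

# Paired test works on the within-subject differences
diffs = [b - a for a, b in zip(t1, t3)]
n = len(diffs)
mean_d = statistics.fmean(diffs)
sd_d = statistics.stdev(diffs)            # sample SD of the differences

# Paired t statistic with df = n - 1
t_stat = mean_d / (sd_d / math.sqrt(n))
print(round(t_stat, 2))
```

    The statistic would then be compared against the t distribution with n - 1 degrees of freedom to obtain a p-value, for instance via `scipy.stats.ttest_rel` in practice.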

  19. Association of Cortical β-Amyloid with Erythrocyte Membrane Monounsaturated and Saturated Fatty Acids in Older Adults at Risk of Dementia.

    PubMed

    Hooper, C; De Souto Barreto, P; Payoux, P; Salabert, A S; Guyonnet, S; Andrieu, S; Sourdet, S; Delrieu, J; Vellas, B

    2017-01-01

We examined the relationships between erythrocyte membrane monounsaturated fatty acids (MUFAs) and saturated fatty acids (SFAs) and cortical β-amyloid (Aβ) load in older adults reporting subjective memory complaints. This is a cross-sectional study using data from the Multidomain Alzheimer Preventive Trial (MAPT), a randomised controlled trial. The source population comprised French community dwellers aged 70 or over reporting subjective memory complaints but free from a diagnosis of clinical dementia. Participants of this study were 61 individuals from the placebo arm of the MAPT trial with data on erythrocyte membrane fatty acid levels and cortical Aβ load. Cortical-to-cerebellar standard uptake value ratios were assessed using [18F] florbetapir positron emission tomography (PET). Fatty acids were measured in erythrocyte cell membranes using gas chromatography. Associations between erythrocyte membrane MUFAs and SFAs and cortical Aβ load were explored using adjusted multiple linear regression models and were considered significant at p ≤ 0.005 (10 comparisons) after correction for multiple testing. We found no significant associations between fatty acids and cortical Aβ load using multiple linear regression adjusted for age, sex, education, cognition, PET-scan to clinical assessment interval, PET-scan to blood collection interval and apolipoprotein E (ApoE) status. The association closest to significance was that between erythrocyte membrane stearic acid and Aβ (B-coefficient 0.03, 95% CI: 0.00, 0.05, p = 0.05). This association, although statistically non-significant, appeared to be stronger amongst ApoE ε4 carriers (B-coefficient 0.04, 95% CI: -0.01, 0.09, p = 0.08) compared to ApoE ε4 non-carriers (B-coefficient 0.02, 95% CI: -0.01, 0.05, p = 0.18) in age- and sex-stratified analysis. Future research, in the form of large longitudinal observational studies, is needed to validate our findings, particularly regarding the potential association of stearic acid with cortical Aβ.
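Covariate-adjusted multiple linear regression of the kind applied above reduces to solving the ordinary least-squares normal equations (X'X)β = X'y. A self-contained pure-Python sketch with invented, noise-free data (in a real analysis the columns would be age, sex, education and the other covariates listed):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) beta = X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Toy data generated from y = 1 + 2*x1 + 3*x2; column 0 is the intercept.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [1, 3, 4, 6, 8]
beta = ols(X, y)   # recovers [1.0, 2.0, 3.0] up to float error
```

Statistical packages (e.g. statsmodels OLS) add the standard errors and p-values reported in abstracts like this one.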

  20. Shared dosimetry error in epidemiological dose-response analyses

    DOE PAGES

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; ...

    2015-03-23

Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. We derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model, which allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
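The "mean dose" analyzed above is simply the per-individual average over the dose realizations; the spread across realizations is what carries the dosimetry uncertainty. A toy illustration of that averaging step, with entirely hypothetical dose values:

```python
from statistics import mean, stdev

# Hypothetical dose realizations (Gy): one row per Monte Carlo realization,
# one column per cohort member, mimicking a multi-realization dosimetry system.
realizations = [
    [0.10, 0.52, 1.30],
    [0.12, 0.47, 1.10],
    [0.08, 0.55, 1.45],
    [0.11, 0.50, 1.25],
]

n_members = len(realizations[0])
# Point estimate used in the naive analysis: mean over realizations.
mean_dose = [mean(r[j] for r in realizations) for j in range(n_members)]
# Between-realization spread, a rough summary of dosimetry uncertainty.
spread = [stdev(r[j] for r in realizations) for j in range(n_members)]
```

The paper's contribution is to correct the information matrix (not the point estimates) using these realizations; that adjustment is not sketched here.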

  1. High doses of folic acid in the periconceptional period and risk of low weight for gestational age at birth in a population based cohort study.

    PubMed

    Navarrete-Muñoz, Eva María; Valera-Gran, Desirée; Garcia-de-la-Hera, Manuela; Gonzalez-Palacios, Sandra; Riaño, Isolina; Murcia, Mario; Lertxundi, Aitana; Guxens, Mònica; Tardón, Adonina; Amiano, Pilar; Vrijheid, Martine; Rebagliato, Marisa; Vioque, Jesus

    2017-11-27

We investigated the association between maternal use of folic acid (FA) during pregnancy and child anthropometric measures at birth. We included 2302 mother-child pairs from a population-based birth cohort in Spain (INMA Project). FA dosages at the first and third trimester of pregnancy were assessed using a specific battery questionnaire and were categorized as non-user, < 1000, 1000-4999, and ≥ 5000 µg/day. Anthropometric measures at birth (weight in grams, length and head circumference in centimetres) were obtained from medical records. Small for gestational age according to weight (SGA-w), length (SGA-l) and head circumference (SGA-hc) were defined using the 10th percentile based on Spanish standardized growth reference charts. Multiple linear and logistic regression analyses were used to explore the association between FA dosages in different stages of pregnancy and child anthropometric measures at birth. In the multiple linear regression analysis, we found a tendency towards a negative association between the use of high dosages of FA (≥ 5000 µg/day) in the periconceptional period of pregnancy and weight at birth compared to mothers who were non-users of FA (β = -73.83; 95% CI -151.71, 4.06). In the multiple logistic regression, a greater risk of SGA-w was also evident among children whose mothers took FA dosages of 1000-4999 (OR = 2.21; 95% CI 1.17, 4.19) and of ≥ 5000 µg/day (OR = 2.32; 95% CI 1.06, 5.08) compared to non-users of FA in the periconceptional period of pregnancy. Our findings suggest that a high dosage of FA (≥ 1000 µg/day) may be associated with an increased risk of SGA-w at birth.

  2. EpiCollect+: linking smartphones to web applications for complex data collection projects.

    PubMed

    Aanensen, David M; Huntley, Derek M; Menegazzo, Mirko; Powell, Chris I; Spratt, Brian G

    2014-01-01

    Previously, we have described the development of the generic mobile phone data gathering tool, EpiCollect, and an associated web application, providing two-way communication between multiple data gatherers and a project database. This software only allows data collection on the phone using a single questionnaire form that is tailored to the needs of the user (including a single GPS point and photo per entry), whereas many applications require a more complex structure, allowing users to link a series of forms in a linear or branching hierarchy, along with the addition of any number of media types accessible from smartphones and/or tablet devices (e.g., GPS, photos, videos, sound clips and barcode scanning). A much enhanced version of EpiCollect has been developed (EpiCollect+). The individual data collection forms in EpiCollect+ provide more design complexity than the single form used in EpiCollect, and the software allows the generation of complex data collection projects through the ability to link many forms together in a linear (or branching) hierarchy. Furthermore, EpiCollect+ allows the collection of multiple media types as well as standard text fields, increased data validation and form logic. The entire process of setting up a complex mobile phone data collection project to the specification of a user (project and form definitions) can be undertaken at the EpiCollect+ website using a simple 'drag and drop' procedure, with visualisation of the data gathered using Google Maps and charts at the project website. EpiCollect+ is suitable for situations where multiple users transmit complex data by mobile phone (or other Android devices) to a single project web database and is already being used for a range of field projects, particularly public health projects in sub-Saharan Africa. However, many uses can be envisaged from education, ecology and epidemiology to citizen science.

  3. Factors Associated With Patient-perceived Hoarseness in Spasmodic Dysphonia Patients.

    PubMed

    Hu, Amanda; Hillel, Al; Meyer, Tanya

    2016-11-01

The American Academy of Otolaryngology-Head and Neck Surgery Clinical Practice Guidelines on Hoarseness distinguishes between hoarseness, which is a symptom perceived by the patient, and dysphonia, which is a diagnosis made by the clinician. Our objective was to determine factors that are associated with patient-perceived hoarseness in spasmodic dysphonia (SD) patients. Retrospective study. Adductor SD patients who presented for botulinum toxin injections from September 2011 to June 2012 were recruited. The main outcome variable, Voice Handicap Index-10 (VHI-10), was used to quantify patient-perceived hoarseness. Clinical data, Hospital Anxiety and Depression Scale (HADS), and VHI-10 were collected. Clinician-perceived dysphonia was measured by a speech-language pathologist with the Consensus Auditory Perceptual Evaluation of Voice (CAPE-V). Statistical analysis included univariate analyses and multiple linear regression. One hundred thirty-nine SD patients had a VHI-10 score of 26.0 ± 7.2 (mean ± standard deviation), disease duration of 10.5 ± 7.0 years, CAPE-V overall score of 43.2 ± 21.8, HADS anxiety score of 6.7 ± 3.8, and HADS depression score of 3.6 ± 2.8. In univariate analyses, there were positive correlations (P < 0.05) between VHI-10 and female gender, CAPE-V overall, older age, HADS anxiety, and depression. There was no correlation with professional voice use and disease duration. In multiple linear regression (R² = 0.178, P < 0.001), age, HADS anxiety, female gender, and CAPE-V were significant. Older age, higher anxiety levels, female gender, and clinician-perceived dysphonia are associated with higher levels of patient-perceived hoarseness in SD patients. Hoarseness is a very personal symptom. Multiple factors determine its self-perception. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. 48 CFR 52.222-43 - Fair Labor Standards Act and Service Contract Act-Price Adjustment (Multiple Year and Option...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and Service Contract Act-Price Adjustment (Multiple Year and Option Contracts). 52.222-43 Section 52... Standards Act and Service Contract Act—Price Adjustment (Multiple Year and Option Contracts). As prescribed...—Price Adjustment (Multiple Year and Option Contracts) (SEP 2009) (a) This clause applies to both...

  5. Combining multiple imputation and meta-analysis with individual participant data

    PubMed Central

    Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M

    2013-01-01

    Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. PMID:23703895
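Rubin's rules, applied at the study level as recommended above, pool the m imputation-specific estimates and inflate the variance by the between-imputation component. A minimal sketch with hypothetical estimates and variances:

```python
from statistics import mean

def rubins_rules(estimates, variances):
    """Pool m multiply-imputed results with Rubin's rules:
    pooled estimate = mean of estimates;
    total variance = W + (1 + 1/m) * B, where W is the mean
    within-imputation variance and B the between-imputation variance."""
    m = len(estimates)
    q_bar = mean(estimates)                     # pooled point estimate
    w = mean(variances)                         # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between
    return q_bar, w + (1 + 1 / m) * b

# Hypothetical study-level estimates and variances from m = 3 imputations:
q, v = rubins_rules([0.50, 0.54, 0.46], [0.010, 0.012, 0.011])
```

The pooled estimate and total variance would then enter the inverse-variance weighted meta-analysis as that study's contribution.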

  6. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows the bounds to be computed efficiently for large parameter values of linear codes.

  7. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model.
The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.

  8. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  9. Simultaneous detection of MCF-7 and HepG2 cells in blood by ICP-MS with gold nanoparticles and quantum dots as elemental tags.

    PubMed

    Li, Xiaoting; Chen, Beibei; He, Man; Wang, Han; Xiao, Guangyang; Yang, Bin; Hu, Bin

    2017-04-15

In this work, we demonstrate a novel method based on inductively coupled plasma mass spectrometry (ICP-MS) detection with gold nanoparticle (Au NP) and quantum dot (QD) labeling for the simultaneous counting of two circulating tumor cell lines (MCF-7 and HepG2 cells) in human blood. MCF-7 and HepG2 cells were captured by magnetic beads coupled with anti-EpCAM and then specifically labeled by CdSe QDs-anti-ASGPR and Au NPs-anti-MUC1, respectively, which were used as signal probes for ICP-MS measurement. Under the optimal experimental conditions, limits of detection of 50 MCF-7 and 89 HepG2 cells and linear ranges of 200-40000 MCF-7 and 300-30000 HepG2 cells were obtained, and the relative standard deviations for seven replicate detections of 800 MCF-7 and HepG2 cells were 4.6% and 5.7%, respectively. This method has the advantages of high sensitivity, low sample consumption and a wide linear range, and can be extended to the simultaneous detection of multiple CTC lines in human peripheral blood. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. LC/MS/MS quantitation assay for pharmacokinetics of naringenin and double peaks phenomenon in rats plasma.

    PubMed

    Ma, Yan; Li, Peibo; Chen, Dawei; Fang, Tiezheng; Li, Haitian; Su, Weiwei

    2006-01-13

A highly sensitive and specific electrospray ionization (ESI) liquid chromatography-tandem mass spectrometry (LC/MS/MS) method for the quantitation of naringenin (NAR), together with an explanation for the double-peak phenomenon, was developed and validated. NAR was extracted from rat plasma and tissues along with the internal standard (IS), hesperidin, with ethyl acetate. The analytes were analyzed in the multiple-reaction-monitoring (MRM) mode as the precursor/product ion pairs of m/z 273.4/151.3 for NAR and m/z 611.5/303.3 for the IS. The assay was linear over the concentration range of 5-2500 ng/mL. The lower limit of quantification was 5 ng/mL, sufficient for plasma pharmacokinetics of NAR in rats. Accuracy and within- and between-run precision showed good reproducibility. When NAR was administered orally, only a small amount, predominantly as its glucuronide, entered the circulation in the plasma. A double-peak phenomenon in the plasma concentration-time curve led to the relatively slow elimination of NAR from plasma. The results showed a linear relationship between the AUC of total NAR and dosage, and the double peaks are mainly due to enterohepatic circulation.

  11. Combinatorial approach toward high-throughput analysis of direct methanol fuel cells.

    PubMed

    Jiang, Rongzhong; Rong, Charles; Chu, Deryn

    2005-01-01

A 40-member array of direct methanol fuel cells (with stationary fuel and convective air supplies) was generated by electrically connecting the fuel cells in series. High-throughput analysis of these fuel cells was realized by fast screening of voltages between the two terminals of a fuel cell at constant current discharge. A large number of voltage-current curves (200) were obtained by screening the voltages through multiple small-current steps. A Gaussian distribution was used to statistically analyze the large number of experimental data. The standard deviation (σ) of voltages of these fuel cells increased linearly with discharge current. The voltage-current curves at various fuel concentrations were simulated with an empirical equation of voltage versus current and a linear equation of σ versus current. The simulated voltage-current curves fitted the experimental data well. With increasing methanol concentration from 0.5 to 4.0 M, the Tafel slope of the voltage-current curves (at σ = 0.0) changed from 28 to 91 mV/decade, the cell resistance from 2.91 to 0.18 Ω, and the power output from 3 to 18 mW/cm².

  12. Modeling Longitudinal Data Containing Non-Normal Within Subject Errors

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan; Glenn, Nancy L.

    2013-01-01

The mission of the National Aeronautics and Space Administration’s (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) of Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, and then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including the sciences, technology, engineering and mathematics fields.

  13. Improving resident performance on standardized assessments of medical knowledge: a retrospective analysis of interventions correlated to American Board of Surgery In-Service Training Examination performance.

    PubMed

    Buckley, Elaine Jayne; Markwell, Stephen; Farr, Debb; Sanfey, Hilary; Mellinger, John

    2015-10-01

American Board of Surgery In-Service Training Examination (ABSITE) scores are used to assess individual progress and predict board pass rates. We reviewed strategies to enhance ABSITE performance and their impact within a surgery residency. Several interventions were introduced from 2010 to 2014. A retrospective review was undertaken evaluating these and correlating them to ABSITE performance. Analyses of variance and linear trends were performed for ABSITE, United States Medical Licensing Examination (USMLE), mock oral, and mock ABSITE scores, followed by post hoc analyses if significant. Results were correlated with core curricular changes. ABSITE mean percentile increased 34% in 4 years, with significant performance improvement and increasing linear trends in postgraduate year (PGY)1 and PGY4 ABSITE scores. Mock ABSITE introduction correlated to significant improvement in ABSITE scores for PGY4 and PGY5. Mock oral introduction correlated with significant improvement in PGY1 and PGY3. Our study demonstrates an improvement in mean program ABSITE percentiles correlating with multiple interventions. Similar strategies may be useful for other programs. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Translation, cross-cultural adaptation and validation of the Persian version of COOP/WONCA charts in Persian-speaking Iranians with multiple sclerosis.

    PubMed

    Taghipour, Morteza; Salavati, Mahyar; Nabavi, Seyed Massood; Akhbari, Behnam; Ebrahimi Takamjani, Ismail; Negahban, Hossein; Rajabzadeh, Fatemeh

    2018-03-01

The aim was the translation, cross-cultural adaptation and validation of a Persian version of the COOP/WONCA charts in Persian-speaking Iranians with multiple sclerosis (MS). The Persian version of the COOP/WONCA charts was developed after a standard forward translation, synthesis and backward translation. A total of 197 subjects with MS participated in this study. They were asked to complete the COOP/WONCA charts and the Short-Form 36 Health Survey (SF-36). The COOP/WONCA charts were re-administered to 50 patients 4 weeks after the first session. The Expanded Disability Status Scale (EDSS) was also scored for each subject by the referring physician. Construct validity was assessed by testing the linear relationship between corresponding domains of the COOP/WONCA charts, the SF-36 and the EDSS. Test-retest reliability was examined using intraclass correlation coefficient (ICC), standard error of measurement (SEM) and minimal detectable change (MDC) values. Related domains of the COOP/WONCA charts and the SF-36 demonstrated strong linear relationships, with Spearman's coefficients ranging from -0.51 to -0.75 (p < 0.05). The physical fitness and daily activity charts also demonstrated strong relationships with the EDSS, with Spearman's coefficients of 0.65 and 0.50, respectively (p < 0.05). The ICC values for most COOP/WONCA domains were acceptable (>0.70), except for the feelings and quality-of-life domains, which were 0.50 and 0.51, respectively. The Persian version of the COOP/WONCA charts was shown to be psychometrically appropriate for evaluating functional level and quality of life in Persian-speaking Iranians with MS. Implications for rehabilitation: COOP/WONCA charts are now available in Persian and demonstrate good psychometric properties. COOP/WONCA charts demonstrate excellent reliability and construct validity in a Persian-speaking Iranian population with MS. Minimal detectable change in COOP/WONCA is now available in MS to guide within- and between-group analyses. 
Knowledge of a wide variety of physical, mental and emotional parameters, as well as the status of patients' symptoms, daily activities and quality of life, helps rehabilitation clinicians and service providers plan preventive and remedial interventions more effectively.

  15. Estimation of stature from the foot and its segments in a sub-adult female population of North India

    PubMed Central

    2011-01-01

Background: Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. Methods: The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. Results: The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. Conclusions: The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. 
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults. PMID:22104433

  16. Estimation of stature from the foot and its segments in a sub-adult female population of North India.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam

    2011-11-21

    Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. 
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults.
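Stature-estimation models of the kind derived above have the closed form stature = a + b × foot measurement, fitted by least squares. A sketch using the standard closed-form slope and intercept, with invented measurements rather than the study's published coefficients:

```python
from statistics import mean

def fit_line(x, y):
    """Least-squares line y = a + b*x (returns intercept a and slope b)."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical foot lengths (cm) and statures (cm), lying on a perfect line:
foot = [22.0, 23.0, 24.0, 25.0]
stature = [150.0, 154.0, 158.0, 162.0]
a, b = fit_line(foot, stature)   # recovers stature = 62 + 4 * foot_length
```

With several foot measurements as predictors, the same idea extends to the stepwise multiple regression models the study found more accurate.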

  17. Establishment and validation of analytical reference panels for the standardization of quantitative BCR-ABL1 measurements on the international scale.

    PubMed

    White, Helen E; Hedges, John; Bendit, Israel; Branford, Susan; Colomer, Dolors; Hochhaus, Andreas; Hughes, Timothy; Kamel-Reid, Suzanne; Kim, Dong-Wook; Modur, Vijay; Müller, Martin C; Pagnano, Katia B; Pane, Fabrizio; Radich, Jerry; Cross, Nicholas C P; Labourier, Emmanuel

    2013-06-01

    Current guidelines for managing Philadelphia-positive chronic myeloid leukemia include monitoring the expression of the BCR-ABL1 (breakpoint cluster region/c-abl oncogene 1, non-receptor tyrosine kinase) fusion gene by quantitative reverse-transcription PCR (RT-qPCR). Our goal was to establish and validate reference panels to mitigate the interlaboratory imprecision of quantitative BCR-ABL1 measurements and to facilitate global standardization on the international scale (IS). Four-level secondary reference panels were manufactured under controlled and validated processes with synthetic Armored RNA Quant molecules (Asuragen) calibrated to reference standards from the WHO and the NIST. Performance was evaluated in IS reference laboratories and with non-IS-standardized RT-qPCR methods. For most methods, percent ratios for BCR-ABL1 e13a2 and e14a2 relative to ABL1 or BCR were robust at 4 different levels and linear over 3 logarithms, from 10% to 0.01% on the IS. The intraassay and interassay imprecision was <2-fold overall. Performance was stable across 3 consecutive lots, in multiple laboratories, and over a period of 18 months to date. International field trials demonstrated the commutability of the reagents and their accurate alignment to the IS within the intra- and interlaboratory imprecision of IS-standardized methods. The synthetic calibrator panels are robust, reproducibly manufactured, analytically calibrated to the WHO primary standards, and compatible with most BCR-ABL1 RT-qPCR assay designs. The broad availability of secondary reference reagents will further facilitate interlaboratory comparative studies and independent quality assessment programs, which are of paramount importance for worldwide standardization of BCR-ABL1 monitoring results and the optimization of current and new therapeutic approaches for chronic myeloid leukemia. © 2013 American Association for Clinical Chemistry.
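The IS reporting that these reference panels support boils down to simple arithmetic: a laboratory's raw BCR-ABL1/control percent ratio is multiplied by a lab-specific conversion factor (CF) obtained by calibration against reference materials such as the panels described above. A minimal sketch, with purely illustrative copy numbers and CF:

```python
import math

def percent_is(bcr_abl1_copies, control_copies, conversion_factor):
    """Percent BCR-ABL1/control ratio converted to the International Scale.

    conversion_factor (CF) is laboratory-specific and derived by calibration
    against IS reference standards; the value used below is illustrative.
    """
    raw_percent = 100.0 * bcr_abl1_copies / control_copies
    return raw_percent * conversion_factor

def molecular_response(pct_is):
    """Log reduction from the 100% IS baseline (e.g. MR3 corresponds to 0.1% IS)."""
    return -math.log10(pct_is / 100.0)

pct = percent_is(bcr_abl1_copies=52, control_copies=48_000, conversion_factor=0.93)
mr = molecular_response(pct)
```

With these hypothetical inputs the sample sits near the 0.1% IS level, i.e. around a 3-log reduction, inside the 10% to 0.01% linear range validated for the panels.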

  18. Microwave ablation with multiple simultaneously powered small-gauge triaxial antennas: results from an in vivo swine liver model.

    PubMed

    Brace, Christopher L; Laeseke, Paul F; Sampson, Lisa A; Frey, Tina M; van der Weide, Daniel W; Lee, Fred T

    2007-07-01

    To prospectively investigate the ability of a single generator to power multiple small-diameter antennas and create large zones of ablation in an in vivo swine liver model. Thirteen female domestic swine (mean weight, 70 kg) were used for the study as approved by the animal care and use committee. A single generator was used to simultaneously power three triaxial antennas at 55 W per antenna for 10 minutes in three groups: a control group where antennas were spaced to eliminate ablation zone overlap (n=6; 18 individual zones of ablation) and experimental groups where antennas were spaced 2.5 cm (n=7) or 3.0 cm (n=5) apart. Animals were euthanized after ablation, and ablation zones were sectioned and measured. A mixed linear model was used to test for differences in size and circularity among groups. Mean (+/-standard deviation) cross-sectional areas of multiple-antenna zones of ablation at 2.5- and 3.0-cm spacing (26.6 cm(2) +/- 9.7 and 32.2 cm(2) +/- 8.1, respectively) were significantly larger than individual ablation zones created with single antennas (6.76 cm(2) +/- 2.8, P<.001) and were 31% (2.5-cm spacing group: multiple antenna mean area, 26.6 cm(2); 3 x single antenna mean area, 20.28 cm(2)) to 59% (3.0-cm spacing group: multiple antenna mean area, 32.2 cm(2); 3 x single antenna mean area, 20.28 cm(2)) larger than 3 times the mean area of the single-antenna zones. Zones of ablation were found to be very circular, and vessels as large as 1.1 cm were completely coagulated with multiple antennas. A single generator may effectively deliver microwave power to multiple antennas. Large volumes of tissue may be ablated and large vessels coagulated with multiple-antenna ablation in the same time as single-antenna ablation. (c) RSNA, 2007.

  19. Estimation of stature using lower limb measurements in Sudanese Arabs.

    PubMed

    Ahmed, Altayeb Abdalla

    2013-07-01

    The estimation of stature from body parts is one of the most vital parts of personal identification in medico-legal autopsies, especially when mutilated and amputated limbs or body parts are found. The aim of this study was to assess the reliability and accuracy of using lower limb measurements for stature estimations. The stature, tibial length, bimalleolar breadth, foot length and foot breadth of 160 right-handed Sudanese Arab subjects, 80 men and 80 women (25-30 years old), were measured. The reliability of measurement acquisition was tested prior to the primary data collection. The data were analysed using basic univariate analysis and linear and multiple regression analyses. The results showed acceptable standards of measurement errors and reliability. Sex differences were significant for all of the measurements. There was a positive correlation coefficient between lower-limb dimensions and stature (P-value < 0.01). The best predictors were tibial length and foot length. The stature prediction accuracy ranged from ± 2.75-5.40 cm, which is comparable to the established skeletal standards for the lower limbs. This study provides new forensic standards for stature estimation using the lower limb measurements of Sudanese Arabs. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  20. The role of family and community involvement in the development and implementation of school nutrition and physical activity policy.

    PubMed

    Kehm, Rebecca; Davey, Cynthia S; Nanney, Marilyn S

    2015-02-01

    Although there are several evidence-based recommendations directed at improving nutrition and physical activity standards in schools, these guidelines have not been uniformly adopted throughout the United States. Consequently, research is needed to identify facilitators promoting schools to implement these recommendations. Therefore, this study analyzed the 2008 School Health Profiles Principal Survey (Profiles) to explore the role of family and community involvement in school nutrition and physical activity standards. Survey data on nutrition and physical activity policies, as well as family and community involvement, were available for 28 states, representing 6732 secondary schools. One-factor analysis of variance (ANOVA), 2-sample t-tests, Pearson's chi-square tests, and multiple logistic and linear regression models were employed in this analysis. Family and community involvement were associated with schools more frequently utilizing healthy eating strategies and offering students healthier food options. Further, involvement was associated with greater support for physical education staff and more intramural sports opportunities for students. Though family and community involvement has the potential to positively influence school nutrition and physical activity policies and practices, such involvement remains low in schools. Increased efforts are needed to encourage collaboration among schools, families, and communities to ensure the highest health standards for all students. © 2015, American School Health Association.

  1. Variables Associated with Communicative Participation in People with Multiple Sclerosis: A Regression Analysis

    ERIC Educational Resources Information Center

    Baylor, Carolyn; Yorkston, Kathryn; Bamer, Alyssa; Britton, Deanna; Amtmann, Dagmar

    2010-01-01

    Purpose: To explore variables associated with self-reported communicative participation in a sample (n = 498) of community-dwelling adults with multiple sclerosis (MS). Method: A battery of questionnaires was administered online or on paper per participant preference. Data were analyzed using multiple linear backward stepwise regression. The…

  2. Standards for Standardized Logistic Regression Coefficients

    ERIC Educational Resources Information Center

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
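Menard reviews several constructions of standardized logistic regression coefficients; one simple and widely used construction multiplies each raw coefficient by its predictor's standard deviation, making effect sizes comparable across differently scaled predictors. A sketch with synthetic data (the specific approach Menard ultimately recommends involves an additional scaling by R and the standard deviation of the predicted logit, not shown here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: x2 has the larger standardized effect, but its raw
# coefficient looks small because its scale is large.
n = 2000
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 5, n)
logit = 0.3 * x1 + 0.2 * x2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit_logistic(X, y, iters=50):
    """Plain Newton-Raphson logistic regression (no intercept, for brevity)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        grad = X.T @ (y - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        b = b + np.linalg.solve(hess, grad)
    return b

X = np.column_stack([x1, x2])
b_raw = fit_logistic(X, y)

# Semi-standardized coefficients: raw coefficient times predictor SD.
b_std = b_raw * X.std(axis=0)
```

The raw coefficients (near 0.3 and 0.2) suggest x1 matters more, while the standardized coefficients (near 0.3 and 1.0) correctly rank x2 as the stronger predictor, which is the utility the abstract attributes to standardization.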

  3. Correlations of neutron multiplicity and γ -ray multiplicity with fragment mass and total kinetic energy in spontaneous fission of Cf 252

    DOE PAGES

    Wang, Taofeng; Li, Guangwu; Zhu, Liping; ...

    2016-01-08

    The dependence of the correlations of neutron multiplicity ν and γ-ray multiplicity Mγ in spontaneous fission of 252Cf on fragment mass A* and total kinetic energy (TKE) has been investigated by employing the ratio Mγ/ν and the form of Mγ(ν). We show for the first time that Mγ and ν have a complex correlation for heavy fragment masses, while there is a positive dependence of Mγ for light fragment masses and for near-symmetric mass splits. The ratio Mγ/ν exhibits strong shell effects for neutron magic number N=50 and near the doubly magic shell closure at Z=50 and N=82. The γ-ray multiplicity Mγ has a maximum for TKE=165-170 MeV. Above 170 MeV, Mγ(TKE) is approximately linear, while it deviates significantly from a linear dependence at lower TKE. The correlation between the average neutron and γ-ray multiplicities can be partly reproduced by model calculations.

  4. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  5. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  6. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    PubMed

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  7. Unambiguous discrimination between linearly dependent equidistant states with multiple copies

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Hai; Ren, Gang

    2018-07-01

    Linearly independent quantum states can be unambiguously discriminated, but linearly dependent ones cannot. For linearly dependent quantum states, however, if C copies of the single states are available, then they may form linearly independent states, and can be unambiguously discriminated. We consider unambiguous discrimination among N = D + 1 linearly dependent states given that C copies are available and that the single copies span a D-dimensional space with equal inner products. The maximum unambiguous discrimination probability is derived for all C with equal a priori probabilities. For this class of linearly dependent equidistant states, our result shows that if C is even then adding a further copy fails to increase the maximum discrimination probability.
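The central claim, that copies of linearly dependent equidistant states can become linearly independent, can be checked with a Gram matrix. For N states with equal pairwise inner product s, the C-fold tensor copies have pairwise inner product s^C, and the states are linearly independent exactly when the Gram matrix is nonsingular. A toy check assuming real inner products (for N = D + 1 states squeezed into D dimensions, the single-copy value must be s = -1/D, which makes the Gram matrix singular; squaring it restores independence):

```python
import numpy as np

def equidistant_gram(n_states, s, copies):
    """Gram matrix of C-fold tensor copies of N equidistant states.

    Copies have pairwise inner product s**copies, so the Gram matrix is
    (1 - s**copies) * I + s**copies * J (J = all-ones matrix).
    """
    sc = s ** copies
    return (1 - sc) * np.eye(n_states) + sc * np.ones((n_states, n_states))

D = 3
N = D + 1
s = -1.0 / D    # the inner product forcing N = D + 1 states into D dimensions

rank1 = np.linalg.matrix_rank(equidistant_gram(N, s, copies=1))
rank2 = np.linalg.matrix_rank(equidistant_gram(N, s, copies=2))
```

Here rank1 is D (dependent single copies) while rank2 is N (independent pairs), since the Gram eigenvalues 1 + (N-1)s^C and 1 - s^C are both nonzero once s^C is positive.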

  8. Update on Linear Mode Photon Counting with the HgCdTe Linear Mode Avalanche Photodiode

    NASA Technical Reports Server (NTRS)

    Beck, Jeffrey D.; Kinch, Mike; Sun, Xiaoli

    2014-01-01

    The behavior of the gain-voltage characteristic of the mid-wavelength infrared cutoff HgCdTe linear mode avalanche photodiode (e-APD) is discussed both experimentally and theoretically as a function of the width of the multiplication region. Data are shown that demonstrate a strong dependence of the gain at a given bias voltage on the width of the n- gain region. Geometrical and fundamental theoretical models are examined to explain this behavior. The geometrical model takes into account the gain-dependent optical fill factor of the cylindrical APD. The theoretical model is based on the ballistic ionization model being developed for the HgCdTe APD. It is concluded that the fundamental theoretical explanation is the dominant effect. A model is developed that combines both the geometrical and fundamental effects. The model also takes into account the effect of the varying multiplication width in the low bias region of the gain-voltage curve. It is concluded that the lower than expected gain and higher excess noise factor seen in the first 2 × 8 HgCdTe linear mode photon counting APD arrays were very likely due to the larger than typical multiplication region length in the photon counting APD pixel design. The implications of these effects on device photon counting performance are discussed.

  9. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
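The contrast between uniform deposition and hot spots can be illustrated with a simple Poisson hit model (an illustration only, not the authors' track-structure and deposition simulation): if the number of hits per cell nucleus is Poisson with mean proportional to the local dose, then P(at least one hit) is nearly linear in dose when the mean is small, while a 100-fold local enhancement saturates the multiple-hit probability.

```python
import numpy as np

def p_at_least(k, lam):
    """P(N >= k) for a Poisson mean lam; k in {1, 2} suffices for this sketch."""
    if k == 1:
        return 1 - np.exp(-lam)
    return 1 - np.exp(-lam) - lam * np.exp(-lam)

dose = np.linspace(0.01, 0.1, 10)    # average dose, arbitrary units

# Uniform deposition: mean hit number proportional to the average dose.
lam_uniform = 0.5 * dose
# Hot spot: the same average dose concentrated 100-fold in a small region.
lam_hotspot = 50.0 * dose

p_single_uniform = p_at_least(1, lam_uniform)   # almost linear in dose
p_multi_hotspot = p_at_least(2, lam_hotspot)    # nearly every cell hit twice
```

The scaling factors here are hypothetical; the qualitative behavior (linear single-hit response for uniform deposition, saturated multiple hits in hot spots) matches the abstract's conclusion.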

  10. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.

  11. Precision magnetic suspension linear bearing

    NASA Technical Reports Server (NTRS)

    Trumper, David L.; Queen, Michael A.

    1992-01-01

    We have presented the design and analyzed the electromechanics of a linear motor suitable for independently controlling two suspension degrees of freedom. This motor, at least on paper, meets the requirements for driving an X-Y stage of 10 kg mass with about 4 m/s^2 acceleration, with travel of several hundred millimeters in X and Y, and with reasonable power dissipation. A conceptual design for such a stage is presented. The theoretical feasibility of linear and planar bearings using single or multiple magnetic suspension linear motors is demonstrated.

  12. A novel approach for prediction of tacrolimus blood concentration in liver transplantation patients in the intensive care unit through support vector regression.

    PubMed

    Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan

    2007-01-01

    Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during intensive care unit (ICU) stay. This is the first study in the ICU in which support vector machines, as a new data modeling technique, are investigated and tested in their prediction capabilities of tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, finally resulting in a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. Prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/ml (standard deviation [SD] 2.47) and 2.38 ng/ml (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/ml (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). 
RBF SVR had the advantage of requiring only 2 input variables to perform this prediction in comparison to 15 and 16 variables needed by linear SVR and MLR, respectively. This is an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior in comparison with an MLR model.
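The evaluation protocol in this study (fivefold cross-validation scored by the mean absolute difference between observed and predicted concentrations) can be sketched with the MLR baseline alone; SVR needs a quadratic-programming solver and is omitted. The data below are a synthetic stand-in for the ICU dataset, with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in: 457 samples, a handful of covariates, and a
# "blood concentration" target (ng/ml) with noise of sd 2.5.
n, p = 457, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=2.5, size=n)

def mlr_cv_mae(X, y, folds=5):
    """Fivefold cross-validated mean absolute error of an MLR predictor."""
    idx = np.arange(len(y))
    rng.shuffle(idx)
    fold_errors = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        Xt = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)
        Xv = np.column_stack([np.ones(len(part)), X[part]])
        fold_errors.append(np.mean(np.abs(y[part] - Xv @ coef)))
    return float(np.mean(fold_errors))

mae = mlr_cv_mae(X, y)
```

With noise of standard deviation 2.5, the cross-validated MAE lands near 2 ng/ml, the same order as the per-method differences reported in the abstract; the SVR models would be scored with the identical loop.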

  13. Screening of Carotenoids in Tomato Fruits by Using Liquid Chromatography with Diode Array-Linear Ion Trap Mass Spectrometry Detection.

    PubMed

    Gentili, Alessandra; Caretti, Fulvia; Ventura, Salvatore; Pérez-Fernández, Virginia; Venditti, Alessandro; Curini, Roberta

    2015-08-26

    This paper presents an analytical strategy for a large-scale screening of carotenoids in tomato fruits by exploiting the potentialities of the triple quadrupole-linear ion trap hybrid mass spectrometer (QqQLIT). The method involves separation on C30 reversed-phase column and identification by means of diode array detection (DAD) and atmospheric pressure chemical ionization-mass spectrometry (APCI-MS). The authentic standards of six model compounds were used to optimize the separative conditions and to predict the chromatographic behavior of untargeted carotenoids. An information dependent acquisition (IDA) was performed with (i) enhanced-mass scan (EMS) as the survey scan, (ii) enhanced-resolution (ER) scan to obtain the exact mass of the precursor ions (16-35 ppm), and (iii) enhanced product ion (EPI) scan as dependent scan to obtain structural information. LC-DAD-multiple reaction monitoring (MRM) chromatograms were also acquired for the identification of targeted carotenoids occurring at low concentrations; for the first time, the relative abundance between the MRM transitions (ion ratio) was used as an extra tool for the MS distinction of structural isomers and the related families of geometrical isomers. The whole analytical strategy was high-throughput, because a great number of experimental data could be acquired with few analytical steps, and cost-effective, because only few standards were used; when applied to characterize some tomato varieties ('Tangerine', 'Pachino', 'Datterino', and 'Camone') and passata of 'San Marzano' tomatoes, our method succeeded in identifying up to 44 carotenoids in the 'Tangerine' variety.

  14. Multicenter Collaborative Quality Assurance Program for the Province of Ontario, Canada: First-Year Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Létourneau, Daniel, E-mail: daniel.letourneau@rmp.uh.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; McNiven, Andrea

    2013-05-01

    Purpose: The objective of this work was to develop a collaborative quality assurance (CQA) program to assess the performance of intensity modulated radiation therapy (IMRT) planning and delivery across the province of Ontario, Canada. Methods and Materials: The CQA program was designed to be a comprehensive end-to-end test that can be completed on multiple planning and delivery platforms. The first year of the program included a head-and-neck (H and N) planning exercise and on-site visit to acquire dosimetric measurements to assess planning and delivery performance. A single dosimeter was used at each institution, and the planned to measured dose agreement was evaluated for both the H and N plan and a standard plan (linear-accelerator specific) that was created to enable a direct comparison between centers with similar infrastructure. Results: CQA program feasibility was demonstrated through participation of all 13 radiation therapy centers in the province. Planning and delivery was completed on a variety of infrastructure (treatment planning systems and linear accelerators). The planning exercise was completed using both static gantry and rotational IMRT, and planned-to-delivered dose agreement (pass rates) for 3%/3-mm gamma evaluation were greater than 90% (92.6%-99.6%). Conclusions: All centers had acceptable results, but variation in planned to delivered dose agreement for the same planning and delivery platform was noted. The upper end of the range will provide an achievable target for other centers through continued quality improvement, aided by feedback provided by the program through the use of standard plans and simple test fields.
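The 3%/3-mm gamma evaluation used to score these pass rates combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified 1-D sketch with global normalization follows (clinical tools operate on 2-D or 3-D dose grids with interpolation; the profiles here are synthetic):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma analysis (global 3%/3 mm by default).

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose_diff/dose_tol)^2 + (distance/dist_tol)^2); the point passes
    if gamma <= 1. Dose differences are normalized to the reference maximum.
    """
    dmax = dose_ref.max()
    pass_count = 0
    for xr, dr in zip(positions, dose_ref):
        dd = (dose_eval - dr) / (dose_tol * dmax)
        dx = (positions - xr) / dist_tol
        if np.sqrt(dd**2 + dx**2).min() <= 1.0:
            pass_count += 1
    return 100.0 * pass_count / len(dose_ref)

x = np.linspace(0, 100, 201)                      # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)               # a smooth reference profile
rate_same = gamma_pass_rate(ref, ref.copy(), x)   # identical profiles pass fully
rate_off = gamma_pass_rate(ref, 1.10 * ref, x)    # a 10% global overdose fails near the peak
```

Identical profiles yield a 100% pass rate, while a 10% scaling error fails in the low-gradient region around the peak, where neither the 3% dose tolerance nor the 3-mm distance search can rescue the point.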

  15. On Holo-Hilbert Spectral Analysis: A Full Informational Spectral Representation for Nonlinear and Non-Stationary Data

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.; Hu, Kun; Yang, Albert C. C.; Chang, Hsing-Chih; Jia, Deng; Liang, Wei-Kuang; Yeh, Jia Rong; Kao, Chu-Lan; Juan, Chi-Huang; Peng, Chung Kang

    2016-01-01

    The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert-Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time-frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities.

  16. On Holo-Hilbert spectral analysis: a full informational spectral representation for nonlinear and non-stationary data

    PubMed Central

    Huang, Norden E.; Hu, Kun; Yang, Albert C. C.; Chang, Hsing-Chih; Jia, Deng; Liang, Wei-Kuang; Yeh, Jia Rong; Kao, Chu-Lan; Juan, Chi-Hung; Peng, Chung Kang; Meijer, Johanna H.; Wang, Yung-Hung; Long, Steven R.; Wu, Zhauhua

    2016-01-01

    The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert–Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time–frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities. PMID:26953180

  17. Dilations and the Equation of a Line

    ERIC Educational Resources Information Center

    Yopp, David A.

    2016-01-01

    Students engage in proportional reasoning when they use covariance and multiple comparisons. Without rich connections to proportional reasoning, students may develop inadequate understandings of linear relationships and the equations that model them. Teachers can improve students' understanding of linear relationships by focusing on realistic…

  18. Linear time-varying models can reveal non-linear interactions of biomolecular regulatory networks using multiple time-series data.

    PubMed

    Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun

    2008-05-15

    Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software
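The advantage of linear time-varying over time-invariant models can be seen with a much simpler illustration than the paper's optimization and phase-portrait algorithm: a windowed least-squares fit recovers a time-varying coefficient a(t) in dx/dt = a(t)x from a single simulated time series, something a single constant coefficient cannot represent. All parameter values below are hypothetical:

```python
import numpy as np

# Simulate dx/dt = a(t) * x with a slowly varying coefficient, then recover
# a(t) from the sampled series by windowed least squares on the derivative.
dt = 0.001
t = np.arange(0, 4, dt)
a_true = 0.5 * np.sin(1.5 * t)            # the hidden time-varying parameter
x = np.empty_like(t)
x[0] = 1.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * a_true[k] * x[k]    # forward-Euler integration

deriv = np.gradient(x, dt)
window = 200
# Per window, the least-squares estimate of a is <x, dx/dt> / <x, x>.
a_hat = np.array([
    np.dot(x[k:k + window], deriv[k:k + window]) /
    np.dot(x[k:k + window], x[k:k + window])
    for k in range(0, len(t) - window, window)
])
```

Each windowed estimate tracks a_true at the window's center, whereas a time-invariant fit would average the sinusoid toward zero; the paper's approach additionally handles multivariate networks and filters the many solutions consistent with the data.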

  19. An Inquiry-Based Linear Algebra Class

    ERIC Educational Resources Information Center

    Wang, Haohao; Posey, Lisa

    2011-01-01

    Linear algebra is a standard undergraduate mathematics course. This paper presents an overview of the design and implementation of an inquiry-based teaching material for the linear algebra course which emphasizes discovery learning, analytical thinking and individual creativity. The inquiry-based teaching material is designed to fit the needs of a…

  20. Earthquake Clustering in Noisy Viscoelastic Systems

    NASA Astrophysics Data System (ADS)

    Dicaprio, C. J.; Simons, M.; Williams, C. A.; Kenner, S. J.

    2006-12-01

Geologic studies show evidence for temporal clustering of earthquakes on certain fault systems. Since post-seismic deformation may result in a variable loading rate on a fault throughout the inter-seismic period, it is reasonable to expect that the rheology of the non-seismogenic lower crust and mantle lithosphere may play a role in controlling earthquake recurrence times. Previously, the role of the rheology of the lithosphere on the seismic cycle had been studied with a one-dimensional spring-dashpot-slider model (Kenner and Simons [2005]). In this study we use the finite element code PyLith to construct a two-dimensional continuum model of a strike-slip fault in an elastic medium overlying one or more linear Maxwell viscoelastic layers, loaded in the far field by a constant velocity boundary condition. Taking advantage of the linear properties of the model, we use the finite element solution to one earthquake as a spatio-temporal Green's function. Multiple Green's function solutions, scaled by the size of each earthquake, are then summed to form an earthquake sequence. When the shear stress on the fault reaches a predefined yield stress it is allowed to slip, relieving all accumulated shear stress. Random variation in the fault yield stress from one earthquake to the next results in a temporally clustered earthquake sequence. The amount of clustering depends on a non-dimensional number, W, called the Wallace number. For models with one viscoelastic layer, W is equal to the standard deviation of the earthquake stress drop divided by the viscosity times the tectonic loading rate. This definition of W is modified from the original one used in Kenner and Simons [2005] by using the standard deviation of the stress drop instead of the mean stress drop. We also use a new, more appropriate metric to measure the amount of temporal clustering of the system. W is the ratio of the viscoelastic relaxation rate of the system to the tectonic loading rate of the system. For values of W greater than the critical value of about 10, the clustered earthquake behavior is caused by rapid reloading of the fault through viscoelastic recycling of stress. A model with multiple viscoelastic layers has more complex clustering behavior than a system with only one viscosity. In this case, multiple clustering modes exist, whose size and mean period are influenced by the viscosities and relative thicknesses of the viscoelastic layers. Kenner, S.J. and Simons, M. (2005), Temporal clustering of major earthquakes along individual faults due to post-seismic reloading, Geophysical Journal International, 160, 179-194.

  1. Integrable generalizations of non-linear multiple three-wave interaction models

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1989-07-01

    Integrable generalizations of multiple three-wave interaction models are investigated in terms of the r-matrix formulation. The Lax representations and complete sets of first integrals in involution are constructed, and the quantization leading to Gaudin's models is discussed.

  2. Characteristics of compound multiplicity in 84Kr36 with various light and heavy targets at 1 GeV per nucleon

    NASA Astrophysics Data System (ADS)

    Chouhan, N. S.; Singh, M. K.; Singh, V.; Pathak, R.

    2013-12-01

    Interactions of 84Kr36 having kinetic energy around 1 GeV per nucleon with the targets of the NIKFI BR-2 nuclear emulsion detector reveal some of the important features of compound multiplicity. The present article shows that the width of the compound multiplicity distributions and the value of the mean compound multiplicity have a linear relationship with the mass number of the projectile colliding system.

  3. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors

    PubMed Central

    Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC

    2015-01-01

    The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB), adapted from Dunnett's multiple comparisons with control (MCC), has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890

  4. MultiDK: A Multiple Descriptor Multiple Kernel Approach for Molecular Discovery and Its Application to Organic Flow Battery Electrolytes.

    PubMed

    Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán

    2017-04-24

    We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type (as opposed to single-type) descriptors, we obtain more relevant features for machine learning. Following the principle of the "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r^2 = 0.92 with a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
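
    The two kernel types named in this abstract, a Tanimoto similarity kernel for binary descriptors and a linear kernel for real-valued ones, can be sketched in a few lines of Python. The fingerprint bits and descriptor values below are invented for illustration and are not from the paper's dataset:

```python
def tanimoto_kernel(a, b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints:
    shared on-bits divided by total on-bits in either vector."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 1.0

def linear_kernel(u, v):
    """Plain dot product for real-valued (non-binary) descriptors."""
    return sum(x * y for x, y in zip(u, v))

# Invented binary substructure fingerprints for two hypothetical molecules
fp1 = [1, 0, 1, 1, 0, 1]
fp2 = [1, 1, 1, 0, 0, 1]
sim = tanimoto_kernel(fp1, fp2)
print(sim)  # 3 shared on-bits / 5 total on-bits = 0.6
```

    A full MultiDK model would combine such kernels (one per descriptor set) inside a kernel-based regressor; the sketch only shows the two kernel functions themselves.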

  5. Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study

    PubMed Central

    Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf

    2015-01-01

    The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200–400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. PMID:25005037

  6. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
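
    The comparison this abstract draws, simple versus multiple linear regression for predicting a sensor reading, can be illustrated with a small pure-Python least-squares fit. The temperature, light, and humidity values below are invented stand-ins, not WSN data from the study:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    X: list of rows, each already including a leading 1 for the intercept.
    Solved with naive Gaussian elimination; fine for tiny systems.
    """
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for p in range(k):                      # forward elimination with pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    coef = [0.0] * k
    for p in range(k - 1, -1, -1):          # back substitution
        coef[p] = (b[p] - sum(A[p][c] * coef[c] for c in range(p + 1, k))) / A[p][p]
    return coef

def sse(coef, rows, y):
    """Sum of squared prediction errors."""
    return sum((yi - sum(c * x for c, x in zip(coef, row))) ** 2
               for row, yi in zip(rows, y))

# Invented readings: humidity predicted from temperature alone vs.
# temperature and light together (a second correlated input)
temp  = [20, 21, 23, 25, 27, 30]
light = [300, 320, 400, 500, 620, 800]
humid = [60.0, 59.0, 55.5, 51.0, 46.5, 40.0]

simple = ols([[1, t] for t in temp], humid)
multi  = ols([[1, t, l] for t, l in zip(temp, light)], humid)
sse_simple = sse(simple, [[1, t] for t in temp], humid)
sse_multi  = sse(multi, [[1, t, l] for t, l in zip(temp, light)], humid)
print(sse_simple, sse_multi)  # the multivariate fit can only do as well or better
```

    Because the simple model is nested inside the multiple one, the multiple regression's in-sample error is never larger, mirroring the paper's finding that multivariate prediction was the more accurate.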

  7. Conceptualizing Matrix Multiplication: A Framework for Student Thinking, an Historical Analysis, and a Modeling Perspective

    ERIC Educational Resources Information Center

    Larson, Christine

    2010-01-01

    Little is known about the variety of ways students conceptualize matrix multiplication, yet this is a fundamental part of most introductory linear algebra courses. My dissertation follows a three-paper format, with the three papers exploring conceptualizations of matrix multiplication from a variety of viewpoints. In these papers, I explore (1)…

  8. Alpha-synuclein levels in patients with multiple system atrophy: a meta-analysis.

    PubMed

    Yang, Fei; Li, Wan-Jun; Huang, Xu-Sheng

    2018-05-01

    This study evaluates the relationship between multiple system atrophy and α-synuclein levels in the cerebrospinal fluid, plasma and neural tissue. Literature search for relevant research articles was undertaken in electronic databases and study selection was based on a priori eligibility criteria. Random-effects meta-analyses of standardized mean differences in α-synuclein levels between multiple system atrophy patients and normal controls were conducted to obtain the overall and subgroup effect sizes. Meta-regression analyses were performed to evaluate the effect of age, gender and disease severity on standardized mean differences. Data were obtained from 11 studies involving 378 multiple system atrophy patients and 637 healthy controls (age: multiple system atrophy patients 64.14 [95% confidence interval 62.05, 66.23] years; controls 64.16 [60.06, 68.25] years; disease duration: 44.41 [26.44, 62.38] months). Cerebrospinal fluid α-synuclein levels were significantly lower in multiple system atrophy patients than in controls but in plasma and neural tissue, α-synuclein levels were significantly higher in multiple system atrophy patients (standardized mean difference: -0.99 [-1.65, -0.32]; p = 0.001). Percentage of male multiple system atrophy patients was significantly positively associated with the standardized mean differences of cerebrospinal fluid α-synuclein levels (p = 0.029) whereas the percentage of healthy males was not associated with the standardized mean differences of cerebrospinal fluid α-synuclein levels (p = 0.920). In multiple system atrophy patients, α-synuclein levels were significantly lower in the cerebrospinal fluid and were positively associated with the male gender.
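
    The effect size pooled in this meta-analysis is a standardized mean difference; one common form is Cohen's d, the difference in group means over the pooled standard deviation. A minimal sketch with invented CSF readings (arbitrary units, not the study's data):

```python
import math

def standardized_mean_difference(group1, group2):
    """Cohen's d: (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented alpha-synuclein readings: patients below controls gives d < 0,
# matching the sign convention of the reported CSF result
patients = [1.1, 0.9, 1.0, 0.8, 1.2]
controls = [1.6, 1.4, 1.8, 1.5, 1.7]
d = standardized_mean_difference(patients, controls)
print(round(d, 2))
```

    A random-effects meta-analysis then weights such per-study values by their inverse variances; that pooling step is omitted here.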

  9. Forest biomass, canopy structure, and species composition relationships with multipolarization L-band synthetic aperture radar data

    NASA Technical Reports Server (NTRS)

    Sader, Steven A.

    1987-01-01

    The effect of forest biomass, canopy structure, and species composition on L-band synthetic aperture radar data at 44 southern Mississippi bottomland hardwood and pine-hardwood forest sites was investigated. Cross-polarization mean digital values for pine forests were significantly correlated with green weight biomass and stand structure. Multiple linear regression with five forest structure variables provided a better integrated measure of canopy roughness and produced highly significant correlation coefficients for hardwood forests using the HV/VV ratio only. Differences in biomass levels and canopy structure, including branching patterns and vertical canopy stratification, were important sources of volume scatter affecting multipolarization radar data. Standardized correction techniques and calibration of aircraft data, in addition to development of canopy models, are recommended for future investigations of forest biomass and structure using synthetic aperture radar.

  10. Application of modern control theory to scheduling and path-stretching maneuvers of aircraft in the near terminal area

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1974-01-01

    A design concept of the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker to allow for takeoffs and between successive landings and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.

  11. Application of factor analysis of infrared spectra for quantitative determination of beta-tricalcium phosphate in calcium hydroxylapatite.

    PubMed

    Arsenyev, P A; Trezvov, V V; Saratovskaya, N V

    1997-01-01

    This work presents a method that allows the phase composition of calcium hydroxylapatite to be determined from its infrared spectrum. The method uses factor analysis of the spectral data of a calibration set of samples to determine the minimal number of factors required to reproduce the spectra within experimental error. Multiple linear regression is applied to establish a correlation between the factor scores of the calibration standards and their properties. The regression equations can then be used to predict the property value of an unknown sample. A regression model was built for determination of the beta-tricalcium phosphate content in hydroxylapatite, and the quality of the model was evaluated statistically. Applying factor analysis to the spectral data increases the accuracy of beta-tricalcium phosphate determination and extends the range of determination toward lower concentrations. Reproducibility of the results is retained.

  12. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function that exhibits spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, and the full spectral search involved in the MUSIC algorithm is accordingly compressed into a limited angular sector. Complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC.

  13. The Mistaken Birth and Adoption of LNT: An Abridged Version

    PubMed Central

    Calabrese, Edward J.

    2017-01-01

    The historical foundations of cancer risk assessment were based on the discovery of X-ray-induced gene mutations by Hermann J. Muller, its transformation into the linear nonthreshold (LNT) single-hit theory, the recommendation of the model by the US National Academy of Sciences, Biological Effects of Atomic Radiation I, Genetics Panel in 1956, and subsequent widespread adoption by regulatory agencies worldwide. This article summarizes substantial recent historical revelations of this history, which profoundly challenge the standard and widely acceptable history of cancer risk assessment, showing multiple significant scientific errors and incorrect interpretations, mixed with deliberate misrepresentation of the scientific record by leading ideologically motivated radiation geneticists. These novel historical findings demonstrate that the scientific foundations of the LNT single-hit model were seriously flawed and should not have been adopted for cancer risk assessment. PMID:29051718

  14. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.

  15. The Primordial Inflation Explorer (PIXIE)

    NASA Technical Reports Server (NTRS)

    Kogut, Alan J.

    2011-01-01

    The Primordial Inflation Explorer is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded non-imaging optics feed a polarizing Fourier Transform Spectrometer to produce a set of interference fringes, proportional to the difference spectrum between orthogonal linear polarizations from the two input beams. The differential design and multiple signal modulations spanning 11 orders of magnitude in time combine to reduce the instrumental signature and confusion from unpolarized sources to negligible levels. PIXIE will map the full sky in Stokes I, Q, and U parameters with angular resolution 2.6 deg and sensitivity 0.2 μK per 1 deg square pixel. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. In addition, the rich PIXIE data will constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to the physical conditions within the interstellar medium of the Galaxy. We describe the PIXIE instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  16. Rapid and sensitive analysis of multiple bioactive constituents in tripterygium glycosides tablets using liquid chromatography coupled with time-of-flight mass spectrometry.

    PubMed

    Su, Meng-xiang; Zhou, Wen-di; Lan, Juan; Di, Bin; Hang, Tai-jun

    2015-03-01

    A simultaneous determination method based on liquid chromatography coupled with time-of-flight mass spectrometry was developed for the analysis of 11 bioactive constituents in tripterygium glycosides tablets, an immune and inflammatory prescription used in China. The analysis was fully optimized on a 1.8 μm particle size C18 column with linear gradient elution, permitting good separation of the 11 analytes and two internal standards in 21 min. The quantitation of each target constituent was carried out using narrow-window extracted ion chromatograms with a ±10 ppm extraction window, yielding good linearity (r^2 > 0.996) over a linear range of 10-1000 ng/mL. The limits of quantitation were low, ranging from 0.25 to 5.02 ng/mL for the 11 analytes, and the precision and repeatability were better than 1.6 and 5.3%, respectively. The recoveries obtained were acceptable, in the range of 93.4-107.4%. This method was successfully applied to quantify the 11 bioactive constituents in commercial samples produced by nine pharmaceutical manufacturers to profile the quality of these preparations. The overall results demonstrate that the contents of the 11 bioactive constituents varied greatly among samples; therefore, the quality, clinical safety, and efficacy of this drug need further research and evaluation. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Regression Analysis of Top of Descent Location for Idle-thrust Descents

    NASA Technical Reports Server (NTRS)

    Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg

    2013-01-01

    In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
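
    The stated link between a 3.9 nmi standard deviation and roughly 80% of predictions falling within 5 nmi follows from a zero-mean normal error model, which a quick computation with the standard normal CDF confirms:

```python
import math

def normal_cdf(x):
    """Standard normal CDF built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

sigma = 3.9   # estimated std. dev. of TOD location error, nmi
bound = 5.0   # error tolerance, nmi

# P(|error| < bound) for a zero-mean normal error with this sigma
p_within = normal_cdf(bound / sigma) - normal_cdf(-bound / sigma)
print(round(p_within, 2))  # about 0.80, matching the abstract's claim
```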

  18. The Digital Shoreline Analysis System (DSAS) Version 4.0 - An ArcGIS extension for calculating shoreline change

    USGS Publications Warehouse

    Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan

    2009-01-01

    The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
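
    Two of the four DSAS rate methods, the endpoint rate and simple linear regression, reduce to short formulas. A minimal sketch with invented transect data (survey years and shoreline positions in meters from a baseline):

```python
def endpoint_rate(years, positions):
    """Rate of change using only the oldest and newest shoreline positions."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    """Least-squares slope fitted through all shoreline positions."""
    n = len(years)
    my = sum(years) / n
    mp = sum(positions) / n
    num = sum((t - my) * (p - mp) for t, p in zip(years, positions))
    den = sum((t - my) ** 2 for t in years)
    return num / den

# Invented transect: a shoreline retreating toward the baseline over time
years = [1950, 1970, 1990, 2005]
pos = [100.0, 95.0, 88.0, 81.0]
epr = endpoint_rate(years, pos)
lrr = linear_regression_rate(years, pos)
print(round(epr, 3), round(lrr, 3))  # both negative: shoreline loss in m/yr
```

    The weighted-regression variant simply scales each term by an inverse-variance weight from the positional uncertainty of each survey, which is why DSAS can also report standard errors and confidence intervals for the regression methods.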

  19. Modeling the effects of AADT on predicting multiple-vehicle crashes at urban and suburban signalized intersections.

    PubMed

    Chen, Chen; Xie, Yuanchang

    2016-06-01

    Annual Average Daily Traffic (AADT) is often considered as a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor- and major-approach AADTs are considered. Three different dependent variables are modeled: total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor- to major-approach AADT has a varying impact on intersection safety and deserves further investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Modeling non-linear growth responses to temperature and hydrology in wetland trees

    NASA Astrophysics Data System (ADS)

    Keim, R.; Allen, S. T.

    2016-12-01

    Growth responses of wetland trees to flooding and climate variations are difficult to model because they depend on multiple, apparently interacting factors, but they are a critical link in hydrological control of wetland carbon budgets. To understand tree growth responses to hydrological forcing more generally, we modeled non-linear responses of tree-ring growth to flooding and climate at sub-annual time steps, using Vaganov-Shashkin response functions. We calibrated the model to six baldcypress tree-ring chronologies from two hydrologically distinct sites in southern Louisiana, and tested several hypotheses about plasticity in wetland tree responses to interacting environmental variables. The model outperformed traditional multiple linear regression. More importantly, the optimized response parameters were generally similar among sites with varying hydrological conditions, suggesting generality in the functions. Model forms that included interacting responses to multiple forcing factors were more effective than single response functions, indicating that the principle of a single limiting factor does not hold in wetlands and that both climatic and hydrological variables must be considered in predicting responses to hydrological or climate change.

  1. Multiplicity fluctuation analysis of target residues in nucleus-emulsion collisions at a few hundred MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Zheng, Su-Hua; Xu, Li-Ling; Miao, Hui-Feng; Wang, Peng

    2014-07-01

    Multiplicity fluctuation of the target evaporated fragments emitted in 290 MeV/u 12C-AgBr, 400 MeV/u 12C-AgBr, 400 MeV/u 20Ne-AgBr and 500 MeV/u 56Fe-AgBr interactions is investigated using the scaled factorial moment method in two-dimensional normal phase space and cumulative variable space, respectively. It is found that in normal phase space the logarithm of the scaled factorial moment increases linearly with the logarithm of the number of phase-space divisions (lnM) for lower q values, while for higher q values it first increases linearly with lnM and then saturates or decreases. In cumulative variable space the logarithm of the scaled factorial moment decreases linearly with increasing lnM. This indicates that no evidence of non-statistical multiplicity fluctuation is observed in our data sets, so any fluctuation suggested by the normal-phase-space analysis is entirely caused by the non-uniformity of the single-particle density distribution.
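
    A scaled factorial moment of the kind used in such analyses can be sketched as follows. This uses one-dimensional binning and the horizontally averaged form for brevity (the study uses two-dimensional phase space), with uniform, purely statistical toy events for which F_q should remain near unity:

```python
import random

def scaled_factorial_moment(events, M, q):
    """Horizontally averaged scaled factorial moment F_q.

    events: list of events, each a list of coordinates in [0, 1).
    F_q = M**(q-1) * < sum_m n_m(n_m-1)...(n_m-q+1) > / <N>**q,
    where n_m is the count in cell m and N the event multiplicity.
    """
    fact_sum = 0.0
    mean_n = 0.0
    for ev in events:
        counts = [0] * M
        for x in ev:
            counts[int(x * M)] += 1          # bin each particle into a cell
        for n in counts:
            term = 1
            for j in range(q):               # falling factorial n(n-1)...(n-q+1)
                term *= n - j
            fact_sum += term
        mean_n += len(ev)
    fact_sum /= len(events)
    mean_n /= len(events)
    return M ** (q - 1) * fact_sum / mean_n ** q

random.seed(0)
# Uniform, independent emission: purely statistical fluctuations, so F_2
# should stay near (N-1)/N = 0.975 for fixed multiplicity N = 40
events = [[random.random() for _ in range(40)] for _ in range(2000)]
f2 = scaled_factorial_moment(events, M=8, q=2)
print(round(f2, 3))
```

    Genuine non-statistical (intermittent) fluctuations would instead make F_q grow as a power of M; the flat result here is the null behavior the abstract reports.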

  2. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest-magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals, while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  3. An enzyme-linked immuno-mass spectrometric assay with the substrate adenosine monophosphate.

    PubMed

    Florentinus-Mefailoski, Angelique; Soosaipillai, Antonius; Dufresne, Jaimie; Diamandis, Eleftherios P; Marshall, John G

    2015-02-01

    An enzyme-linked immuno-mass spectrometric assay (ELIMSA) with the specific detection probe streptavidin conjugated to alkaline phosphatase catalyzed the production of adenosine from the substrate adenosine monophosphate (AMP) for sensitive quantification of prostate-specific antigen (PSA) by mass spectrometry. Adenosine ionized efficiently and was measured to the femtomole range by dilution and direct analysis with micro-liquid chromatography, electrospray ionization, and mass spectrometry (LC-ESI-MS). The LC-ESI-MS assay for adenosine production was shown to be linear and accurate using internal ¹³C¹⁵N adenosine isotope dilution, internal ¹³C¹⁵N adenosine one-point calibration, and external adenosine standard curves with close agreement. The detection limit of LC-ESI-MS for alkaline phosphatase-streptavidin (AP-SA, ∼190,000 Da) was tested by injecting 0.1 μl of a 1 pg/ml solution, i.e., 100 attograms or 526 yoctomole (5.26E-22) of the alkaline-phosphatase labeled probe on column (about 315 AP-SA molecules). The ELIMSA for PSA was linear and showed strong signals across the picogram per milliliter range and could robustly detect PSA from all of the prostatectomy patients and all of the female plasma samples that ranged as low as 70 pg/ml with strong signals well separated from the background and well within the limit of quantification of the AP-SA probe. The results of the ELIMSA assay for PSA are normal and homogeneous when independently replicated with a fresh standard over multiple days, and intra- and inter-day assay variation was less than 10 % of the mean. In a blind comparison, ELIMSA showed excellent agreement with, but was more sensitive than, the present gold standard commercial fluorescent ELISA, or ECL-based detection, of PSA from normal and prostatectomy samples, respectively.
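The one-point internal calibration mentioned here amounts to a simple ratio calculation. A minimal sketch with hypothetical peak areas and spike amount (the function name and numbers are illustrative, not from the paper); it assumes the isotope-labelled internal standard ionizes with the same response as the analyte:

```python
def one_point_calibration(area_analyte, area_internal_std, conc_internal_std):
    """Analyte amount from one isotope-labelled internal standard,
    assuming equal ionization response for analyte and standard."""
    return area_analyte / area_internal_std * conc_internal_std

# Hypothetical peak areas for adenosine and its labelled analogue.
amount = one_point_calibration(area_analyte=8.4e5,
                               area_internal_std=4.2e5,
                               conc_internal_std=50.0)   # fmol spiked
print(amount)  # 100.0 fmol of adenosine on column
```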

  4. Anomalous dielectric relaxation with linear reaction dynamics in space-dependent force fields.

    PubMed

    Hong, Tao; Tang, Zhengming; Zhu, Huacheng

    2016-12-28

    The anomalous dielectric relaxation of disordered reaction with linear reaction dynamics is studied via the continuous time random walk model in the presence of space-dependent electric field. Two kinds of modified reaction-subdiffusion equations are derived for different linear reaction processes by the master equation, including the instantaneous annihilation reaction and the noninstantaneous annihilation reaction. If a constant proportion of walkers is added or removed instantaneously at the end of each step, there will be a modified reaction-subdiffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps, there will be a standard linear reaction kinetics term but a fractional order temporal derivative operating on an anomalous diffusion term. The dielectric polarization is analyzed based on the Legendre polynomials and the dielectric properties of both reactions can be expressed by the effective rotational diffusion function and component concentration function, which is similar to the standard reaction-diffusion process. The results show that the effective permittivity can be used to describe the dielectric properties in these reactions if the chemical reaction time is much longer than the relaxation time.
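The two equation types described here can be written out explicitly. As a sketch under notational assumptions (walker density ρ, generalized diffusion coefficient K_α, linear rate constant k, and the Riemann-Liouville fractional derivative ₀D_t^{1-α}; none of these symbols are taken from the paper): when walkers are added or removed instantaneously at the end of each step the fractional operator acts on both terms, whereas a constant per-capita rate during the waits leaves the reaction term standard:

```latex
% Instantaneous addition/removal at the end of each step:
\frac{\partial \rho}{\partial t}
  = {}_{0}D_{t}^{1-\alpha}\left[ K_{\alpha}\nabla^{2}\rho - k\rho \right]

% Constant per-capita addition/removal during the waiting times:
\frac{\partial \rho}{\partial t}
  = {}_{0}D_{t}^{1-\alpha} K_{\alpha}\nabla^{2}\rho - k\rho
```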

  5. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael

    2011-01-01

    This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…

  6. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use in software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.

  7. Noise limitations in optical linear algebra processors.

    PubMed

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  8. The association of reproductive and lifestyle factors with a score of multiple endogenous hormones

    PubMed Central

    Shafrir, Amy L.; Zhang, Xuehong; Poole, Elizabeth M.; Hankinson, Susan E.; Tworoger, Shelley S.

    2014-01-01

    Introduction: We recently reported that high levels of multiple sex and growth hormones were associated with increased postmenopausal breast cancer risk. Limited research has explored the relationship between reproductive, anthropometric, and lifestyle factors and levels of multiple hormones simultaneously. Methods: This cross-sectional analysis included 738 postmenopausal Nurses' Health Study participants who were controls in a breast cancer nested case-control study and had measured levels of estrone, estradiol, estrone sulfate, testosterone, androstenedione, dehydroepiandrosterone sulfate, prolactin and sex hormone binding globulin (SHBG). A score was created by summing the number of hormones a woman had above (below for SHBG) each hormone's age-adjusted geometric mean. The association between lifestyle, anthropometric, and reproductive exposures and the score was assessed using generalized linear models. Results: The hormone score ranged from 0 to 8 with a mean of 4.0 (standard deviation=2.2). Body mass index (BMI) and alcohol consumption at blood draw were positively associated with the hormone score: a 5 unit increase in BMI was associated with a 0.79 (95%CI: 0.63, 0.95) unit increase in the score (p<0.0001) and each 15 grams/day increase in alcohol consumption was associated with a 0.41 (95%CI: 0.18, 0.63) unit increase in the score (p=0.0004). Family history of breast cancer, age at menarche, and physical activity were not associated with the score. Conclusions: Reproductive breast cancer risk factors were not associated with elevated levels of multiple endogenous hormones, whereas anthropometric and lifestyle factors, particularly BMI and alcohol consumption, tended to be associated with higher levels of multiple hormones. PMID:25048255

  9. The association of reproductive and lifestyle factors with a score of multiple endogenous hormones.

    PubMed

    Shafrir, Amy L; Zhang, Xuehong; Poole, Elizabeth M; Hankinson, Susan E; Tworoger, Shelley S

    2014-10-01

    We recently reported that high levels of multiple sex and growth hormones were associated with increased postmenopausal breast cancer risk. Limited research has explored the relationship between reproductive, anthropometric, and lifestyle factors and levels of multiple hormones simultaneously. This cross-sectional analysis included 738 postmenopausal Nurses' Health Study participants who were controls in a breast cancer nested case-control study and had measured levels of estrone, estradiol, estrone sulfate, testosterone, androstenedione, dehydroepiandrosterone sulfate, prolactin, and sex hormone binding globulin (SHBG). A score was created by summing the number of hormones a woman had above (below for SHBG) each hormone's age-adjusted geometric mean. The association between lifestyle, anthropometric, and reproductive exposures and the score was assessed using generalized linear models. The hormone score ranged from 0 to 8 with a mean of 4.0 (standard deviation = 2.2). Body mass index (BMI) and alcohol consumption at blood draw were positively associated with the hormone score: a 5 unit increase in BMI was associated with a 0.79 (95%CI: 0.63, 0.95) unit increase in the score (p < 0.0001) and each 15 g/day increase in alcohol consumption was associated with a 0.41 (95%CI: 0.18, 0.63) unit increase in the score (p = 0.0004). Family history of breast cancer, age at menarche, and physical activity were not associated with the score. Reproductive breast cancer risk factors were not associated with elevated levels of multiple endogenous hormones, whereas anthropometric and lifestyle factors, particularly BMI and alcohol consumption, tended to be associated with higher levels of multiple hormones.
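The hormone score construction described above is easy to make concrete. A minimal sketch with four of the eight hormones and made-up reference values (in the study, the age-adjusted geometric means come from the cohort itself):

```python
def hormone_score(levels, reference, invert=("SHBG",)):
    """Count hormones above the cohort's age-adjusted geometric mean,
    counting SHBG when it is *below* its mean, as in the study."""
    score = 0
    for name, value in levels.items():
        if name in invert:
            score += value < reference[name]
        else:
            score += value > reference[name]
    return score

# Hypothetical geometric means and one participant's measurements.
cohort_gm = {"estrone": 30.0, "estradiol": 8.0, "testosterone": 22.0, "SHBG": 45.0}
participant = {"estrone": 41.0, "estradiol": 6.5, "testosterone": 30.0, "SHBG": 38.0}
print(hormone_score(participant, cohort_gm))  # 3 (estrone, testosterone, low SHBG)
```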

  10. Libraries for Software Use on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    -specific libraries. Libraries list (name, description): BLAS, Basic Linear Algebra Subroutines (libraries only); LAPACK, Standard Netlib offering for computational linear algebra; and a library for managing hierarchically structured data.

  11. Linearization: Students Forget the Operating Point

    ERIC Educational Resources Information Center

    Roubal, J.; Husek, P.; Stecha, J.

    2010-01-01

    Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…

  12. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  13. Estimation of perceptible water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    NASA Astrophysics Data System (ADS)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content in the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the perceptible water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict perceptible water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
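The two comparison metrics used above are straightforward to compute. A minimal sketch with hypothetical observations and predictions (the numbers and model labels are illustrative only):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical water vapor observations (mm) and two models' predictions.
observed = [20.1, 22.4, 18.9, 25.0, 21.3]
model_a  = [19.8, 22.9, 18.1, 24.2, 21.9]   # e.g. multiple linear regression
model_b  = [20.0, 22.5, 19.2, 24.8, 21.1]   # e.g. support vector machine
for name, pred in (("model A", model_a), ("model B", model_b)):
    print(name, round(rmse(observed, pred), 3), round(mae(observed, pred), 3))
```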

  14. Development of tearing instability in a current sheet forming by sheared incompressible flow

    NASA Astrophysics Data System (ADS)

    Tolman, Elizabeth A.; Loureiro, Nuno F.; Uzdensky, Dmitri A.

    2018-02-01

    Sweet-Parker current sheets in high Lundquist number plasmas are unstable to tearing, suggesting they will not form in physical systems. Understanding magnetic reconnection thus requires study of the stability of a current sheet as it forms. Formation can occur due to sheared, sub-Alfvénic incompressible flows which narrow the sheet. Standard tearing theory (Furth et al. Phys. Fluids, vol. 6 (4), 1963, pp. 459-484, Rutherford, Phys. Fluids, vol. 16 (11), 1973, pp. 1903-1908, Coppi et al. Fizika Plazmy, vol. 2, 1976, pp. 961-966) is not immediately applicable to such forming sheets for two reasons: first, because the flow introduces terms not present in the standard calculation; second, because the changing equilibrium introduces time dependence to terms which are constant in the standard calculation, complicating the formulation of an eigenvalue problem. This paper adapts standard tearing mode analysis to confront these challenges. In an initial phase when any perturbations are primarily governed by ideal magnetohydrodynamics, a coordinate transformation reveals that the flow compresses and stretches perturbations. A multiple scale formulation describes how linear tearing mode theory (Furth et al. Phys. Fluids, vol. 6 (4), 1963, pp. 459-484, Coppi et al. Fizika Plazmy, vol. 2, 1976, pp. 961-966) can be applied to an equilibrium changing under flow, showing that the flow affects the separable exponential growth only implicitly, by making the standard scalings time dependent. In the nonlinear Rutherford stage, the coordinate transformation shows that standard theory can be adapted by adding to the stationary rates time dependence and an additional term due to the strengthening equilibrium magnetic field. Overall, this understanding supports the use of flow-free scalings with slight modifications to study tearing in a forming sheet.

  15. Do health care workforce, population, and service provision significantly contribute to the total health expenditure? An econometric analysis of Serbia.

    PubMed

    Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z

    2016-08-15

    In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90 % of all variations of the observed dependent variable (adjusted R square), and the model is significant (P < 0.001). The growth rate of total health expenditure increased by 1.21 standard deviations for each 1-standard-deviation increase in the health workforce growth rate, decreased by 1.12 standard deviations for each 1-standard-deviation increase in the (negative) population growth rate, and increased by 0.38 standard deviations for each 1-standard-deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). The study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causality relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
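The "standard deviation" effect sizes reported above are standardized regression coefficients: a raw slope rescaled by sd(x)/sd(y), so it reads as SDs of the outcome per SD of the predictor. A minimal single-predictor sketch (the data are invented for illustration):

```python
from statistics import mean, stdev

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def standardized_beta(x, y):
    """Raw slope rescaled so it reads 'SDs of y per SD of x'."""
    return slope(x, y) * stdev(x) / stdev(y)

growth_x = [1.0, 2.0, 3.0, 4.0]
growth_y = [2.0, 4.0, 6.0, 8.0]               # perfectly linear toy series
print(standardized_beta(growth_x, growth_y))  # 1.0 for a perfect relation
```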

  16. Schwarz maps of algebraic linear ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Sanabria Malagón, Camilo

    2017-12-01

    A linear ordinary differential equation is called algebraic if all its solutions are algebraic over its field of definition. In this paper we solve the problem of finding closed-form solutions to algebraic linear ordinary differential equations in terms of standard equations. Furthermore, we obtain a method to compute all algebraic linear ordinary differential equations with rational coefficients by studying their associated Schwarz maps through Picard-Vessiot theory.

  17. Research of Medical Expenditure among Inpatients with Unstable Angina Pectoris in a Single Center

    PubMed Central

    Wu, Suo-Wei; Pan, Qi; Chen, Tong; Wei, Liang-Yu; Xuan, Yong; Wang, Qin; Li, Chao; Song, Jing-Chen

    2017-01-01

    Background: With the rising incidence as well as the medical expenditure among patients with unstable angina pectoris, the research aimed to investigate the inpatient medical expenditure through the combination of diagnosis-related groups (DRGs) among patients with unstable angina pectoris in a Grade A tertiary hospital to conduct the referential standards of medical costs for the diagnosis. Methods: Single-factor analysis and multiple linear stepwise regression method were used to investigate 3933 cases between 2014 and 2016 in Beijing Hospital (China) whose main diagnosis was defined as unstable angina pectoris to determine the main factors influencing the inpatient medical expenditure, and decision tree method was adopted to establish the model of DRGs grouping combinations. Results: The major influential factors of inpatient medical expenditure included age, operative method, therapeutic effects as well as comorbidity and complications (CCs) of the disease, and the 3933 cases were divided into ten DRGs by four factors: age, CCs, therapeutic effects, and the type of surgery with corresponding inpatient medical expenditure standards setup. Data of nonparametric test on medical costs among different groups were all significant (P < 0.001, by Kruskal-Wallis test), with R2 = 0.53 and coefficient of variation (CV) = 0.524. Conclusions: The classification of DRGs by adopting the type of surgery as the main branch node to develop cost control standards in inpatient treatment of unstable angina pectoris is conducive in standardizing the diagnosis and treatment behaviors of the hospital and reducing economic burdens among patients. PMID:28639566

  18. Research of Medical Expenditure among Inpatients with Unstable Angina Pectoris in a Single Center.

    PubMed

    Wu, Suo-Wei; Pan, Qi; Chen, Tong; Wei, Liang-Yu; Xuan, Yong; Wang, Qin; Li, Chao; Song, Jing-Chen

    2017-07-05

    With the rising incidence as well as the medical expenditure among patients with unstable angina pectoris, the research aimed to investigate the inpatient medical expenditure through the combination of diagnosis-related groups (DRGs) among patients with unstable angina pectoris in a Grade A tertiary hospital to conduct the referential standards of medical costs for the diagnosis. Single-factor analysis and multiple linear stepwise regression method were used to investigate 3933 cases between 2014 and 2016 in Beijing Hospital (China) whose main diagnosis was defined as unstable angina pectoris to determine the main factors influencing the inpatient medical expenditure, and decision tree method was adopted to establish the model of DRGs grouping combinations. The major influential factors of inpatient medical expenditure included age, operative method, therapeutic effects as well as comorbidity and complications (CCs) of the disease, and the 3933 cases were divided into ten DRGs by four factors: age, CCs, therapeutic effects, and the type of surgery with corresponding inpatient medical expenditure standards setup. Data of nonparametric test on medical costs among different groups were all significant (P < 0.001, by Kruskal-Wallis test), with R2 = 0.53 and coefficient of variation (CV) = 0.524. The classification of DRGs by adopting the type of surgery as the main branch node to develop cost control standards in inpatient treatment of unstable angina pectoris is conducive in standardizing the diagnosis and treatment behaviors of the hospital and reducing economic burdens among patients.

  19. A robust approach to measuring the detective quantum efficiency of radiographic detectors in a clinical setting

    NASA Astrophysics Data System (ADS)

    McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.

    2012-03-01

    The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered as a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a cesium iodide based flat panel system that simultaneously stores a raw (linear) and processed (non-linear) image for each exposure. It was found that the resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and that the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, the method may be adequate for QA programs.
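For a linear, shift-invariant detector the DQE is commonly computed from three measured quantities as DQE(u) = G² · MTF²(u) · q̄ / NPS(u), with G the large-area gain, q̄ the incident photon fluence and NPS the noise power spectrum. A sketch with hypothetical curve values (all numbers invented for illustration):

```python
def dqe(mtf, nps, gain, fluence):
    """DQE(u) = gain^2 * MTF(u)^2 * fluence / NPS(u) at each frequency."""
    return [gain ** 2 * m ** 2 * fluence / w for m, w in zip(mtf, nps)]

# Hypothetical measurements at a few spatial frequencies (cycles/mm).
freqs = [0.5, 1.0, 2.0, 3.0]
mtf   = [0.90, 0.72, 0.45, 0.28]
nps   = [3.1e6, 2.4e6, 1.4e6, 9.0e5]            # signal-units^2 * mm^2
curve = dqe(mtf, nps, gain=1.0, fluence=2.5e6)  # quanta per mm^2
print([round(v, 2) for v in curve])  # [0.65, 0.54, 0.36, 0.22]
```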

  20. MABE multibeam accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasti, D.E.; Ramirez, J.J.; Coleman, P.D.

    1985-01-01

    The Megamp Accelerator and Beam Experiment (MABE) was the technology development testbed for the multiple beam, linear induction accelerator approach for Hermes III, a new 20 MeV, 0.8 MA, 40 ns accelerator being developed at Sandia for gamma-ray simulation. Experimental studies of a high-current, single-beam accelerator (8 MeV, 80 kA), and a nine-beam injector (1.4 MeV, 25 kA/beam) have been completed, and experiments on a nine-beam linear induction accelerator are in progress. A two-beam linear induction accelerator is designed and will be built as a gamma-ray simulator to be used in parallel with Hermes III. The MABE pulsed power system and accelerator for the multiple beam experiments are described. Results from these experiments and the two-beam design are discussed. 11 refs., 6 figs.

  1. A multiple linear regression analysis of hot corrosion attack on a series of nickel base turbine alloys

    NASA Technical Reports Server (NTRS)

    Barrett, C. A.

    1985-01-01

    Multiple linear regression analysis was used to determine an equation for estimating hot corrosion attack for a series of Ni-base cast turbine alloys. The U transform (i.e., sin⁻¹[(% A/100)^(1/2)]) was shown to give the best estimate of the dependent variable, y. A complete second-degree equation is described for the "centered" weight chemistries for the elements Cr, Al, Ti, Mo, W, Cb, Ta, and Co. In addition, linear terms for the minor elements C, B, and Zr were added for a basic 47-term equation. The best reduced equation was determined by the stepwise selection method with essentially 13 terms. The Cr term was found to be the most important, accounting for 60 percent of the explained variability in hot corrosion attack.
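Assuming the U transform above is the arcsine square-root transform sin⁻¹√(A/100), a standard variance-stabilizing transform for percentage-type responses, it can be sketched as:

```python
import math

def u_transform(percent_attack):
    """Arcsine square-root transform of a percentage: U = asin(sqrt(A/100))."""
    return math.asin(math.sqrt(percent_attack / 100.0))

# The transform compresses the ends of the 0-100 % scale:
print(round(u_transform(50.0), 4))   # 0.7854 (= pi/4)
print(round(u_transform(100.0), 4))  # 1.5708 (= pi/2)
```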

  2. Plasma adiponectin concentrations are associated with dietary glycemic index in Malaysian patients with type 2 diabetes.

    PubMed

    Loh, Beng-In; Sathyasuryan, Daniel Robert; Mohamed, Hamid Jan Jan

    2013-01-01

    Adiponectin, an adipocyte-derived hormone, has been implicated in the control of blood glucose and chronic inflammation in type 2 diabetes. However, limited studies have evaluated dietary factors on plasma adiponectin levels, especially among type 2 diabetic patients in Malaysia. The aim of this study was to investigate the influence of dietary glycemic index on plasma adiponectin concentrations in patients with type 2 diabetes. A cross-sectional study was conducted in 305 type 2 diabetic patients aged 19-75 years from the Penang General Hospital, Malaysia. Socio-demographic information was collected using a standard questionnaire while dietary details were determined by using a pre-validated semi-quantitative food frequency questionnaire. Anthropometry measurement included weight, height, BMI and waist circumference. Plasma adiponectin concentrations were measured using a commercial ELISA kit. Data were analyzed using multiple linear regression. After multivariate adjustment, dietary glycemic index was inversely associated with plasma adiponectin concentrations (β = -0.272; 95% CI: -0.262, -0.094; p < 0.001). Each 1-unit increase in dietary glycemic index was associated with a 0.3 μg/mL reduction in plasma adiponectin. Thirty-two percent (31.9%) of the variation in adiponectin concentrations was explained by age, sex, race, smoking status, BMI, waist circumference, HDL-C, triglycerides, magnesium, fiber and dietary glycemic index according to the multiple linear regression model (R2=0.319). These results support the hypothesis that dietary glycemic index influences plasma adiponectin concentrations in patients with type 2 diabetes. Controlled clinical trials are required to confirm our findings and to elucidate the underlying mechanism.

  3. Coupling carbon nanotube film microextraction with desorption corona beam ionization for rapid analysis of Sudan dyes (I-IV) and Rhodamine B in chilli oil.

    PubMed

    Chen, Di; Huang, Yun-Qing; He, Xiao-Mei; Shi, Zhi-Guo; Feng, Yu-Qi

    2015-03-07

    A rapid analysis method by coupling carbon nanotube film (CNTF) microextraction with desorption corona beam ionization (DCBI) was developed for the determination of Sudan dyes (I-IV) and Rhodamine B in chilli oil samples. Typically, CNTF was immersed into the diluted solution of chilli oil for extraction, which was then placed directly under the visible plasma beam tip of the DCBI source for desorption and ionization. Under optimized conditions, five dyes were simultaneously determined using this method. Results showed that the analytes were enriched by the CNTF through the π-π interactions, and the proposed method could significantly improve the sensitivities of these compounds, compared to the direct analysis by DCBI-MS/MS. The method showed a linear range of 0.08-12.8 μg g⁻¹ with good linearity (R² > 0.93) in multiple reaction monitoring (MRM) mode. Satisfactory reproducibility was achieved. Relative standard deviations (RSDs) were less than 20.0%. The recoveries ranged from 80.0 to 110.0%, and the limits of detection (LODs) were in the range of 1.4-21 ng g⁻¹. Finally, the feasibility of the method was further exhibited by the determination of five illegal dyes in chilli powder. These results demonstrate that the proposed method consumes less time and solvent than conventional HPLC-based methods and avoids the contamination of chromatographic column and ion source from non-volatile oil. With the help of a 72-well shaker, multiple samples could be treated simultaneously, which ensures high throughput for the entire pretreatment process. In conclusion, it provides a rapid and high-throughput approach for the determination of such illicit additions in chilli products.

  4. Ghrelin, leptin and insulin in cirrhotic children and adolescents: relationship with cirrhosis severity and nutritional status.

    PubMed

    Dornelles, Cristina T L; Goldani, Helena A S; Wilasco, Maria Inês A; Maurer, Rafael L; Kieling, Carlos O; Porowski, Marilene; Ferreira, Cristina T; Santos, Jorge L; Vieira, Sandra M G; Silveira, Themis R

    2013-01-10

    Ghrelin, leptin, and insulin concentrations are involved in the control of food intake and they seem to be associated with anorexia-cachexia in cirrhotic patients. The present study aimed to investigate the relationship between the nutritional status and fasting ghrelin, leptin and insulin concentrations in pediatric cirrhotic patients. Thirty-nine patients with cirrhosis and 39 healthy controls aged 0-15 years matched by sex and age were enrolled. Severity of liver disease was assessed by Child-Pugh classification, and Pediatric for End Stage Liver Disease (PELD) or Model for End-stage Liver Disease (MELD) scores. Blood samples were collected from patients and controls to assay total ghrelin, acyl ghrelin, leptin and insulin by using a commercial ELISA kit. Anthropometry parameters used were standard deviation score of height-for-age and triceps skinfold thickness-for-age ratio. A multiple linear regression analysis was used to determine the correlation between dependent and independent variables. Acyl ghrelin was significantly lower in cirrhotic patients than in controls [142 (93-278) pg/mL vs 275 (208-481) pg/mL, P=0.001]. After multiple linear regression analysis, total ghrelin and acyl ghrelin showed an inverse correlation with age; acyl ghrelin was associated with the severity of cirrhosis and des-acyl ghrelin with PELD or MELD scores ≥15. Leptin was positively correlated with gender and anthropometric parameters. Insulin was not associated with any variable. Low acyl ghrelin and high des-acyl ghrelin concentrations were associated with cirrhosis severity, whereas low leptin concentration was associated with undernourishment in children and adolescents with cirrhosis.

  5. Does the utilization of dental services associate with masticatory performance in a Japanese urban population?: the Suita study

    PubMed Central

    Kikui, Miki; Kida, Momoyo; Kosaka, Takayuki; Yamamoto, Masaaki; Yoshimuta, Yoko; Yasui, Sakae; Nokubi, Takashi; Maeda, Yoshinobu; Kokubo, Yoshihiro; Watanabe, Makoto; Miyamoto, Yoshihiro

    2015-01-01

    There are numerous reports on the relationship between regular utilization of dental care services and oral health, but most are based on questionnaires and subjective evaluation. Few have objectively evaluated masticatory performance and its relationship to utilization of dental care services. The purpose of this study was to identify the effect of regular utilization of dental services on masticatory performance. The subjects consisted of 1804 general residents of Suita City, Osaka Prefecture (760 men and 1044 women, mean age 66.5 ± 7.9 years). Regular utilization of dental services and oral hygiene habits (frequency of toothbrushing and use of interdental aids) was surveyed, and periodontal status, occlusal support, and masticatory performance were measured. Masticatory performance was evaluated by a chewing test using gummy jelly. The correlation between age, sex, regular dental utilization, oral hygiene habits, periodontal status or occlusal support, and masticatory performance was analyzed using Spearman's correlation test and t‐test. In addition, multiple linear regression analysis was carried out to investigate the relationship of regular dental utilization with masticatory performance after controlling for other factors. Masticatory performance was significantly correlated to age when using Spearman's correlation test, and to regular dental utilization, periodontal status, or occlusal support with t‐test. Multiple linear regression analysis showed that regular utilization of dental services was significantly related to masticatory performance even after adjusting for age, sex, oral hygiene habits, periodontal status, and occlusal support (standardized partial regression coefficient β = 0.055). These findings suggested that the regular utilization of dental care services is an important factor influencing masticatory performance in a Japanese urban population. PMID:29744141

  6. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
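    The predictive mean matching idea described above can be sketched in a few lines. This is a hedged, minimal illustration of the single-nearest-donor variant (function and variable names are invented for the example, not taken from the paper): each missing case borrows the observed value of the case whose predicted mean is closest, so imputations stay on the raw, possibly skewed scale.

```python
def pmm_impute(y_obs, yhat_obs, yhat_mis):
    """Predictive mean matching, single nearest donor: for each missing
    case, find the observed case whose *predicted* value is closest, and
    donate that case's *observed* value (so no implausible values arise)."""
    imputed = []
    for ym in yhat_mis:
        donor = min(range(len(yhat_obs)), key=lambda i: abs(yhat_obs[i] - ym))
        imputed.append(y_obs[donor])
    return imputed

# Skewed observed values are donated as-is, keeping imputations on the raw scale.
filled = pmm_impute(y_obs=[0.5, 1.2, 7.9], yhat_obs=[0.8, 1.5, 6.0], yhat_mis=[5.5])
```

    In the full type 1 procedure, `yhat_mis` would come from parameters drawn from their posterior while `yhat_obs` uses point estimates; the matching step itself is as above.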

  7. Does the utilization of dental services associate with masticatory performance in a Japanese urban population?: the Suita study.

    PubMed

    Kikui, Miki; Ono, Takahiro; Kida, Momoyo; Kosaka, Takayuki; Yamamoto, Masaaki; Yoshimuta, Yoko; Yasui, Sakae; Nokubi, Takashi; Maeda, Yoshinobu; Kokubo, Yoshihiro; Watanabe, Makoto; Miyamoto, Yoshihiro

    2015-12-01

    There are numerous reports on the relationship between regular utilization of dental care services and oral health, but most are based on questionnaires and subjective evaluation. Few have objectively evaluated masticatory performance and its relationship to utilization of dental care services. The purpose of this study was to identify the effect of regular utilization of dental services on masticatory performance. The subjects consisted of 1804 general residents of Suita City, Osaka Prefecture (760 men and 1044 women, mean age 66.5 ± 7.9 years). Regular utilization of dental services and oral hygiene habits (frequency of toothbrushing and use of interdental aids) were surveyed, and periodontal status, occlusal support, and masticatory performance were measured. Masticatory performance was evaluated by a chewing test using gummy jelly. The correlation between age, sex, regular dental utilization, oral hygiene habits, periodontal status or occlusal support, and masticatory performance was analyzed using Spearman's correlation test and the t-test. In addition, multiple linear regression analysis was carried out to investigate the relationship of regular dental utilization with masticatory performance after controlling for other factors. Masticatory performance was significantly correlated with age when using Spearman's correlation test, and with regular dental utilization, periodontal status, or occlusal support with the t-test. Multiple linear regression analysis showed that regular utilization of dental services was significantly related to masticatory performance even after adjusting for age, sex, oral hygiene habits, periodontal status, and occlusal support (standardized partial regression coefficient β = 0.055). These findings suggested that the regular utilization of dental care services is an important factor influencing masticatory performance in a Japanese urban population.
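    Spearman's correlation, used in the analysis above, is Pearson's correlation computed on ranks; for tie-free data it reduces to a closed form. A compact illustrative sketch, not the study's code:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free data:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

    Because only ranks enter the formula, any monotone increasing relationship yields rho = 1, which is why the test suits ordinal measures like utilization categories.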

  8. Looking for the Perfect Mentor.

    PubMed

    Sá, Ana Pinheiro; Teixeira-Pinto, Cristina; Veríssimo, Rafaela; Vilas-Boas, Andreia; Firmino-Machado, João

    2015-01-01

    The authors established the profile of the Internal Medicine clinical teachers in Portugal, aiming to define a future interventional strategy plan as adequate as possible to the target group and to the problems identified by the residents. Observational, cross-sectional, analytic study. An online anonymous questionnaire was defined, evaluating the demographic characteristics of the clinical teachers, their path in Internal Medicine and their involvement in the residents' learning process. We collected 213 valid questionnaires, making for an estimated response rate of 28.4%. Median global satisfaction with the clinical teacher was 4.52 ± 1.33 points, and the classification of the relationship between resident and clinical teacher was 4.86 ± 1.04 points. The perfect clinical teacher is defined by high standards of dedication and responsibility (4.9 ± 1.37 points), practical (4.8 ± 1.12 points) and theoretical skills (4.8 ± 1.07 points). The multiple linear regression model allowed us to determine predictors of the residents' satisfaction with their clinical teacher, explaining 82.5% of the variation in satisfaction with the clinical teacher (R2 = 0.83; adjusted R2 = 0.82). Postgraduate medical education consists of an interaction between several areas of knowledge and intervening variables in the learning process, with the clinical teacher in the central role. Overall, pedagogical abilities were the most valued by the Internal Medicine residents regarding their clinical teacher, as determinants of a quality residency. This study demonstrates the critical relevance of the clinical teacher to the satisfaction of residents with their residency. The established multiple linear regression model highlights the impact of the clinical and pedagogical relationship with the clinical teacher on residents' satisfaction.

  9. Relation of dietary and lifestyle traits to difference in serum leptin of Japanese in Japan and Hawaii: The INTERLIPID Study

    PubMed Central

    Nakamura, Yasuyuki; Ueshima, Hirotsugu; Okuda, Nagako; Miura, Katsuyuki; Kita, Yoshikuni; Okamura, Tomonori; Turin, Tanvir C; Okayama, Akira; Rodriguez, Beatriz; Curb, J David; Stamler, Jeremiah

    2010-01-01

    Background and Aims Previously, we found significantly higher serum leptin in Japanese-Americans in Hawaii than Japanese in Japan. We investigated whether differences in dietary and other lifestyle factors explain higher serum leptin concentrations in Japanese living a Western lifestyle in Hawaii compared with Japanese in Japan. Methods and Results Serum leptin and nutrient intakes were examined by standardized methods in men and women ages 40 to 59 years from two population samples, one Japanese-American in Hawaii (88 men, 94 women), the other Japanese in central Japan (123 men, 111 women). Multiple linear regression models were used to assess the role of dietary and other lifestyle traits in accounting for the serum leptin difference between Hawaii and Japan. Mean leptin was significantly higher in Hawaii than Japan (7.2±6.8 vs 3.7±2.3 ng/ml in men, P<0.0001; 12.8±6.6 vs 8.5±5.0 ng/ml in women, P<0.0001). In men, higher BMI in Hawaii explained over 90% of the difference in serum leptin; in women, only 47%. In multiple linear regression analyses in women, further adjustment for physical activity and dietary factors (alcohol, dietary fiber, iron) produced a further reduction in the coefficient for the difference, total reduction 70.7%; the P value for the Hawaii-Japan difference became 0.126. Conclusion The significantly higher mean leptin concentration in Hawaii than Japan may be attributable largely to differences in BMI. Differences in nutrient intake between the two samples bore only a modest relationship to the leptin difference. PMID:20678905

  10. New Method for the Approximation of Corrected Calcium Concentrations in Chronic Kidney Disease Patients.

    PubMed

    Kaku, Yoshio; Ookawara, Susumu; Miyazawa, Haruhisa; Ito, Kiyonori; Ueda, Yuichirou; Hirai, Keiji; Hoshino, Taro; Mori, Honami; Yoshida, Izumi; Morishita, Yoshiyuki; Tabei, Kaoru

    2016-02-01

    The following conventional calcium correction formula (Payne) is broadly applied for serum calcium estimation: corrected total calcium (TCa) (mg/dL) = TCa (mg/dL) + (4 - albumin (g/dL)); however, it is inapplicable to chronic kidney disease (CKD) patients. A total of 2503 venous samples were collected from 942 all-stage CKD patients, and levels of TCa (mg/dL), ionized calcium ([iCa(2+)] mmol/L), phosphate (mg/dL), albumin (g/dL), and pH, and other clinical parameters were measured. We assumed corrected TCa (the gold standard) to be equal to eight times the iCa(2+) value (measured corrected TCa). Then, we performed stepwise multiple linear regression analysis by using the clinical parameters and derived a simple formula for corrected TCa approximation. The following formula was devised from multiple linear regression analysis: Approximated corrected TCa (mg/dL) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - phosphate) + 0.3. Receiver operating characteristic curve analysis illustrated that the areas under the curve of approximated corrected TCa for detection of measured corrected TCa ≥ 8.4 mg/dL and ≤ 10.4 mg/dL were 0.994 and 0.919, respectively. The intraclass correlation coefficient demonstrated superior agreement using this new formula compared to other formulas (new formula: 0.826, Payne: 0.537, Jain: 0.312, Portale: 0.582, Ferrari: 0.362). In CKD patients, TCa correction should include not only albumin but also pH and phosphate. The approximated corrected TCa from this formula demonstrates superior agreement with the measured corrected TCa in comparison to other formulas. © 2016 International Society for Apheresis, Japanese Society for Apheresis, and Japanese Society for Dialysis Therapy.
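    The approximation formula above translates directly into code. A minimal sketch for checking the arithmetic (function names and example values are illustrative, not from the paper):

```python
def approx_corrected_tca(tca, albumin, ph, phosphate):
    """Approximated corrected total calcium (mg/dL) per the study's formula:
    TCa + 0.25*(4 - albumin) + 4*(7.4 - pH) + 0.1*(6 - phosphate) + 0.3."""
    return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - phosphate) + 0.3

def payne_corrected_tca(tca, albumin):
    """Conventional Payne correction, for comparison: TCa + (4 - albumin)."""
    return tca + (4 - albumin)

# At albumin 4 g/dL, pH 7.4 and phosphate 6 mg/dL, the new formula
# reduces to TCa + 0.3, while Payne's reduces to TCa.
```

    The pH term has the largest coefficient, reflecting the paper's conclusion that correction in CKD should account for pH and phosphate, not albumin alone.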

  11. Polarimetric measures of selected variable stars

    NASA Astrophysics Data System (ADS)

    Elias, N. M., II; Koch, R. H.; Pfeiffer, R. J.

    2008-10-01

    Aims: The purpose of this paper is to summarize and interpret unpublished optical polarimetry for numerous program stars that were observed over the past decades at the Flower and Cook Observatory (FCO), University of Pennsylvania. We also make the individual calibrated measures available for long-term comparisons with new data. Methods: We employ three techniques to search for intrinsic variability within each dataset. First, when the observations for a given star and filter are numerous enough and when a period has been determined previously via photometry or spectroscopy, the polarimetric measures are plotted versus phase. If a statistically significant pattern appears, we attribute it to intrinsic variability. Second, we compare means of the FCO data to means from other workers. If they are statistically different, we conclude that the object exhibits long-term intrinsic variability. Third, we calculate the standard deviation for each program star and filter and compare it to the standard deviation estimated from comparable polarimetric standards. If the standard deviation of the program star is at least three times the value estimated from the polarimetric standards, the former is considered intrinsically variable. All of these statements are strengthened when variability appears in multiple filters. Results: We confirm the existence of an electron-scattering cloud at L1 in the β Per system, and find that LY Aur and HR 8281 possess scattering envelopes. Intrinsic polarization was detected for Nova Cas 1993 as early as day +3. We detected polarization variability near the primary eclipse of 32 Cyg. There is marginal evidence for polarization variability of the β Cephei-type star γ Peg. The other objects of this class exhibited no variability. All but one of the β Cephei objects (ES Vul) fall on a tight linear relationship between linear polarization and E(B-V), in spite of the fact that the stars lie along different lines of sight. 
This dependence falls slightly below the classical upper limit of Serkowski, Mathewson, and Ford. The table, which contains the polarization observations of the program stars discussed in this paper, is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/489/911

  12. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.

  13. Range of protein detection by selected/multiple reaction monitoring mass spectrometry in an unfractionated human cell culture lysate.

    PubMed

    Ebhardt, H Alexander; Sabidó, Eduard; Hüttenhain, Ruth; Collins, Ben; Aebersold, Ruedi

    2012-04-01

    Selected or multiple reaction monitoring is a targeted mass spectrometry method (S/MRM-MS), in which many peptides are simultaneously and consistently analyzed during a single liquid chromatography-mass spectrometry (LC-S/MRM-MS) measurement. These capabilities make S/MRM-MS an attractive method for monitoring a consistent set of proteins over various experimental conditions. To increase throughput for S/MRM-MS it is advantageous to use scheduled methods and unfractionated protein extracts. Here, we established the practically measurable dynamic range of proteins reliably detectable and quantifiable in an unfractionated protein extract from a human cell line using LC-S/MRM-MS. Initially, we analyzed S/MRM transition peak groups in terms of interfering signals and compared S/MRM transition peak groups to MS1-triggered MS2 spectra using dot-product analysis. Finally, using unfractionated protein extract from human cell lysate, we quantified the upper boundary of reliable quantification to be 35 million copies per cell and the lower boundary to be 7500 copies per cell, using a single 35 min linear-gradient LC-S/MRM-MS measurement on a current, standard commercial instrument. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.

    PubMed

    Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan

    2016-07-01

    This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.

  15. Female homicide in Rio Grande do Sul, Brazil.

    PubMed

    Leites, Gabriela Tomedi; Meneghel, Stela Nazareth; Hirakata, Vania Noemi

    2014-01-01

    This study aimed to assess the female homicide rate due to aggression in Rio Grande do Sul, Brazil, using this as a "proxy" of femicide. This was an ecological study which correlated the female homicide rate due to aggression in Rio Grande do Sul, according to the 35 microregions defined by the Brazilian Institute of Geography and Statistics (IBGE), with socioeconomic and demographic variables and health access indicators. Pearson's correlation test was performed with the selected variables. After this, multiple linear regressions were performed with variables with p < 0.20. The standardized average female homicide rate due to aggression in the period from 2003 to 2007 was 3.1 deaths per 100,000. After multiple regression analysis, the final model included male mortality due to aggression (p = 0.016), the percentage of hospital admissions for alcohol (p = 0.005) and the proportion of ill-defined deaths (p = 0.015). The model has an explanatory power of 39% (adjusted r2 = 0.391). The results are consistent with other studies and indicate a strong relationship between structural violence in society and violence against women, in addition to a higher incidence of female deaths in places with high rates of alcohol-related hospitalization.
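    The adjusted r² reported above penalizes R² for the number of predictors. A generic sketch of the standard formula (not the study's code; the worked numbers simply apply it to n = 35 microregions and p = 3 predictors):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors:
    1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With n = 35 and p = 3, an adjusted R^2 of ~0.391 corresponds to a
# raw R^2 of ~0.445: the penalty shrinks R^2 toward zero as p grows.
```

    A model that explains nothing beyond chance gets a negative adjusted R², which is why the statistic is preferred when comparing models with different numbers of predictors.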

  16. Screening and analysis of the multiple absorbed bioactive components and metabolites in rat plasma after oral administration of Jitai tablets by high-performance liquid chromatography/diode-array detection coupled with electrospray ionization tandem mass spectrometry.

    PubMed

    Wang, Shu-Ping; Liu, Lei; Wang, Ling-Ling; Jiang, Peng; Zhang, Ji-Quan; Zhang, Wei-Dong; Liu, Run-Hui

    2010-06-15

    Based on the serum pharmacochemistry technique and high-performance liquid chromatography/diode-array detection (HPLC/DAD) coupled with electrospray tandem mass spectrometry (HPLC/ESI-MS/MS), a method for screening and analysis of the multiple absorbed bioactive components and metabolites of Jitai tablets (JTT) in orally dosed rat plasma was developed. Plasma was treated by methanol precipitation prior to liquid chromatography, and the separation was carried out on a Symmetry C(18) column, with a linear gradient (0.1% formic acid/water/acetonitrile). Mass spectra were acquired in negative and positive ion modes, respectively. As a result, 26 bioactive components originating from JTT and 5 metabolites were tentatively identified in orally dosed rat plasma by comparing their retention times and MS spectra with those of authentic standards and literature data. In conclusion, an effective and reliable analytical method was established for screening the bioactive components of Chinese herbal medicine, providing a meaningful basis for further pharmacological and mechanism-of-action research on JTT. Copyright (c) 2010 John Wiley & Sons, Ltd.

  17. Comparison of pharmacokinetic behavior of two iridoid glycosides in rat plasma after oral administration of crude Cornus officinals and its jiuzhipin by high performance liquid chromatography triple quadrupole mass spectrometry combined with multiple reactions monitoring mode

    PubMed Central

    Chen, Xiaocheng; Cao, Gang; Jiang, Jianping

    2014-01-01

    Objective: The present study examined the pharmacokinetic profiles of two iridoid glycosides named morroniside and loganin in rat plasma after oral administration of crude and processed Cornus officinals. Materials and Methods: A rapid, selective and specific high-performance liquid chromatography/electrospray ionization tandem mass spectrometry method with multiple reaction monitoring mode was developed to simultaneously investigate the pharmacokinetic profiles of morroniside and loganin in rat plasma after oral administration of crude C. officinals and its jiuzhipin. Results: Morroniside and loganin in crude and processed C. officinals could be determined simultaneously within 7.4 min. Linear calibration curves were obtained over the concentration ranges of 45.45-4800 ng/mL for all the analytes. The intra- and inter-day precisions (relative standard deviation) were less than 2.84% and 4.12%, respectively. Conclusion: The pharmacokinetic parameters of the two iridoid glucosides were also compared systematically between crude and processed C. officinals. This paper provides a theoretical basis for further explaining the processing mechanism of Traditional Chinese Medicines. PMID:24914290

  18. 40 CFR 437.47 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... subparts A, B, or C of this part may be subject to Multiple Wastestream Subcategory pretreatment standards representing the application of PSNS set forth in paragraphs (b), (c), (d), or (e) of this section if the... Multiple Wastestream Subcategory standards set forth in paragraphs (b), (c), (d) or (e) of this section; (2...

  19. 40 CFR 437.46 - Pretreatment standards for existing sources (PSES)

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... from subparts A, B, or C of this part may be subject to Multiple Wastestream Subcategory pretreatment standards representing the application of PSES set forth in paragraphs (b), (c), (d), or (e) of this section... applicable Multiple Wastestream Subcategory standards set forth in paragraphs (b), (c), (d) or (e) of this...

  20. Simultaneous multiple non-crossing quantile regression estimation using kernel constraints

    PubMed Central

    Liu, Yufeng; Wu, Yichao

    2011-01-01

    Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
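    Quantile regression minimizes the check (pinball) loss, and the non-crossing constraint requires estimated curves for higher quantile levels never to fall below those for lower levels. A minimal sketch of these two ingredients (not the SNQR kernel machinery itself; names are illustrative):

```python
def pinball_loss(y, pred, tau):
    """Check (pinball) loss for quantile level tau in (0, 1):
    an asymmetric absolute error that is minimized by the tau-quantile."""
    u = y - pred
    return tau * u if u >= 0 else (tau - 1) * u

def crossing_violations(preds_by_tau):
    """Count pointwise crossings between adjacent quantile curves:
    predictions for a higher tau should never fall below those for a
    lower tau (the constraint SNQR enforces on kernel coefficients)."""
    return sum(
        1
        for lo, hi in zip(preds_by_tau, preds_by_tau[1:])
        for a, b in zip(lo, hi)
        if b < a
    )
```

    Separately fitted quantile functions each minimize their own pinball loss but can cross; joint estimation adds the non-crossing constraint across all levels at once.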

  1. Identifying the Factors That Influence Change in SEBD Using Logistic Regression Analysis

    ERIC Educational Resources Information Center

    Camilleri, Liberato; Cefai, Carmel

    2013-01-01

    Multiple linear regression and ANOVA models are widely used in applications since they provide effective statistical tools for assessing the relationship between a continuous dependent variable and several predictors. However, these models rely heavily on linearity and normality assumptions, and they do not accommodate categorical dependent…

  2. MAGDM linear-programming models with distinct uncertain preference structures.

    PubMed

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
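    In the simplest setting (two attributes, incomplete weight information w1 in [lo, hi] with w2 = 1 - w1), such linear programs reduce to checking interval endpoints, because a linear objective over an interval attains its extremes at the endpoints. A toy sketch of that reduction, not one of the paper's models:

```python
def score_range(a1, a2, w1_lo, w1_hi):
    """Min and max weighted score of an alternative with attribute values
    (a1, a2) when only w1 in [w1_lo, w1_hi] is known and w2 = 1 - w1.
    The score is linear in w1, so its extremes occur at the endpoints."""
    def score(w1):
        return w1 * a1 + (1 - w1) * a2
    lo, hi = score(w1_lo), score(w1_hi)
    return min(lo, hi), max(lo, hi)

# An alternative strong on attribute 1 (0.8 vs 0.4) scores between
# 0.52 and 0.64 as w1 ranges over [0.3, 0.6].
bounds = score_range(0.8, 0.4, 0.3, 0.6)
```

    With more attributes or interval-valued preference relations, the feasible weight set becomes a polytope and a general LP solver is needed, which is the situation the paper's models address.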

  3. Space Flyable Hg⁺ Frequency Standards

    NASA Technical Reports Server (NTRS)

    Prestage, John D.; Maleki, Lute

    1994-01-01

    We discuss a design for a space based atomic frequency standard (AFS) based on Hg⁺ ions confined in a linear ion trap. This newly developed AFS should be well suited for space borne applications because it can supply the ultra-high stability of a H-maser but its total mass is comparable to that of a NAVSTAR/GPS cesium clock, i.e., about 11 kg. This paper will compare the proposed Hg⁺ AFS to the present day GPS cesium standards to arrive at the 11 kg mass estimate. The proposed space borne Hg⁺ standard is based upon the recently developed extended linear ion trap architecture which has reduced the size of existing trapped Hg⁺ standards to a physics package which is comparable in size to a cesium beam tube. The demonstrated frequency stability to below 10⁻¹⁵ of existing Hg⁺ standards should be maintained or even improved upon in this new architecture. This clock would deliver far more frequency stability per kilogram than any current day space qualified standard.
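    Frequency stability figures like the below-10⁻¹⁵ value quoted above are conventionally characterized by the Allan deviation of fractional-frequency data. A minimal non-overlapping sketch of the standard estimator (illustrative only, not JPL's analysis code):

```python
import math

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    averaged in consecutive groups of m (so tau = m * tau0):
    sigma_y^2(tau) = (1 / (2*(M-1))) * sum_k (ybar[k+1] - ybar[k])^2,
    where the ybar are the M group averages."""
    ybar = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(ybar, ybar[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))
```

    Unlike the ordinary standard deviation, the Allan deviation converges for the frequency-drift noise typical of oscillators, which is why clock stability is quoted this way.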

  4. On Performance of Linear Multiuser Detectors for Wireless Multimedia Applications

    NASA Astrophysics Data System (ADS)

    Agarwal, Rekha; Reddy, B. V. R.; Bindu, E.; Nayak, Pinki

    In this paper, the performance of different multi-rate schemes in a DS-CDMA system is evaluated. Multirate linear multiuser detectors with multiple processing gains are analyzed for synchronous Code Division Multiple Access (CDMA) systems. Variable data rates are achieved by varying the processing gain. Our conclusion is that the bit error rate for multirate and single-rate systems can be made the same, with a trade-off in the number of users in linear multiuser detectors.

  5. Sampled-Data Kalman Filtering and Multiple Model Adaptive Estimation for Infinite-Dimensional Continuous-Time Systems

    DTIC Science & Technology

    2007-03-01

    mathematical framework of linear algebra and functional analysis [122, 33], while Kalman-Bucy filtering [96, 32] is an especially important...Engineering, Air Force Institute of Technology (AU), Wright-Patterson AFB, Ohio, March 2002. 85. Hoffman, Kenneth and Ray Kunze. Linear Algebra (Second Edition...Engineering, Air Force Institute of Technology (AU), Wright-Patterson AFB, Ohio, December 1989. 189. Strang, Gilbert. Linear Algebra and Its Applications

  6. Analysis of the Multiple-Solution Response of a Flexible Rotor Supported on Non-Linear Squeeze Film Dampers

    NASA Astrophysics Data System (ADS)

    ZHU, C. S.; ROBB, D. A.; EWINS, D. J.

    2002-05-01

    The multiple-solution response of rotors supported on squeeze film dampers is a typical non-linear phenomenon. The behaviour of the multiple-solution response in a flexible rotor supported on two identical squeeze film dampers with centralizing springs is studied by three methods: synchronous circular centred-orbit motion solution, numerical integration method and slow acceleration method using the assumption of a short bearing and cavitated oil film; the differences of computational results obtained by the three different methods are compared in this paper. It is shown that there are three basic forms for the multiple-solution response in the flexible rotor system supported on the squeeze film dampers, which are the resonant, isolated bifurcation and swallowtail bifurcation multiple solutions. In the multiple-solution speed regions, the rotor motion may be subsynchronous, super-subsynchronous, almost-periodic and even chaotic, besides synchronous circular centred, even if the gravity effect is not considered. The assumption of synchronous circular centred-orbit motion for the journal and rotor around the static deflection line can be used only in some special cases; the steady state numerical integration method is very useful, but time consuming. Using the slow acceleration method, not only can the multiple-solution speed regions be detected, but also the non-synchronous response regions.

  7. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
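The comparison at the heart of the study, a linear (GBLUP-like) kernel versus a non-linear RBF kernel for genomic prediction, can be sketched with kernel ridge regression. The data below are simulated stand-ins (random 0/1/2 genotypes with a purely additive phenotype), not the paper's layer-line data, and the regularization parameter is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data (not the paper's layer lines): 150 animals with 300
# SNP genotypes coded 0/1/2 and a purely additive phenotype.
n, p = 150, 300
X = rng.integers(0, 3, size=(n, p)).astype(float)
X -= X.mean(axis=0)                      # centre genotypes
beta = rng.normal(0.0, 0.1, size=p)
y = X @ beta + rng.normal(0.0, 1.0, size=n)

Xtr, Xval, ytr, yval = X[:120], X[120:], y[:120], y[120:]

def kernel_ridge_predict(K_tr, K_val, y_tr, lam=1.0):
    """Kernel ridge: solve (K + lam*I) alpha = y, predict K_val @ alpha."""
    alpha = np.linalg.solve(K_tr + lam * np.eye(len(y_tr)), y_tr)
    return K_val @ alpha

# Linear (GBLUP-like) kernel: genomic relationship matrix.
K_lin_tr = Xtr @ Xtr.T / p
K_lin_val = Xval @ Xtr.T / p

# Non-linear radial basis function (RBF) kernel.
def rbf(A, B, gamma=0.003):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

pred_lin = kernel_ridge_predict(K_lin_tr, K_lin_val, ytr)
pred_rbf = kernel_ridge_predict(rbf(Xtr, Xtr), rbf(Xval, Xtr), ytr)

# Accuracy as in the study: correlation of predictions with observations.
acc_lin = np.corrcoef(pred_lin, yval)[0, 1]
acc_rbf = np.corrcoef(pred_rbf, yval)[0, 1]
```

On purely additive simulated data like this, both kernels recover the signal, which mirrors the paper's observation that linear and RBF models perform very similarly when the data match linear-model assumptions.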

  8. A combined analysis of genome-wide expression profiling of bipolar disorder in human prefrontal cortex.

    PubMed

    Wang, Jinglu; Qu, Susu; Wang, Weixiao; Guo, Liyuan; Zhang, Kunlin; Chang, Suhua; Wang, Jing

    2016-11-01

    A number of gene expression profiling studies of bipolar disorder have been published. Besides differences in array chips and tissues, the variety of data-processing procedures used across cohorts aggravated the inconsistency of the results of these genome-wide gene expression profiling studies. By searching the gene expression databases, we obtained six data sets for prefrontal cortex (PFC) of bipolar disorder with raw data and combinable platforms. We used standardized pre-processing and quality control procedures to analyze each data set separately and then combined them into a large gene expression matrix with 101 bipolar disorder subjects and 106 controls. A standard linear mixed-effects model was used to identify the differentially expressed genes (DEGs). Multiple levels of sensitivity analyses and cross-validation with genetic data were conducted. Functional and network analyses were carried out on the basis of the DEGs. As a result, we identified 198 unique differentially expressed genes in the PFC between bipolar disorder subjects and controls. Among them, 115 DEGs were robust to at least three leave-one-out tests or different pre-processing methods; 51 DEGs were validated with genetic association signals. Pathway enrichment analysis showed that these DEGs were related to regulation of the neurological system, cell death and apoptosis, and several basic binding processes. Protein-protein interaction network analysis further identified one key hub gene. We have contributed the most comprehensive integrated analysis of bipolar disorder expression profiling studies in PFC to date. The DEGs, especially those with multiple validations, may denote a common signature of bipolar disorder and contribute to the pathogenesis of the disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
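The core statistical step, fitting a linear model per gene with diagnosis as the effect of interest while adjusting for cohort, can be sketched as follows. This is a hypothetical toy example (random data, cohort as a fixed-effect dummy rather than the paper's mixed-effects random term):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: 50 genes x 60 samples pooled from 3 cohorts,
# with a cohort batch effect and one truly differential gene.
n_genes, n_samples = 50, 60
cohort = rng.integers(0, 3, n_samples)
diagnosis = rng.integers(0, 2, n_samples)       # 0 = control, 1 = bipolar

expr = rng.normal(0.0, 1.0, (n_genes, n_samples))
expr += 0.5 * cohort                             # batch shift on every gene
expr[0] += 1.5 * diagnosis                       # gene 0 is truly differential

# Design: intercept, diagnosis, cohort dummies (a fixed-effect stand-in
# for the paper's linear mixed-effects model).
D = np.column_stack([np.ones(n_samples), diagnosis,
                     cohort == 1, cohort == 2]).astype(float)

def diagnosis_t(y):
    """t statistic for the diagnosis coefficient of one gene."""
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ beta
    sigma2 = resid @ resid / (n_samples - D.shape[1])
    cov = sigma2 * np.linalg.inv(D.T @ D)
    return beta[1] / np.sqrt(cov[1, 1])

t_values = np.array([diagnosis_t(g) for g in expr])
```

Genes whose t statistics survive multiple-testing correction would be called DEGs; the paper then subjects these to leave-one-out and pre-processing sensitivity checks.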

  9. Multiple Reaction Monitoring Enables Precise Quantification of 97 Proteins in Dried Blood Spots*

    PubMed Central

    Chambers, Andrew G.; Percy, Andrew J.; Yang, Juncong; Borchers, Christoph H.

    2015-01-01

    The dried blood spot (DBS) methodology provides a minimally invasive approach to sample collection and enables room-temperature storage for most analytes. DBS samples have successfully been analyzed by liquid chromatography multiple reaction monitoring mass spectrometry (LC/MRM-MS) to quantify a large range of small molecule biomarkers and drugs; however, this strategy has only recently been explored for MS-based proteomics applications. Here we report the development of a highly multiplexed MRM assay to quantify endogenous proteins in human DBS samples. This assay uses matching stable isotope-labeled standard peptides for precise, relative quantification, and standard curves to characterize the analytical performance. A total of 169 peptides, corresponding to 97 proteins, were quantified in the final assay with an average linear dynamic range of 207-fold and an average R2 value of 0.987. The total range of this assay spanned almost 5 orders of magnitude from serum albumin (P02768) at 18.0 mg/ml down to cholinesterase (P06276) at 190 ng/ml. The average intra-assay and inter-assay precision for 6 biological samples ranged from 6.1–7.5% CV and 9.5–11.0% CV, respectively. The majority of peptide targets were stable after 154 days at storage temperatures from −20 °C to 37 °C. Furthermore, protein concentration ratios between matching DBS and whole blood samples were largely constant (<20% CV) across six biological samples. This assay represents the highest multiplexing yet achieved for targeted protein quantification in DBS samples and is suitable for biomedical research applications. PMID:26342038
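The quantification scheme described, a standard curve relating concentration to the endogenous/stable-isotope-standard peak-area ratio, with %CV for precision, can be sketched with made-up numbers (hypothetical calibration points spanning roughly the assay's reported range):

```python
import numpy as np

# Hypothetical standard curve for one peptide: known concentrations (ng/ml)
# vs measured light/heavy (endogenous/SIS) peak-area ratios.
conc = np.array([190, 500, 2000, 10000, 40000], dtype=float)
ratio = np.array([0.012, 0.031, 0.125, 0.63, 2.52])

# Linear fit over the calibration points; R^2 characterizes performance.
slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - ((ratio - pred) ** 2).sum() / ((ratio - ratio.mean()) ** 2).sum()

def quantify(measured_ratio):
    """Back-calculate concentration from an observed light/heavy ratio."""
    return (measured_ratio - intercept) / slope

# Intra-assay precision expressed as %CV over replicate injections.
replicates = np.array([0.125, 0.118, 0.131])
cv_percent = 100 * replicates.std(ddof=1) / replicates.mean()
```

The spiked-in heavy peptide corrects for matrix and instrument variability, which is why ratio-based curves like this one stay linear over the two-order-of-magnitude dynamic ranges the assay reports.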

  10. Robust characterization of small grating boxes using rotating stage Mueller matrix polarimeter

    NASA Astrophysics Data System (ADS)

    Foldyna, M.; De Martino, A.; Licitra, C.; Foucher, J.

    2010-03-01

    In this paper we demonstrate the robustness of Mueller matrix polarimetry used in a multiple-azimuth configuration. We first demonstrate the efficiency of the method for the characterization of small-pitch gratings filling 250 μm wide square boxes. We used a Mueller matrix polarimeter installed directly in the clean room and equipped with a motorized rotating stage allowing access to arbitrary conical grating configurations. The projected beam spot size could be reduced to 60x25 μm, but for the measurements reported here this size was 100x100 μm. The optimal values of the parameters of a trapezoidal profile model, acquired for each azimuthal angle separately using a non-linear least-squares minimization algorithm, are shown for a typical grating. Further statistical analysis of the azimuth-dependent dimensional parameters provided realistic estimates of the confidence interval, giving direct information about the accuracy of the results. The mean values and the standard deviations were calculated for 21 different grating boxes featuring in total 399 measured spectra and fits. The results for all boxes are summarized in a table which compares the optical method to 3D-AFM. The essential conclusion of our work is that the 3D-AFM values always fall into the confidence intervals provided by the optical method, which means that we have successfully estimated the accuracy of our results without using direct comparison with another, non-optical, method. Moreover, this approach may provide a way to improve the accuracy of grating profile modeling by minimizing the standard deviations evaluated from multiple-azimuth results.
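The statistical step, turning per-azimuth fit results into a mean, a standard deviation, and a confidence interval that can be checked against 3D-AFM, reduces to a few lines. The numbers below are hypothetical critical-dimension fits, not the paper's data:

```python
import numpy as np

# Hypothetical per-azimuth fit results for one grating box: critical
# dimension (nm) recovered at each of 19 azimuthal angles.
cd_per_azimuth = np.array([48.2, 48.5, 47.9, 48.1, 48.4, 48.0, 48.3,
                           48.6, 47.8, 48.2, 48.1, 48.5, 48.0, 48.2,
                           48.4, 47.9, 48.3, 48.1, 48.2])

mean_cd = cd_per_azimuth.mean()
std_cd = cd_per_azimuth.std(ddof=1)

# ~95% (2-sigma) confidence interval from the azimuth-to-azimuth scatter;
# the paper's conclusion is that the 3D-AFM value falls inside it.
ci = (mean_cd - 2 * std_cd, mean_cd + 2 * std_cd)
```

Treating the azimuth-to-azimuth scatter as an error estimate is what lets the optical method report its own accuracy without an external reference measurement.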

  11. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
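The central idea, that slave-model coefficients should stay similar in profile to the master's while fitting a few slave-instrument spectra, can be sketched as a penalized least-squares problem. This is a toy stand-in for the paper's constrained optimization, with simulated spectra rather than the corn or leaf datasets:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a master model already exists; only 5 spectra are
# measured on the slave instrument.
n_wl = 100
b_master = np.sin(np.linspace(0.0, 3.0 * np.pi, n_wl))   # master coefficients
X_slave = rng.normal(0.0, 1.0, (5, n_wl))                # slave spectra
y_slave = X_slave @ (1.1 * b_master) + rng.normal(0.0, 0.01, 5)

# Slave coefficients are assumed similar in profile to the master's, so
# solve least squares penalized toward b_master (ridge-to-prior form).
lam = 10.0
A = np.vstack([X_slave, np.sqrt(lam) * np.eye(n_wl)])
rhs = np.concatenate([y_slave, np.sqrt(lam) * b_master])
b_slave, *_ = np.linalg.lstsq(A, rhs, rcond=None)
```

With far fewer slave spectra than wavelengths, the penalty toward the master profile is what makes the problem well-posed, which is the practical appeal of transfer without standard samples.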

  12. Application of snapshot imaging spectrometer in environmental detection

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Qin, Xiaolei; Zhang, Yu; Wang, Jinqiang

    2017-10-01

    This study aimed at the application of a snapshot imaging spectrometer in environmental detection. Simulated sewage and dyeing wastewater were prepared and the optimal experimental conditions were determined. A white LED array was used as the detection light source, and the image of the sample was collected by the imaging spectrometer developed in the laboratory to obtain the spectral information of the sample in the range of 400-800 nm. A standard curve relating the absorbance and the concentration of the samples was established. The linear range of a single component of Rhodamine B was 1-50 mg/L, the linear correlation coefficient was more than 0.99, the recovery was 93%-113% and the relative standard deviation (RSD) was 7.5%. The linear range of the chemical oxygen demand (COD) standard solution was 50-900 mg/L, the linear correlation coefficient was 0.981, the recovery was 91%-106% and the relative standard deviation (RSD) was 6.7%. This rapid, accurate and precise method for detecting dyes shows excellent promise for on-site and emergency detection in the environment. At the request of the proceedings editor, an updated version of this article was published on 17 October 2017. The original version of this article was replaced due to an accidental inversion of Figure 2 and Figure 3. The Figures have been corrected in the updated and republished version.
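The absorbance standard curve and spike-recovery figures reported here follow the usual linear calibration workflow, sketched below with hypothetical absorbance readings (the concentrations mimic the Rhodamine B range, but the values are invented):

```python
import numpy as np

# Hypothetical calibration: Rhodamine B standards (mg/L) vs absorbance.
conc = np.array([1, 5, 10, 20, 50], dtype=float)
absorb = np.array([0.021, 0.102, 0.205, 0.408, 1.020])

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]     # linear correlation coefficient

def concentration(a):
    """Invert the standard curve: absorbance -> concentration (mg/L)."""
    return (a - intercept) / slope

# Spike-recovery check: a 20 mg/L spiked sample read back from absorbance.
recovery = 100 * concentration(0.415) / 20.0
```

Recoveries near 100% over the linear range, as in the abstract's 93%-113%, indicate the calibration transfers from standards to real samples.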

  13. An implicit boundary integral method for computing electric potential of macromolecules in solvent

    NASA Astrophysics Data System (ADS)

    Zhong, Yimin; Ren, Kui; Tsai, Richard

    2018-04-01

    A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrowband surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules, by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
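Boundary-integral formulations for the linearized Poisson-Boltzmann equation build on the free-space Green's function of the operator (-Laplacian + kappa^2), the screened Coulomb kernel G(r) = exp(-kappa*r)/(4*pi*r). A minimal numerical check (arbitrary screening constant, finite differences) that this kernel annihilates the operator away from the source:

```python
import numpy as np

kappa = 0.5   # arbitrary inverse Debye length for the check

def G(r):
    """Screened Coulomb Green's function of (-Laplacian + kappa^2)."""
    return np.exp(-kappa * r) / (4.0 * np.pi * r)

def radial_laplacian(f, r, h=1e-4):
    """Laplacian of a radially symmetric function: f'' + (2/r) f'."""
    d1 = (f(r + h) - f(r - h)) / (2.0 * h)
    d2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    return d2 + 2.0 / r * d1

# Away from the source, (-Laplacian + kappa^2) G should vanish.
r0 = 2.0
residual = -radial_laplacian(G, r0) + kappa**2 * G(r0)
```

Because the kernel decays exponentially, discretized boundary-integral operators built from it are well suited to fast multipole acceleration, as the paper exploits.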

  14. Modification of the USLE K factor for soil erodibility assessment on calcareous soils in Iran

    NASA Astrophysics Data System (ADS)

    Ostovari, Yaser; Ghorbani-Dashtaki, Shoja; Bahrami, Hossein-Ali; Naderi, Mehdi; Dematte, Jose Alexandre M.; Kerry, Ruth

    2016-11-01

    The measurement of soil erodibility (K) in the field is tedious, time-consuming and expensive; therefore, its prediction through pedotransfer functions (PTFs) could be far less costly and time-consuming. The aim of this study was to develop new PTFs to estimate the K factor using multiple linear regression, Mamdani fuzzy inference systems, and artificial neural networks. For this purpose, K was measured in 40 erosion plots with natural rainfall. Various soil properties including the soil particle size distribution, calcium carbonate equivalent, organic matter, permeability, and wet-aggregate stability were measured. The results showed that the mean measured K was 0.014 t h MJ-1 mm-1, 2.08 times less than the estimated mean K (0.030 t h MJ-1 mm-1) using the USLE model. Permeability, wet-aggregate stability, very fine sand, and calcium carbonate were selected as independent variables by forward stepwise regression in order to assess the ability of multiple linear regression, Mamdani fuzzy inference systems and artificial neural networks to predict K. The calcium carbonate equivalent, which is not accounted for in the USLE model, had a significant impact on K in multiple linear regression due to its strong influence on the stability of aggregates and soil permeability. Statistical indices in validation and calibration datasets determined that the artificial neural networks method with the highest R2, lowest RMSE, and lowest ME was the best model for estimating the K factor. A strong correlation (R2 = 0.81, n = 40, p < 0.05) between the estimated K from multiple linear regression and measured K indicates that the use of calcium carbonate equivalent as a predictor variable gives a better estimation of K in areas with calcareous soils.
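The multiple-linear-regression PTF and the evaluation statistics named in the abstract (R2, RMSE, ME) can be sketched with simulated plot data. Everything below is a toy stand-in: the four predictors only mimic the stepwise-selected variables (permeability, wet-aggregate stability, very fine sand, calcium carbonate equivalent), and the coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for 40 erosion plots with 4 stepwise-selected predictors.
n = 40
X = rng.uniform(0.0, 1.0, (n, 4))
k_true = (0.014 + 0.01 * X[:, 0] - 0.008 * X[:, 1]
          + 0.005 * X[:, 2] - 0.006 * X[:, 3])
k_meas = k_true + rng.normal(0.0, 0.001, n)     # measurement noise

# Fit the multiple linear regression PTF.
D = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(D, k_meas, rcond=None)
k_pred = D @ beta

# Evaluation statistics used in the study.
rmse = np.sqrt(((k_meas - k_pred) ** 2).mean())
me = (k_pred - k_meas).mean()                    # mean error (bias)
r2 = 1 - (((k_meas - k_pred) ** 2).sum()
          / ((k_meas - k_meas.mean()) ** 2).sum())
```

In the paper, the same statistics computed on held-out validation plots (rather than in-sample, as here) are what ranked the artificial neural network above the regression and fuzzy models.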

  15. Spatial summation revealed in the earliest visual evoked component C1 and the effect of attention on its linearity.

    PubMed

    Chen, Juan; Yu, Qing; Zhu, Ziyun; Peng, Yujia; Fang, Fang

    2016-01-01

    In natural scenes, multiple objects are usually presented simultaneously. How do specific areas of the brain respond to multiple objects based on their responses to each individual object? Previous functional magnetic resonance imaging (fMRI) studies have shown that the activity induced by a multiobject stimulus in the primary visual cortex (V1) can be predicted by the linear or nonlinear sum of the activities induced by its component objects. However, there has been little evidence from electroencephalogram (EEG) studies so far. Here we explored how V1 responded to multiple objects by comparing the EEG signals evoked by a three-grating stimulus with those evoked by its two components (the central grating and 2 flanking gratings). We focused on the earliest visual component C1 (onset latency of ∼50 ms) because it has been shown to reflect the feedforward responses of neurons in V1. We found that when the stimulus was unattended, the amplitude of the C1 evoked by the three-grating stimulus roughly equaled the sum of the amplitudes of the C1s evoked by its two components, regardless of the distances between these gratings. When the stimulus was attended, this linear spatial summation existed only when the three gratings were far apart from each other. When the three gratings were close to each other, the spatial summation became compressed. These results suggest that the earliest visual responses in V1 follow a linear summation rule when attention is not involved and that attention can affect the earliest interactions between multiple objects. Copyright © 2016 the American Physiological Society.
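The linearity test described, comparing the compound-stimulus C1 against the sum of its component C1s, amounts to a simple ratio. The amplitudes below are hypothetical per-subject values (C1 is negative-going, hence the signs), not the study's data:

```python
import numpy as np

# Hypothetical C1 amplitudes (µV) per subject: compound stimulus vs the
# two components, in the unattended condition.
c1_compound = np.array([-1.42, -1.55, -1.31, -1.60, -1.48])
c1_center = np.array([-0.52, -0.58, -0.47, -0.60, -0.55])
c1_flankers = np.array([-0.93, -0.99, -0.86, -1.02, -0.95])

predicted_sum = c1_center + c1_flankers
# Linearity index ~1 means additive spatial summation; <1 means the
# compressed summation seen when close gratings are attended.
linearity = c1_compound / predicted_sum
```

In the study's terms, unattended stimuli yield indices near 1 at all spacings, while attention pushes the index below 1 for closely spaced gratings.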

  16. Construction of multiple linear regression models using blood biomarkers for selecting against abdominal fat traits in broilers.

    PubMed

    Dong, J Q; Zhang, X Y; Wang, S Z; Jiang, X F; Zhang, K; Ma, G W; Wu, M Q; Li, H; Zhang, H

    2018-01-01

    Plasma very low-density lipoprotein (VLDL) can be used to select for low body fat or abdominal fat (AF) in broilers, but its correlation with AF is limited. We investigated whether any other biochemical indicator can be used in combination with VLDL for a better selective effect. Nineteen plasma biochemical indicators were measured in male chickens from the Northeast Agricultural University broiler lines divergently selected for AF content (NEAUHLF) in the fed state at 46 and 48 d of age. The average concentration of every parameter for the 2 d was used for statistical analysis. Levels of these 19 plasma biochemical parameters were compared between the lean and fat lines. The phenotypic correlations between these plasma biochemical indicators and AF traits were analyzed. Then, multiple linear regression models were constructed to select the best model for selecting against AF content, and the heritabilities of the plasma indicators contained in the best models were estimated. The results showed that 11 plasma biochemical indicators (triglycerides, total bile acid, total protein, globulin, albumin/globulin, aspartate transaminase, alanine transaminase, gamma-glutamyl transpeptidase, uric acid, creatinine, and VLDL) differed significantly between the lean and fat lines (P < 0.01), and correlated significantly with AF traits (P < 0.05). The best multiple linear regression models, based on albumin/globulin, VLDL, triglycerides, globulin, total bile acid, and uric acid, had a higher R2 (0.73) than the model based only on VLDL (0.21). The plasma parameters included in the best models had moderate heritability estimates (0.21 ≤ h2 ≤ 0.43). These results indicate that these multiple linear regression models can be used to select for lean broiler chickens. © 2017 Poultry Science Association Inc.
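The headline comparison, a VLDL-only model versus a multi-biomarker multiple linear regression judged by R2, can be sketched with simulated data. The variables below are invented stand-ins for the study's predictors (albumin/globulin, triglycerides, etc.), not its measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: abdominal fat driven by several plasma indicators, mirroring
# the finding that VLDL alone explains only part of the variance.
n = 200
vldl = rng.normal(0.0, 1.0, n)
others = rng.normal(0.0, 1.0, (n, 5))   # stand-ins for A/G, TG, GLB, TBA, UA
af = (0.5 * vldl + others @ np.array([0.8, 0.6, 0.5, 0.4, 0.3])
      + rng.normal(0.0, 0.5, n))

def r_squared(predictors, y):
    """R2 of an OLS multiple linear regression with intercept."""
    D = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_vldl = r_squared(vldl[:, None], af)
r2_full = r_squared(np.column_stack([vldl[:, None], others]), af)
```

As in the study (R2 of 0.73 vs 0.21), the multi-predictor model captures variance the single biomarker cannot.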

  17. Low-Loss Materials for Josephson Qubits

    DTIC Science & Technology

    2014-10-09

    quantum circuit. It also intuitively explains how for a linear circuit the standard results for electrical circuits are obtained, justifying the use of... linear concepts for a weakly non-linear device such as the transmon. It has also become common to use a double sided noise spectrum to represent... loss tangent of large area pad junction. (c) Effective linearized circuit for the double junction, which makes up the admittance $Y$. $L_j$ is the

  18. Composite Linear Models | Division of Cancer Prevention

    Cancer.gov

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  19. Protection of Workers and Third Parties during the Construction of Linear Structures

    NASA Astrophysics Data System (ADS)

    Vlčková, Jitka; Venkrbec, Václav; Henková, Svatava; Chromý, Adam

    2017-12-01

    The minimization of risk in the workplace through a focus on occupational health and safety (OHS) is one of the primary objectives for every construction project. The most serious accidents in the construction industry occur during work on earthworks and linear structures. The character of such structures places them among those posing the greatest threat to the public (referred to as “third parties”). They can be characterized as large structures whose construction may involve the building site extending in a narrow lane alongside previously constructed objects currently in use by the public. Linear structures are often directly connected to existing objects or buildings, making it impossible to guard the whole construction site. However, many OHS problems related to linear structures can be prevented during the design stage. The aim of this article is to introduce a new methodology which has been implemented into a computer program that deals with safety measures at construction sites where work is performed on linear structures. Based on existing experience with the design of such structures and their execution and supervision by safety coordinators, the basic types of linear structures, their location in the terrain, the conditions present during their execution and other marginal conditions and influences were modelled. Basic safety information has been assigned to this elementary information, which is strictly necessary for the construction process. The safety provisions can be grouped according to type, e.g. technical, organizational and other necessary documentation, or into sets of provisions concerning areas such as construction site safety, transport safety, earthworks safety, etc. The selection of the given provisions takes place using multiple criteria. The aim of creating this program is to provide a practical tool for designers, contractors and construction companies. 
The model can contribute to sufficient awareness among these participants of the technical and organizational provisions that can help them meet workplace safety requirements. The software for the selection of safety provisions also contains a module that can calculate the necessary cost estimates using a calculation formula chosen by the user. All software data conform to European standards harmonized for the Czech Republic.

  20. Application of high-performance liquid chromatography-tandem mass spectrometry with a quadrupole/linear ion trap instrument for the analysis of pesticide residues in olive oil.

    PubMed

    Hernando, M D; Ferrer, C; Ulaszewska, M; García-Reyes, J F; Molina-Díaz, A; Fernández-Alba, A R

    2007-11-01

    This article describes the development of an enhanced liquid chromatography-mass spectrometry (LC-MS) method for the analysis of pesticides in olive oil. One hundred pesticides belonging to different classes and that are currently used in agriculture have been included in this method. The LC-MS method was developed using a hybrid quadrupole/linear ion trap (QqQ(LIT)) analyzer. Key features of this technique are the rapid scan acquisition times, high specificity and high sensitivity it enables when the multiple reaction monitoring (MRM) mode or the linear ion-trap operational mode is employed. The application of 5 ms dwell times using a linearly accelerating (LINAC) high-pressure collision cell enabled the analysis of a high number of pesticides, with enough data points acquired for optimal peak definition in MRM operation mode and for satisfactory quantitative determinations to be made. The method quantifies over a linear dynamic range from the LOQs (0.03-10 microg kg(-1)) up to 500 microg kg(-1). Matrix effects were evaluated by comparing the slopes of matrix-matched and solvent-based calibration curves. Weak suppression or enhancement of signals was observed (<15% for most (80) of the pesticides). A study to assess the identification criteria based on the MRM ratio was carried out by comparing the variations observed in standard vs. matrix (in terms of coefficient of variation, CV%) and within the linear range of concentrations studied. The CV was lower than 15% when the response observed in solvent was compared to that in olive oil. The limit of detection was < or =10 microg kg(-1) for five of the selected pesticides, < or =5 microg kg(-1) for 14, and < or =1 microg kg(-1) for 81 pesticides. For pesticides where additional structural information was necessary for confirmatory purposes (in particular at low concentrations, when the second transition could not be detected), survey scans for enhanced product ion (EPI) and MS3 were developed.
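The MRM-ratio identification criterion, comparing the qualifier/quantifier transition ratio between solvent standards and matrix extracts via CV%, can be sketched with invented replicate values (not the study's measurements):

```python
import numpy as np

# Hypothetical MRM ratios (qualifier/quantifier transition areas) for one
# pesticide in solvent standards and in olive-oil matrix extracts.
ratio_solvent = np.array([0.42, 0.44, 0.41, 0.43, 0.42])
ratio_matrix = np.array([0.45, 0.43, 0.46, 0.44, 0.42])

def cv_percent(x):
    """Coefficient of variation as a percentage."""
    return 100 * x.std(ddof=1) / x.mean()

cv_solvent = cv_percent(ratio_solvent)
cv_matrix = cv_percent(ratio_matrix)
# Criterion from the study: CV below 15% between solvent and matrix.
combined_cv = cv_percent(np.concatenate([ratio_solvent, ratio_matrix]))
```

A combined CV under 15%, as reported in the abstract, indicates the ion ratio is stable enough in matrix to serve as a confirmation criterion.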

Top