Sample records for variable separation method

  1. Model reduction method using variable-separation for stochastic saddle point problems

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution for SSP problems in a systematic enrichment manner, with no iteration performed at any enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. We present three numerical examples of SSP problems to illustrate the performance of the proposed methods.
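
    For orientation, the "separated representation" that the VS method constructs can be written schematically as follows (our notation, not the authors' exact one): the stochastic solution is approximated by a finite sum of products of deterministic spatial modes and scalar stochastic factors,

    $$ u(x, \xi) \;\approx\; u_N(x, \xi) = \sum_{k=1}^{N} \zeta_k(\xi)\, u_k(x), $$

    where each enrichment step appends one more pair (\zeta_{N+1}, u_{N+1}) without iterating on previously computed terms; the offline phase computes the deterministic modes u_k once, and the online phase only evaluates the cheap stochastic factors \zeta_k(\xi) for each new sample \xi.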

  2. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
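
    A minimal sketch of an N-mixture model with two detection methods, in generic textbook notation (ours, not the authors' exact parameterization): latent local abundance at site i is Poisson, and each method m contributes its own binomial detection layer,

    $$ N_i \sim \mathrm{Poisson}(\lambda_i), \qquad \log \lambda_i = \boldsymbol{\beta}^{\top} \mathbf{x}_i, $$

    $$ y_{ijm} \mid N_i \sim \mathrm{Binomial}(N_i,\ p_m), \qquad m \in \{\text{hair trap},\ \text{bear rub}\}, $$

    where y_{ijm} is the count at site i on occasion j by method m. The joint analysis shares the latent N_i (and covariates x_i) across methods, which is what buys the extra precision discussed above; the separate analyses fit each method's layer with its own abundance model.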

  3. Separation of variables in Maxwell equations in Plebański-Demiański spacetime

    NASA Astrophysics Data System (ADS)

    Frolov, Valeri P.; Krtouš, Pavel; Kubizňák, David

    2018-05-01

    A new method for separating variables in the Maxwell equations in four- and higher-dimensional Kerr-(A)dS spacetimes proposed recently by Lunin is generalized to any off-shell metric that admits a principal Killing-Yano tensor. The key observation is that Lunin's ansatz for the vector potential can be formulated in a covariant form—in terms of the principal tensor. In particular, focusing on the four-dimensional case we demonstrate separability of Maxwell's equations in the Kerr-NUT-(A)dS and the Plebański-Demiański family of spacetimes. The new method of separation of variables is quite different from the standard approach based on the Newman-Penrose formalism.
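
    Schematically, the covariant form of Lunin's ansatz referred to above can be written as (we follow the common notation of this literature; signs and normalizations may differ from the paper):

    $$ A^{\mu} = B^{\mu\nu}\, \nabla_{\nu} Z, \qquad B^{\mu\nu} \left( g_{\nu\kappa} + i \beta\, h_{\nu\kappa} \right) = \delta^{\mu}_{\ \kappa}, $$

    where h is the principal (Killing-Yano) tensor, β a separation constant, and Z a scalar that separates multiplicatively in the canonical coordinates adapted to h.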

  4. A dispersion relationship governing incompressible wall turbulence

    NASA Technical Reports Server (NTRS)

    Tsuge, S.

    1978-01-01

    The method of separation of variables is shown to make turbulent correlation equations of the Karman-Howarth type tractable for shear turbulence as well, under the condition that the triple correlation is neglected. The separated dependent variable obeys an Orr-Sommerfeld equation. A new analytical method is developed using a scaling law different from the classical one due to Heisenberg and Lin and more appropriate for wall-turbulent profiles. A dispersion relationship between the wave number and the separation constant, which has the dimension of a frequency, is derived in support of experimental observations of the wave or coherent structure of wall turbulence.

  5. Separation of the atmospheric variability into non-Gaussian multidimensional sources by projection pursuit techniques

    NASA Astrophysics Data System (ADS)

    Pires, Carlos A. L.; Ribeiro, Andreia F. S.

    2017-02-01

    We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (the projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), which is optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, the method provides nonlinear variable changes `unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first comes from the Lorenz'63 three-dimensional chaotic model and shows a clear separation into a non-Gaussian dyad plus an independent scalar. The second is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in the form of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemispheric winter. We find that the enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
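
    A loose illustration of the projection-pursuit rationale, not the authors' ISA algorithm: whiten the field with PCA, rotate with a non-Gaussianity-maximizing method (FastICA as a stand-in contrast optimizer), then group components whose residual mutual dependence survives into candidate dyads. All names and thresholds below are ours.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from sklearn.feature_selection import mutual_info_regression

    def candidate_dyads(X, n_components=6, mi_threshold=0.05):
        """X: (time, space) data matrix. Returns rotated components and index
        pairs whose residual mutual information suggests a 2D non-separable
        source, i.e. a candidate dyad."""
        Z = PCA(n_components=n_components, whiten=True).fit_transform(X)
        S = FastICA(n_components=n_components, random_state=0).fit_transform(Z)
        pairs = []
        for i in range(n_components):
            for j in range(i + 1, n_components):
                # MI between "independent" components; a large value means ICA
                # could not fully separate them -> intrinsically synergetic pair.
                mi = mutual_info_regression(S[:, [i]], S[:, j])[0]
                if mi > mi_threshold:
                    pairs.append((i, j, mi))
        return S, pairs
    ```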

  6. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is of cyclone type and is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency, which is therefore set up as the objective function in the optimization process. Since the CFD analysis requires a lot of calculation time, it is impossible to obtain the optimal solution by directly coupling a gradient-based optimization algorithm to it. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
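
    A minimal sketch of the DOE-plus-metamodel workflow described above, with placeholder names: a small design over the shape variables, a stand-in for the expensive CFD evaluation, and a kriging (Gaussian-process) surrogate that is optimized instead of the CFD model. The paper uses an L18 orthogonal array over six variables; we use a random design over two variables purely for brevity.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def cfd_collection_efficiency(x):
        # Placeholder for the expensive CFD run (ANSYS-CFX in the paper).
        return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(18, 2))   # 18 runs, mimicking an L18 array
    y = np.array([cfd_collection_efficiency(x) for x in X])

    kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                       normalize_y=True).fit(X, y)

    # Optimize the cheap surrogate instead of the CFD model (grid search here).
    grid = np.array(np.meshgrid(np.linspace(0, 1, 101),
                                np.linspace(0, 1, 101))).reshape(2, -1).T
    best = grid[np.argmax(kriging.predict(grid))]
    print("surrogate optimum near", best)
    ```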

  7. Testing common stream sampling methods for broad-scale, long-term monitoring

    Treesearch

    Eric K. Archer; Brett B. Roper; Richard C. Henderson; Nick Bouwes; S. Chad Mellison; Jeffrey L. Kershner

    2004-01-01

    We evaluated sampling variability of stream habitat sampling methods used by the USDA Forest Service and the USDI Bureau of Land Management monitoring program for the upper Columbia River Basin. Three separate studies were conducted to describe the variability of individual measurement techniques, variability between crews, and temporal variation throughout the summer...

  8. Firmness prediction in Prunus persica 'Calrico' peaches by visible/short-wave near infrared spectroscopy and acoustic measurements using optimised linear and non-linear chemometric models.

    PubMed

    Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio

    2015-08-15

    In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA), two non-destructive methods, were applied to Prunus persica fruit 'Calrico' (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. Also, a mutual-information-based variable selection method, seeking to find the most significant variables to produce optimal accuracy of the regression models, was applied to the joint set of variables (NIR wavelengths and the AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R²) for the PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (the AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R² values of 0.76 (PLS) and 0.77 (LS-SVM). These results indicate that the proposed mutual-information-based variable selection algorithm is a powerful tool for selecting the most relevant variables. © 2014 Society of Chemical Industry.
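
    A hedged sketch of the two ingredients named above: PLS regression on the joint NIR-plus-acoustic variable set, and a mutual-information ranking used for variable selection. The data, column layout, and selection rule are stand-ins; the paper's MI-based algorithm is more elaborate.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.model_selection import cross_val_score

    # X: synthetic "NIR spectra" with an "AWETA" acoustic measure appended as
    # the final column; y: a firmness-like response (260 fruit in the paper).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(260, 101))
    y = X[:, 40] + 0.5 * X[:, -1] + rng.normal(scale=0.1, size=260)

    mi = mutual_info_regression(X, y)
    top = np.argsort(mi)[-3:]        # keep the 3 most informative variables
    print("selected columns:", top)

    pls = PLSRegression(n_components=2)
    r2 = cross_val_score(pls, X[:, top], y, cv=5, scoring="r2").mean()
    print("cross-validated R2 on selected variables:", round(r2, 2))
    ```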

  9. On solving wave equations on fixed bounded intervals involving Robin boundary conditions with time-dependent coefficients

    NASA Astrophysics Data System (ADS)

    van Horssen, Wim T.; Wang, Yandong; Cao, Guohua

    2018-06-01

    In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.
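
    For reference, the representation exploited by the paper and the form of the boundary condition, in generic notation: for u_tt = c² u_xx the solution is a superposition of propagating characteristics,

    $$ u(x, t) = F(x - ct) + G(x + ct), $$

    and a time-dependent Robin condition at, say, the boundary x = 0 reads

    $$ \alpha(t)\, u(0, t) + \beta(t)\, u_x(0, t) = 0, $$

    which couples F and G along characteristics. The paper's method propagates this coupling step by step across the bounded interval, precisely the situation in which separation of variables fails once α and β depend on t.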

  10. Methods of separation of variables in turbulence theory

    NASA Technical Reports Server (NTRS)

    Tsuge, S.

    1978-01-01

    Two schemes for closing the turbulent moment equations are proposed, both of which allow the double-correlation equations to be separated into single-point equations. The first is based on neglecting the triple correlation, leading to an equation differing from the small-perturbation gasdynamic equations in that the separation constant appears as the frequency. Grid-produced turbulence is described in this light as time-independent, cylindrically-isotropic turbulence. Application to wall turbulence, guided by a new asymptotic method for the Orr-Sommerfeld equation, reveals a neutrally stable mode of essentially three-dimensional nature. The second closure scheme is based on an assumption of identity of the separated variables, through which triple and quadruple correlations are formed. The resulting equation adds, to its equivalent in the first scheme, an integral of nonlinear convolution in the frequency, describing the role of the triple correlation in direct energy cascading.

  11. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable, because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions are offered instead of one. We hope that this article will stimulate clinical investigators to start using this remarkable method.
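
    A minimal sketch of a canonical analysis of the kind described, one set of predictor variables against one set of outcome variables, using scikit-learn's CCA as a stand-in for the authors' statistical package. The dimensions mirror the example above (12 gene expressions, 4 efficacy scores); the data are synthetic.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 12))            # 12 predictors (gene expressions)
    Y = X[:, :4] @ rng.normal(size=(4, 4)) + 0.5 * rng.normal(size=(200, 4))

    cca = CCA(n_components=1).fit(X, Y)
    U, V = cca.transform(X, Y)                # first pair of canonical variates
    r = np.corrcoef(U[:, 0], V[:, 0])[0, 1]   # first canonical correlation
    print("canonical correlation:", round(r, 2),
          "shared variance:", round(r ** 2, 2))
    ```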

  12. Separation of Variables and Superintegrability; The symmetry of solvable systems

    NASA Astrophysics Data System (ADS)

    Kalnins, Ernest G.; Kress, Jonathan M.; Miller, Willard, Jr.

    2018-06-01

    Separation of variables methods for solving partial differential equations are of immense theoretical and practical importance in mathematical physics. They are the most powerful tool known for obtaining explicit solutions of the partial differential equations of mathematical physics. The purpose of this book is to give an up-to-date presentation of the theory of separation of variables and its relation to superintegrability. By collating results scattered in the literature and presenting them in a unified, updated and more accessible manner, the authors have prepared an invaluable resource for mathematicians and mathematical physicists in particular, as well as for researchers in science, engineering, geology and biology interested in explicit solutions.

  13. Disentangling Global Warming, Multidecadal Variability, and El Niño in Pacific Temperatures

    NASA Astrophysics Data System (ADS)

    Wills, Robert C.; Schneider, Tapio; Wallace, John M.; Battisti, David S.; Hartmann, Dennis L.

    2018-03-01

    A key challenge in climate science is to separate observed temperature changes into components due to internal variability and responses to external forcing. Extended integrations of forced and unforced climate models are often used for this purpose. Here we demonstrate a novel method to separate modes of internal variability from global warming based on differences in time scale and spatial pattern, without relying on climate models. We identify uncorrelated components of Pacific sea surface temperature variability due to global warming, the Pacific Decadal Oscillation (PDO), and the El Niño-Southern Oscillation (ENSO). Our results give statistical representations of PDO and ENSO that are consistent with their being separate processes, operating on different time scales, but are otherwise consistent with canonical definitions. We isolate the multidecadal variability of the PDO and find that it is confined to midlatitudes; tropical sea surface temperatures and their teleconnections mix in higher-frequency variability. This implies that midlatitude PDO anomalies are more persistent than previously thought.

  14. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
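
    A toy sketch of the two-step cycle described above, on a contrived two-subsystem problem: it alternates concurrent local optimizations (system variable z frozen) with a system-level optimization. It omits BLISS's optimum-sensitivity linkage between the steps, so it illustrates the decomposition pattern rather than BLISS itself; all functions and values are made up.

    ```python
    from scipy.optimize import minimize_scalar

    # Subsystem objectives: each depends on the shared system variable z
    # and one local design variable x_k.
    f1 = lambda x1, z: (x1 - z) ** 2 + z ** 2
    f2 = lambda x2, z: (x2 + z) ** 2 + (z - 1.0) ** 2

    z = 5.0                                    # best-guess initial design
    for cycle in range(10):
        # Step 1: z frozen; the local subproblems are independent and
        # could be solved concurrently.
        x1 = minimize_scalar(lambda x: f1(x, z)).x
        x2 = minimize_scalar(lambda x: f2(x, z)).x
        # Step 2: locals frozen; improve the system-level variable.
        z = minimize_scalar(lambda zz: f1(x1, zz) + f2(x2, zz)).x
        print(f"cycle {cycle}: z={z:.4f}, "
              f"objective={f1(x1, z) + f2(x2, z):.6f}")
    ```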

  15. Analysis of Eigenvalue and Eigenfunction of Klein Gordon Equation Using Asymptotic Iteration Method for Separable Non-central Cylindrical Potential

    NASA Astrophysics Data System (ADS)

    Suparmi, A.; Cari, C.; Lilis Elviyanti, Isnaini

    2018-04-01

    The relativistic energy and wave function of zero-spin particles were analyzed using the Klein-Gordon equation with a separable non-central cylindrical potential, which was solved by the asymptotic iteration method (AIM). Using cylindrical coordinates, the Klein-Gordon equation for the spin-symmetric case was reduced to three one-dimensional Schrödinger-like equations that were solvable using the variable separation method. The relativistic energy was calculated numerically with Matlab software, and the general unnormalized wave function was expressed in terms of hypergeometric functions.
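
    The separation step mentioned above has the usual cylindrical form; schematically (our notation, with the separable non-central potential written in the shape standard in this literature):

    $$ \psi(r, \varphi, z) = R(r)\, \Phi(\varphi)\, Z(z), \qquad V(r, \varphi, z) = V_1(r) + \frac{V_2(\varphi)}{r^2} + V_3(z), $$

    so that substituting the ansatz into the Klein-Gordon equation yields three one-dimensional Schrödinger-like equations in r, φ and z, each of which is then handled by the asymptotic iteration method.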

  16. Mapping forest inventory and analysis data attributes within the framework of double sampling for stratification design

    Treesearch

    David C. Chojnacky; Randolph H. Wynne; Christine E. Blinn

    2009-01-01

    Methodology is lacking to easily map Forest Inventory and Analysis (FIA) inventory statistics for all attribute variables without having to develop separate models and methods for each variable. We developed a mapping method that can directly transfer tabular data to a map on which pixels can be added any way desired to estimate carbon (or any other variable) for a...

  17. Method and Apparatus for Separating Particles by Dielectrophoresis

    NASA Technical Reports Server (NTRS)

    Pant, Kapil (Inventor); Wang, Yi (Inventor); Bhatt, Ketan (Inventor); Prabhakarpandian, Balabhasker (Inventor)

    2014-01-01

    Particle separation apparatus separate particles and particle populations using dielectrophoretic (DEP) forces generated by one or more pairs of electrically coupled electrodes separated by a gap. Particles suspended in a fluid are separated by DEP forces generated by the at least one electrode pair at the gap as they travel over a separation zone comprising the electrode pair. Selected particles are deflected relative to the flow of incoming particles by DEP forces that are controlled through the applied potential, the gap width, and the angle of the linear gaps with respect to the fluid flow. The gap between an electrode pair may be a single linear gap of constant width, a single linear gap of variable width, or two or more linear gaps of constant or variable width set at different angles with respect to one another and to the flow.
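
    For context, the standard time-averaged DEP force on a small spherical particle (the textbook expression, not taken from the patent) is

    $$ \mathbf{F}_{\mathrm{DEP}} = 2\pi \varepsilon_m r^3\, \mathrm{Re}\!\left[ f_{\mathrm{CM}}(\omega) \right] \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^2, \qquad f_{\mathrm{CM}} = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\varepsilon_m^{*}}, $$

    where r is the particle radius and ε_m*, ε_p* are the complex permittivities of medium and particle. The field gradient is strongest at the electrode gap, which is why applied potential, gap width, and gap angle are the control variables of the apparatus.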

  18. Variables separation of the spectral BRDF for better understanding color variation in special effect pigment coatings.

    PubMed

    Ferrero, Alejandro; Rabal, Ana María; Campos, Joaquín; Pons, Alicia; Hernanz, María Luisa

    2012-06-01

    A type of representation of the spectral bidirectional reflectance distribution function (BRDF) is proposed that distinctly separates the spectral variable (wavelength) from the geometrical variables (spherical coordinates of the irradiation and viewing directions). Principal components analysis (PCA) is used in order to decompose the spectral BRDF in decorrelated spectral components, and the weight that they have at every geometrical configuration of irradiation/viewing is established. This method was applied to the spectral BRDF measurement of a special effect pigment sample, and four principal components with relevant variance were identified. These four components are enough to reproduce the great diversity of spectral reflectances observed at different geometrical configurations. Since this representation is able to separate spectral and geometrical variables, it facilitates the interpretation of the color variation of special effect pigments coatings versus the geometrical configuration of irradiation/viewing.
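
    In formula form, the proposed variable separation reads (our schematic notation):

    $$ f_r(\lambda;\ \theta_i, \phi_i, \theta_s, \phi_s) \;\approx\; \bar{f}(\lambda) + \sum_{k=1}^{4} w_k(\theta_i, \phi_i, \theta_s, \phi_s)\, s_k(\lambda), $$

    where the s_k(λ) are the decorrelated spectral principal components and the weights w_k carry all the dependence on the irradiation/viewing geometry; the paper finds four components with relevant variance to be sufficient.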

  19. A stepwedge-based method for measuring breast density: observer variability and comparison with human reading

    NASA Astrophysics Data System (ADS)

    Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan

    2010-04-01

    Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.

  20. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
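
    For concreteness, after discretization a two-parameter case of the algebraic MEP mentioned above takes the form (standard notation in this field; sign conventions vary):

    $$ (A_1 + \lambda B_1 + \mu C_1)\, x_1 = 0, \qquad (A_2 + \lambda B_2 + \mu C_2)\, x_2 = 0, $$

    with one equation per separated coordinate: the pair (λ, μ) of separation constants plays the role of a joint eigenvalue, and x₁, x₂ are the discretized factor functions. Spectral collocation keeps the matrices small and dense, which is what makes the algebraic MEP solvers effective here.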

  1. Variations in thematic mapper spectra of soil related to tillage and crop residue management - Initial evaluation

    NASA Technical Reports Server (NTRS)

    Seeley, M. W.; Ruschy, D. L.; Linden, D. R.

    1983-01-01

    A cooperative research project was initiated in 1982 to study differences in thematic mapper spectral characteristics caused by variable tillage and crop residue practices. Initial evaluations of radiometric data suggest that spectral separability of variably tilled soils can be confounded by moisture and weathering effects. Separability of bare tilled soils from those with significant amounts of corn residue is enhanced by wet conditions, but still possible under dry conditions when recent tillage operations have occurred. In addition, thematic mapper data may provide an alternative method to study the radiant energy balance at the soil surface in conjunction with variable tillage systems.

  2. Laplace Boundary-Value Problem in Paraboloidal Coordinates

    ERIC Educational Resources Information Center

    Duggen, L.; Willatzen, M.; Voon, L. C. Lew Yan

    2012-01-01

    This paper illustrates both a problem in mathematical physics, whereby the method of separation of variables, while applicable, leads to three ordinary differential equations that remain fully coupled via two separation constants and a five-term recurrence relation for series solutions, and an exactly solvable problem in electrostatics, as a…

  3. Optimization of Robust HPLC Method for Quantitation of Ambroxol Hydrochloride and Roxithromycin Using a DoE Approach.

    PubMed

    Patel, Rashmin B; Patel, Nilay M; Patel, Mrunali R; Solanki, Ajay B

    2017-03-01

    The aim of this work was to develop and optimize a robust HPLC method for the separation and quantitation of ambroxol hydrochloride and roxithromycin utilizing a Design of Experiments (DoE) approach. The Plackett-Burman design was used to assess the impact of independent variables (concentration of organic phase, mobile phase pH, flow rate and column temperature) on peak resolution, USP tailing and number of plates. A central composite design was utilized to evaluate the main, interaction, and quadratic effects of the independent variables on the selected dependent variables. The optimized HPLC method was validated based on the ICH Q2R1 guideline and was used to separate and quantify ambroxol hydrochloride and roxithromycin in tablet formulations. The findings showed that the DoE approach could be effectively applied to optimize a robust HPLC method for quantification of ambroxol hydrochloride and roxithromycin in tablet formulations. Statistical comparison between the results of the proposed and a previously reported HPLC method revealed no significant difference, indicating the suitability of the proposed HPLC method for analysis of ambroxol hydrochloride and roxithromycin in pharmaceutical formulations. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Gaia DR2 documentation Chapter 7: Variability

    NASA Astrophysics Data System (ADS)

    Eyer, L.; Guy, L.; Distefano, E.; Clementini, G.; Mowlavi, N.; Rimoldini, L.; Roelens, M.; Audard, M.; Holl, B.; Lanzafame, A.; Lebzelter, T.; Lecoeur-Taïbi, I.; Molnár, L.; Ripepi, V.; Sarro, L.; Jevardat de Fombelle, G.; Nienartowicz, K.; De Ridder, J.; Juhász, Á.; Molinaro, R.; Plachy, E.; Regibo, S.

    2018-04-01

    This chapter of the Gaia DR2 documentation describes the models and methods used on the 22 months of data to produce the Gaia variable star results for Gaia DR2. The variability processing and analysis was based mostly on the calibrated G and integrated BP and RP photometry. The variability analysis approach to the Gaia data has been described in Eyer et al. (2017), and the Gaia DR2 results are presented in Holl et al. (2018). Detailed methods on specific topics will be published in a number of separate articles. Variability behaviour in the colour magnitude diagram is presented in Gaia Collaboration et al. (2018c).

  5. The Modelling of Axially Translating Flexible Beams

    NASA Astrophysics Data System (ADS)

    Theodore, R. J.; Arakeri, J. H.; Ghosal, A.

    1996-04-01

    The axially translating flexible beam with a prismatic joint can be modelled by using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, a non-dimensional form of the Euler-Bernoulli beam equation is presented, obtained by using the concept of group velocity, together with the conditions under which separation of variables and the assumed modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. A novel method is presented for solving this time-dependent frequency equation by using a differential form of the frequency equation. The assumed modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. It is shown by using Lyapunov's first method that the dynamic responses of the flexural modal variables become unstable during retraction of the flexible beam, while the dynamic response during extension of the beam is stable. Numerical simulation results are presented for the transverse vibration induced by uniform axial motion of a typical flexible beam.
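
    Schematically, the governing equation referred to above is the Euler-Bernoulli equation augmented with the convective terms generated by the axial transport velocity v(t) (our notation; sign conventions vary):

    $$ \rho A \left( \frac{\partial}{\partial t} + v \frac{\partial}{\partial x} \right)^{\!2} w + EI\, \frac{\partial^4 w}{\partial x^4} = 0, $$

    where expanding the material derivative produces the mixed term 2ρAv w_{xt} and the centrifugal-like term ρAv² w_{xx}; it is these convective terms that obstruct ordinary separation of variables.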

  6. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    PubMed

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compounds (VOCs) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not highlight the most influencing variables for each separation, whereas the ICA Loadings highlighted the same variables as did CCA. This study shows the potential of CCA for the extraction of pertinent information from a data matrix, using a procedure based on an original optimisation criterion, to produce results that are complementary, and in some cases may be superior, to those of PCA and ICA. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. [Application of Kohonen Self-Organizing Feature Maps in QSAR of human ADMET and kinase data sets].

    PubMed

    Hegymegi-Barakonyi, Bálint; Orfi, László; Kéri, György; Kövesdi, István

    2013-01-01

    QSAR predictions have been proven very useful in a large number of studies for drug design, such as the design of kinase inhibitors as targets for cancer therapy; however, the overall predictability often remains unsatisfactory. To improve the predictability of ADMET features and kinase inhibitory data, we present a new method using Kohonen's Self-Organizing Feature Map (SOFM) to cluster molecules based on explanatory variables (X) and separate dissimilar ones. We calculated SOFM clusters for a large number of molecules with human ADMET and kinase inhibitory data, and we showed that chemically similar molecules were in the same SOFM cluster, and that within such clusters the QSAR models had significantly better predictability. We also used target variables (Y, e.g. ADMET) jointly with X variables to create a novel type of clustering. With our method, cells of loosely coupled XY data could be identified and separated into different model-building sets.
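
    A minimal sketch of the clustering step described above, using the third-party minisom package as a stand-in SOFM implementation (not the authors' software); grid size, descriptors, and data are made-up placeholders.

    ```python
    import numpy as np
    from minisom import MiniSom  # assumes the 'minisom' package is installed

    # Toy stand-in for molecular descriptor vectors (the X variables).
    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 16))

    som = MiniSom(6, 6, 16, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train(X, 5000)

    # Each molecule is assigned to its best-matching unit; molecules sharing
    # a cell form a cluster, within which a local QSAR model would be fitted.
    clusters = {}
    for i, x in enumerate(X):
        clusters.setdefault(som.winner(x), []).append(i)
    print({cell: len(ids) for cell, ids in list(clusters.items())[:5]})
    ```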

  8. Hydraulics Graphics Package. Users Manual

    DTIC Science & Technology

    1985-11-01

    [OCR-damaged excerpt from an interactive session; the recoverable content is the repeated prompt "ENTER: VARIABLE/SEPARATOR/VALUE OR STRING" with example entries GLBL, TETON DAM FAILURE; SLOC, DISCHARGE HISTOGRAM; YLBL, FLOW IN 1000 CFS; SECNO, 0; GO. The remainder is unrecoverable residue of a plotted Teton Dam failure discharge histogram.]

  9. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample sizes

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. As a result, the MLE estimators fail to converge, so they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in binary probit regression models under the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method for small sample sizes is higher than under Firth's approach. For larger sample sizes, the probability decreases and is relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
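
    For reference, Firth's approach replaces the log-likelihood ℓ(β) by the Jeffreys-prior-penalized version

    $$ \ell^{*}(\boldsymbol{\beta}) = \ell(\boldsymbol{\beta}) + \tfrac{1}{2} \log \left| I(\boldsymbol{\beta}) \right|, $$

    where I(β) is the Fisher information matrix. The penalty keeps the estimates finite even under complete separation, which is why its estimators remain usable where plain MLE diverges.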

  10. WWC Review of the Report "Learning the Control of Variables Strategy in Higher and Lower Achieving Classrooms: Contributions of Explicit Instruction and Experimentation"

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2012

    2012-01-01

    The study reviewed in this paper examined three separate methods for teaching the "control of variables strategy" ("CVS"), a procedure for conducting a science experiment so that only one variable is tested and all others are held constant, or "controlled." The study analyzed data from a randomized controlled trial of…

  11. Analysis of spatial heterogeneity in normal epithelium and preneoplastic alterations in mouse prostate tumor models

    PubMed Central

    Valkonen, Mira; Ruusuvuori, Pekka; Kartasalo, Kimmo; Nykter, Matti; Visakorpi, Tapio; Latonen, Leena

    2017-01-01

    Cancer involves histological changes in tissue, which is of primary importance in pathological diagnosis and research. Automated histological analysis requires ability to computationally separate pathological alterations from normal tissue with all its variables. On the other hand, understanding connections between genetic alterations and histological attributes requires development of enhanced analysis methods suitable also for small sample sizes. Here, we set out to develop computational methods for early detection and distinction of prostate cancer-related pathological alterations. We use analysis of features from HE stained histological images of normal mouse prostate epithelium, distinguishing the descriptors for variability between ventral, lateral, and dorsal lobes. In addition, we use two common prostate cancer models, Hi-Myc and Pten+/− mice, to build a feature-based machine learning model separating the early pathological lesions provoked by these genetic alterations. This work offers a set of computational methods for separation of early neoplastic lesions in the prostates of model mice, and provides proof-of-principle for linking specific tumor genotypes to quantitative histological characteristics. The results obtained show that separation between different spatial locations within the organ, as well as classification between histologies linked to different genetic backgrounds, can be performed with very high specificity and sensitivity. PMID:28317907

  12. On-resonance Variable Delay Multi Pulse Scheme for Imaging of Fast-exchanging Protons and semi-solid Macromolecules

    PubMed Central

    Xu, Jiadi; Chan, Kannie W.Y.; Xu, Xiang; Yadav, Nibhay; Liu, Guanshu; van Zijl, Peter C. M.

    2016-01-01

    Purpose To develop an on-resonance variable delay multi-pulse (VDMP) scheme to image magnetization transfer contrast (MTC) as well as the chemical exchange saturation transfer (CEST) contrast of total fast-exchanging protons (TFP) with exchange rate above about 1 kHz. Methods A train of high power binomial pulses was applied at the water resonance. The inter-pulse delay, called mixing time, was varied to observe its effect on the water signal reduction, allowing separation and quantification of MTC and CEST contributions due to their different proton transfer rates. The fast-exchanging protons in CEST and MTC are labeled together with the short T2 components in MTC and separated out using a variable mixing time. Results Phantom studies of selected metabolite solutions (glucose, glutamate, creatine, myo-inositol), bovine serum albumin (BSA) and hair conditioner show the capability of on-resonance VDMP to separate out exchangeable protons with exchange rates above 1 kHz. Quantitative MTC and TFP maps were acquired on healthy mouse brains using this method showing strong gray/white matter contrast for the slowly transferring MTC protons while the TFP map was more uniform across the brain but somewhat higher in gray matter. Conclusions The new method provides a simple way of imaging fast-exchanging protons, as well as MTC components with a slow transfer rate. PMID:26900759

  13. Applying probabilistic temporal and multisite data quality control methods to a public health mortality registry in Spain: a systematic approach to quality control of repositories.

    PubMed

    Sáez, Carlos; Zurriaga, Oscar; Pérez-Panadés, Jordi; Melchor, Inma; Robles, Montserrat; García-Gómez, Juan M

    2016-11-01

    To assess the variability in data distributions among data sources and over time through a case study of a large multisite repository as a systematic approach to data quality (DQ). Novel probabilistic DQ control methods based on information theory and geometry are applied to the Public Health Mortality Registry of the Region of Valencia, Spain, with 512 143 entries from 2000 to 2012, disaggregated into 24 health departments. The methods provide DQ metrics and exploratory visualizations for (1) assessing the variability among multiple sources and (2) monitoring and exploring changes with time. The methods are suited to big data and multitype, multivariate, and multimodal data. The repository was partitioned into 2 probabilistically separated temporal subgroups following a change in the Spanish National Death Certificate in 2009. Punctual temporal anomalies were noticed due to a punctual increment in the missing data, along with outlying and clustered health departments due to differences in populations or in practices. Changes in protocols, differences in populations, biased practices, or other systematic DQ problems affected data variability. Even if semantic and integration aspects are addressed in data sharing infrastructures, probabilistic variability may still be present. Solutions include fixing or excluding data and analyzing different sites or time periods separately. A systematic approach to assessing temporal and multisite variability is proposed. Multisite and temporal variability in data distributions affects DQ, hindering data reuse, and an assessment of such variability should be a part of systematic DQ procedures. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
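
    The paper's information-theoretic DQ metrics are its own; the following is only a generic illustration of the underlying idea — quantifying variability between per-site (or per-period) distributions with a probabilistic distance, here the Jensen-Shannon distance from SciPy. The data and variable names are ours.

    ```python
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    # Toy per-site distributions of a coded categorical variable
    # (rows: sites, columns: category probabilities).
    site_dists = np.array([
        [0.50, 0.30, 0.20],
        [0.48, 0.32, 0.20],
        [0.20, 0.30, 0.50],   # an outlying site, e.g. a different coding practice
    ])

    n = len(site_dists)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = jensenshannon(site_dists[i], site_dists[j])

    # A large average distance to the other sites flags a site for DQ review.
    print("mean JS distance per site:", D.mean(axis=1).round(3))
    ```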

  14. Quadratic time dependent Hamiltonians and separation of variables

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2017-06-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.
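
    For reference, the class of systems meant here is, in generic notation, Hamiltonians at most quadratic in positions and momenta with time-dependent coefficients,

    $$ H(t) = \tfrac{1}{2}\, \mathbf{p}^{\top} A(t)\, \mathbf{p} + \mathbf{p}^{\top} B(t)\, \mathbf{q} + \tfrac{1}{2}\, \mathbf{q}^{\top} C(t)\, \mathbf{q} + \mathbf{a}(t) \cdot \mathbf{p} + \mathbf{b}(t) \cdot \mathbf{q}, $$

    whose linear-plus-quadratic structure is what allows the evolution operator to be factored through the semi-direct product of the Heisenberg and symplectic groups mentioned in the abstract.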

  15. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of the alternating phase elution steps has been developed and validated, and the MDM separation processes with variable step duration are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: 1) one-step solute elution, in which the separation is conducted so that the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, after which each individual component elutes entirely within one step; and 2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared with the predictions of the theory, and good agreement between theory and experiment is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points. Successful implementation of these algorithms was achieved on various test problems.

  17. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results. PMID:27780261
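
    In formula form, the MaxEnt criterion described above can be stated as follows (our schematic notation): given an observation x = f(Σᵢ aᵢ sᵢ) with f monotonic, choose the compensator g_θ (polynomial or neural-network parameterized, as in the paper) by

    $$ \hat{\theta} = \arg\max_{\theta}\ H\!\left( g_{\theta}(x) \right) \quad \text{subject to a fixed output scale}, $$

    since, for fixed variance, entropy H is maximized by a Gaussian and the underlying linear mixture is the "most Gaussian" attainable signal; the entropy-maximizing g_θ therefore approximates f⁻¹ up to an affine ambiguity.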

  18. Platelet-rich plasma differs according to preparation method and human variability.

    PubMed

    Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Cote, Mark P; Romeo, Anthony A; Bradley, James P; Arciero, Robert A; Beitzel, Knut

    2012-02-15

    Varying concentrations of blood components in platelet-rich plasma preparations may contribute to the variable results seen in recently published clinical studies. The purposes of this investigation were (1) to quantify the level of platelets, growth factors, red blood cells, and white blood cells in so-called one-step (clinically used commercial devices) and two-step separation systems and (2) to determine the influence of three separate blood draws on the resulting components of platelet-rich plasma. Three different platelet-rich plasma (PRP) separation methods (on blood samples from eight subjects with a mean age [and standard deviation] of 31.6 ± 10.9 years) were used: two single-spin processes (PRPLP and PRPHP) and a double-spin process (PRPDS) were evaluated for concentrations of platelets, red and white blood cells, and growth factors. Additionally, the effect of three repetitive blood draws on platelet-rich plasma components was evaluated. The content and concentrations of platelets, white blood cells, and growth factors for each method of separation differed significantly. All separation techniques resulted in a significant increase in platelet concentration compared with native blood. Platelet and white blood-cell concentrations of the PRPHP procedure were significantly higher than platelet and white blood-cell concentrations produced by the so-called single-step PRPLP and the so-called two-step PRPDS procedures, although significant differences between PRPLP and PRPDS were not observed. Comparing the results of the three blood draws with regard to the reliability of platelet number and cell counts, wide variations of intra-individual numbers were observed. Single-step procedures are capable of producing sufficient amounts of platelets for clinical usage. Within the evaluated procedures, platelet numbers and numbers of white blood cells differ significantly. The intra-individual results of platelet-rich plasma separations showed wide variations in platelet and cell numbers as well as levels of growth factors regardless of separation method.

  19. Using factorial experimental design to evaluate the separation of plastics by froth flotation.

    PubMed

    Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa

    2018-03-01

    This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate and pH. A three-level full factorial design was conducted. The models establishing the relationships between the manipulated variables and their interactions with the responses (first order kinetic model parameters) were built. The Corrected Akaike Information Criterion was used to select the best fit model and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability within the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
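
    The process responses used above are the parameters of the standard first-order flotation kinetic model, which in its usual form reads

    $$ R(t) = R_{\infty} \left( 1 - e^{-kt} \right), $$

    where R(t) is the cumulative recovery of a given plastic at flotation time t, R_∞ the ultimate recovery, and k the flotation rate constant. The factorial design then models how frother concentration, air flow rate, and pH (and their interactions) shift R_∞ and k for PVC versus PS.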

  1. Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series

    NASA Astrophysics Data System (ADS)

    Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.

    2017-12-01

    Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatial-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating the different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate out the modes explaining the internal variability. The method we present is aimed at overcoming both of these problems. It is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes external forcing signals into account. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto the data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used to extract the principal modes of SST variability on interannual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity, and volcanic activity. The structure of the revealed teleconnection patterns as well as their forecast under different CO2 emission scenarios are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016). Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101.

  2. On Darboux's approach to R-separability of variables. Classification of conformally flat 4-dimensional binary metrics

    NASA Astrophysics Data System (ADS)

    Szereszewski, A.; Sym, A.

    2015-09-01

    The standard method of separation of variables in PDEs, called the Stäckel-Robertson-Eisenhart (SRE) approach, originated in the papers by Robertson (1928 Math. Ann. 98 749-52) and Eisenhart (1934 Ann. Math. 35 284-305) on separability of variables in the Schrödinger equation defined on a pseudo-Riemannian space equipped with orthogonal coordinates, which in turn were based on the purely classical mechanics results by Paul Stäckel (1891, Habilitation Thesis, Halle). These still fundamental results have been further extended in diverse directions by e.g. Havas (1975 J. Math. Phys. 16 1461-8; J. Math. Phys. 16 2476-89) or Koornwinder (1980 Lecture Notes in Mathematics 810 (Berlin: Springer) pp 240-63). The involved separability is always ordinary (factor R = 1) and regular (maximum number of independent parameters in separation equations). A different approach to separation of variables was initiated by Gaston Darboux (1878 Ann. Sci. E.N.S. 7 275-348), which has been almost completely forgotten in today’s research on the subject. Darboux’s paper was devoted to the so-called R-separability of variables in the standard Laplace equation. At the outset he did not make any specific assumption about the separation equations (this is in sharp contrast to the SRE approach). After impressive calculations Darboux obtained a complete solution of the problem. He found not only the eleven cases of ordinary separability of Eisenhart (1934 Ann. Math. 35 284-305) but also Darboux-Moutard-cyclidic metrics (Bôcher 1894 Ueber die Reihenentwickelungen der Potentialtheorie (Leipzig: Teubner)) and non-regularly separable Dupin-cyclidic metrics as well. In our previous paper, Darboux’s approach was extended to the case of the stationary Schrödinger equation on Riemannian spaces admitting orthogonal coordinates. In particular, the class of isothermic metrics was defined (isothermicity of the metric is a necessary condition for its R-separability). An important sub-class of isothermic metrics are binary metrics. In this paper we solve the following problem: to classify all conformally flat (of arbitrary signature) 4-dimensional binary metrics. Among them there are 1) those that are separable in the SRE sense (Kalnins-Miller 1978 Trans. Am. Math. Soc. 244 241-61; 1982 J. Phys. A: Math. Gen. 15 2699-709; 1984 Adv. Math. 51 91-106; 1983 SIAM J. Math. Anal. 14 126-37) and 2) new examples of non-Stäckel R-separability in 4 dimensions.
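
    For readers meeting the term for the first time, the display below states R-separability in its commonly used form; the notation (R, f_i) is generic rather than Darboux's own.

```latex
% R-separability of the Laplace equation: the solution factorizes only after
% division by a fixed modulation function R shared by the whole family.
\[
  \Delta\psi = 0, \qquad
  \psi(x^{1},\dots,x^{n}) \;=\; R(x^{1},\dots,x^{n})\,\prod_{i=1}^{n} f_{i}(x^{i}).
\]
% Ordinary separability is the special case R \equiv 1; as the abstract notes,
% isothermicity of the metric is a necessary condition for R-separability.
```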

  3. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combinations. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables, and program job-control setup is given. Three test cases are presented as guides for potential users of the code.

  4. A new approach in space-time analysis of multivariate hydrological data: Application to Brazil's Nordeste region rainfall

    NASA Astrophysics Data System (ADS)

    Sicard, Emeline; Sabatier, Robert; Niel, HéLèNe; Cadier, Eric

    2002-12-01

    The objective of this paper is to implement an original method for spatial and multivariate data, combining a method of three-way array analysis (STATIS) with geostatistical tools. The variables of interest are the monthly amounts of rainfall in the Nordeste region of Brazil, recorded from 1937 to 1975. The principle of the technique is the calculation of a linear combination of the initial variables, containing a large part of the initial variability and taking into account the spatial dependencies. It is a promising method that is able to analyze triple variability: spatial, seasonal, and interannual. In our case, the first component obtained discriminates a group of rain gauges, corresponding approximately to the Agreste, from all the others. The monthly variables of July and August strongly influence this separation. Furthermore, an annual study brings out the stability of the spatial structure of components calculated for each year.

  5. STICK-SLIP-SEPARATION Analysis and Non-Linear Stiffness and Damping Characterization of Friction Contacts Having Variable Normal Load

    NASA Astrophysics Data System (ADS)

    Yang, B. D.; Chu, M. L.; Menq, C. H.

    1998-03-01

    Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transitions between stick, slip, and separation of the interface. The established transition criteria are particularly important to the proposed friction contact model because the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to the contact plane can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness methods, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.

  6. Identification of independent storm events: Seasonal and spatial variability of times between storms in Alpine area

    NASA Astrophysics Data System (ADS)

    Iadanza, Carla; Rianna, Maura; Orlando, Dario; Ubertini, Lucio; Napolitano, Francesco

    2013-10-01

    The aim of the paper is the identification of rain events that trigger landslides, using an exponential method to separate stochastically independent events. This work is carried out within the definition of empirical rainfall thresholds for debris flows and shallow landslides. The study area is the Trento district, located in the northeastern zone of an Alpine area. The work evaluates the factors that affect the variability in space and time of the critical duration at each rain gauge, defined as the minimum dry-period duration that separates two rainy periods that are stochastically independent.
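
    A minimal sketch of the exponential-method criterion described here, under the common assumption that independent storm arrivals are Poisson, so that inter-event times are exponential with a coefficient of variation (CV) of 1; all function names and event times are illustrative.

```python
# Hedged sketch (invented names and data): grow the dry-period threshold until
# the times between merged storms have CV <= 1, as expected for Poisson arrivals.
import numpy as np

def coefficient_of_variation(x):
    return np.std(x) / np.mean(x)

def critical_duration(event_start, event_end, candidate_hours):
    """Smallest dry-period threshold (hours) whose merged events yield
    inter-event times with CV <= 1, i.e. plausibly independent storms."""
    for h in sorted(candidate_hours):
        starts, ends = [event_start[0]], [event_end[0]]
        for s, e in zip(event_start[1:], event_end[1:]):
            if s - ends[-1] < h:
                ends[-1] = e                  # gap too short: same storm
            else:
                starts.append(s)
                ends.append(e)
        dry = np.array(starts[1:]) - np.array(ends[:-1])
        if len(dry) > 1 and coefficient_of_variation(dry) <= 1.0:
            return h
    return None

start = np.array([0.0, 5.0, 7.0, 30.0, 33.0, 80.0])   # storm start times (h)
end = np.array([2.0, 6.0, 9.0, 31.0, 35.0, 83.0])     # storm end times (h)
print(critical_duration(start, end, candidate_hours=range(1, 49)))
```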

  7. Selective spectroscopic imaging of hyperpolarized pyruvate and its metabolites using a single-echo variable phase advance method in balanced SSFP

    PubMed Central

    Varma, Gopal; Wang, Xiaoen; Vinogradov, Elena; Bhatt, Rupal S.; Sukhatme, Vikas; Seth, Pankaj; Lenkinski, Robert E.; Alsop, David C.; Grant, Aaron K.

    2015-01-01

    Purpose In balanced steady state free precession (bSSFP), the signal intensity has a well-known dependence on the off-resonance frequency or, equivalently, the phase advance between successive radiofrequency (RF) pulses. The signal profile can be used to resolve the contributions from spectrally separated metabolites. This work describes a method based on the use of a variable RF phase advance to acquire spatial and spectral data in a time-efficient manner for hyperpolarized 13C MRI. Theory and Methods The technique relies on the frequency response of a bSSFP acquisition to acquire relatively rapid, high-resolution images that may be reconstructed to separate contributions from different metabolites. The ability to produce images from spectrally separated metabolites was demonstrated in vitro, as well as in vivo following administration of hyperpolarized 1-13C pyruvate in mice with xenograft tumors. Results In vivo images of pyruvate, alanine, pyruvate hydrate and lactate were reconstructed from 4 images acquired in 2 seconds with an in-plane resolution of 1.25 × 1.25 mm² and a 5 mm slice thickness. Conclusions The phase advance method allowed acquisition of spectroscopically selective images with high spatial and temporal resolution. This method provides an alternative approach to hyperpolarized 13C spectroscopic MRI that can be combined with other techniques such as multi-echo or fluctuating equilibrium bSSFP. PMID:26507361

  8. Multivariate modelling and personality organization: a comparative study of the Defense Mechanism Test and linguistic expressions.

    PubMed

    Sundbom, E; Jeanneau, M

    1996-03-01

    The main aim of the study is to establish an empirical connection between perceptual defences as measured by the Defense Mechanism Test (DMT)--a projective percept-genetic method--and manifest linguistic expressions based on word pattern analyses. The subjects were 25 psychiatric patients with the diagnoses neurotic personality organization (NPO), borderline personality organization (BPO) and psychotic personality organization (PPO) in accordance with Kernberg's theory. A set of 130 DMT variables and 40 linguistic variables were analyzed by means of partial least squares (PLS) discriminant analysis separately and then pooled together. The overall hypothesis was that it would be possible to define the personality organization of the patients in terms of an amalgam of perceptual defences and word patterns, and that these two kinds of data would confirm each other. The result of the combined PLS analysis revealed a very good separation between the diagnostic groups as measured by the pooled variable sets. Among other things, it was shown that NPO patients are principally characterized by linguistic variables, whereas BPO and PPO patients are better defined by perceptual defences as measured by the DMT method.

  9. Mathematical Methods for Physics and Engineering Third Edition Paperback Set

    NASA Astrophysics Data System (ADS)

    Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.

    2006-06-01

    Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.

  10. On the phase form of a deformation quantization with separation of variables

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2016-06-01

    Given a star product with separation of variables on a pseudo-Kähler manifold, we obtain a new formal (1, 1)-form from its classifying form and call it the phase form of the star product. The cohomology class of a star product with separation of variables equals the class of its phase form. We show that the phase forms can be arbitrary and they bijectively parametrize the star products with separation of variables. We also describe the action of a change of the formal parameter on a star product with separation of variables, its formal Berezin transform, classifying form, phase form, and canonical trace density.

  11. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm with the Levenberg-Marquardt method is more effective than its combination with the Gauss-Newton method.
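
    The core of the variable projection idea, eliminating the linear coefficients inside the residual so that the outer solver only sees the nonlinear parameters, can be sketched in a few lines. This numerical elimination is a simplification of the Golub-Pereyra treatment (no analytic VP Jacobian is formed), and the two-exponential model and data are invented.

```python
# A minimal variable-projection-style sketch, assuming the toy model
# y(t) = c1*exp(-a1*t) + c2*exp(-a2*t), with c linear and a nonlinear.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.4 * t)   # noiseless toy data

def reduced_residual(alpha):
    """For fixed nonlinear parameters alpha, the optimal linear coefficients
    solve an ordinary linear least squares problem, so only alpha remains."""
    phi = np.exp(-np.outer(t, alpha))                 # basis matrix Phi(alpha)
    c, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return phi @ c - y

sol = least_squares(reduced_residual, x0=np.array([1.0, 0.1]))
print("recovered decay rates:", np.sort(sol.x))       # expected near 0.4, 1.3
```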

  12. 17 CFR 270.6c-3 - Exemptions for certain registered variable life insurance separate accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... registered variable life insurance separate accounts. 270.6c-3 Section 270.6c-3 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) RULES AND REGULATIONS, INVESTMENT COMPANY ACT OF 1940 § 270.6c-3 Exemptions for certain registered variable life insurance separate accounts. A separate...

  13. Adsorption Kinetics of Manganese (II) in wastewater of Chemical laboratory with Column Method using Sugarcane Bagasse as Adsorbent

    NASA Astrophysics Data System (ADS)

    Zaini, H.; Abubakar, S.; Saifuddin

    2018-01-01

    The purpose of this research is to separate manganese (II) from chemical-laboratory wastewater using sugarcane bagasse as an adsorbent. In the experimental design, the independent variables are the contact time (0, 30, 60, 90, 120, 150, 180, 210, and 240 minutes) and the activation treatment (without activation, physical activation, activation by 0.5 N H2SO4, and activation by 0.5 N NaOH). Fixed variables consist of the adsorbent mass (50 g), adsorbent particle size (30 mesh), flow rate (7 L/min), and volume of treated solution (10 L). The dependent variable is the concentration of manganese. The results showed that the separation of manganese by the adsorption method was influenced by contact time and activation type. The kinetic studies show that the adsorption mechanism satisfies the pseudo-second-order kinetics model. The maximum adsorption capacity (qm) is 0.971 mg/g for the untreated adsorbent, 0.889 mg/g for physical treatment, 0.858 mg/g for chemical treatment by H2SO4, and 1.016 mg/g for chemical treatment by NaOH.
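
    As an illustration of the model the kinetic study refers to, here is a minimal sketch of fitting the standard pseudo-second-order form q(t) = k qe² t / (1 + k qe t) to uptake data; the numbers are invented, not the study's measurements.

```python
# A minimal sketch, assuming toy uptake data: fit the pseudo-second-order
# kinetic model and report the equilibrium capacity qe and rate constant k.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    return (k * qe**2 * t) / (1.0 + k * qe * t)

t = np.array([30, 60, 90, 120, 150, 180, 210, 240], dtype=float)  # minutes
q = np.array([0.35, 0.55, 0.68, 0.76, 0.82, 0.86, 0.89, 0.91])    # mg/g, toy

(qe, k), _ = curve_fit(pseudo_second_order, t, q, p0=(1.0, 0.01))
print(f"qe = {qe:.3f} mg/g, k = {k:.4f} g/(mg*min)")
```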

  14. Identification of Variables Associated with Group Separation in Descriptive Discriminant Analysis: Comparison of Methods for Interpreting Structure Coefficients

    ERIC Educational Resources Information Center

    Finch, Holmes

    2010-01-01

    Discriminant Analysis (DA) is a tool commonly used for differentiating among 2 or more groups based on 2 or more predictor variables. DA works by finding 1 or more linear combinations of the predictors that yield maximal difference among the groups. One common goal of researchers using DA is to characterize the nature of group difference by…

  15. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

    We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far, only distance decay functions with constant parameters have been applied. We therefore developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β with readily available distribution parameters (the median and standard deviation (SD)) within a logistic distance decay function. Hence, the function is shaped individually for every population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes, defined by the asymptotic approach of the distance decay function to zero, was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method in an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduce, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherits effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
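
    Since the paper's exact functional form is not reproduced in this record, the following is only a plausible sketch of a logistic distance decay shaped by the local median and SD of population-to-provider distances; the function name, parameterization, and numbers are all illustrative guesses, not the published iFCA formula.

```python
# A hedged sketch, assuming a logistic decay whose midpoint is the median
# provider distance and whose steepness is tied to the SD of distances.
import numpy as np

def logistic_decay(d, median, sd):
    """Weight in (0, 1]: about 0.5 at the median distance, falling toward 0
    beyond it at a rate controlled by the spread of observed distances."""
    return 1.0 / (1.0 + np.exp((d - median) / sd))

distances = np.array([1.2, 2.5, 3.1, 4.8, 7.9, 12.4])  # km, one location
weights = logistic_decay(distances, np.median(distances), distances.std())
print(np.round(weights, 3))
```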

  16. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  17. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    NASA Astrophysics Data System (ADS)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  18. 17 CFR 270.6e-2 - Exemptions for certain variable life insurance separate accounts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... variable life insurance separate accounts. 270.6e-2 Section 270.6e-2 Commodity and Securities Exchanges...-2 Exemptions for certain variable life insurance separate accounts. (a) A separate account, and the... a life insurance company pursuant to the insurance laws or code of (i) any state or territory of the...

  19. 17 CFR 270.6e-2 - Exemptions for certain variable life insurance separate accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... variable life insurance separate accounts. 270.6e-2 Section 270.6e-2 Commodity and Securities Exchanges...-2 Exemptions for certain variable life insurance separate accounts. (a) A separate account, and the... a life insurance company pursuant to the insurance laws or code of (i) any state or territory of the...

  20. 17 CFR 270.6e-2 - Exemptions for certain variable life insurance separate accounts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... variable life insurance separate accounts. 270.6e-2 Section 270.6e-2 Commodity and Securities Exchanges...-2 Exemptions for certain variable life insurance separate accounts. (a) A separate account, and the... a life insurance company pursuant to the insurance laws or code of (i) any state or territory of the...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niccoli, G.

    The antiperiodic transfer matrices associated to higher spin representations of the rational 6-vertex Yang-Baxter algebra are analyzed by generalizing the approach introduced recently in the framework of Sklyanin's quantum separation of variables (SOV) for cyclic representations, spin-1/2 highest weight representations, and spin-1/2 representations of the 6-vertex reflection algebra. This SOV approach allows us to derive exact results that represent complicated tasks for more traditional methods based on the Bethe ansatz and Baxter Q-operator. In particular, we prove both the completeness and the simplicity of the SOV characterization of the transfer matrix spectrum. The derived characterization of local operators by Sklyanin's quantum separate variables and the expression of the scalar products of separate states by determinant formulae then allow us to compute the form factors of the local spin operators by a single determinant formula similar to those of the scalar products.

  2. Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2006-03-01

    Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.

  3. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1988-01-01

    The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.

  4. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but reduced the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and that the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions can be used as a first estimate of the variability of this type of sewer system.

  5. Another convex combination of product states for the separable Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuma, Hiroo; Ban, Masashi

    2006-03-15

    In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: the critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.

  6. Stability of spanwise-modulated flows behind backward-facing steps

    NASA Astrophysics Data System (ADS)

    Boiko, A. V.; Dovgal, A. V.; Sorokin, A. M.

    2017-10-01

    An overview and synthesis of research on the development of local vortical disturbances in laminar separated flows downstream of backward-facing steps, in which the velocity field depends essentially on two variables, is given. Peculiarities of the transition to turbulence in such spatially inhomogeneous separated zones are discussed. The experimental data are supplemented by the linear stability characteristics of model velocity profiles of the separated flow, computed using both the classical local formulation and a nonlocal approach based on Floquet theory for partial differential equations with periodic coefficients. The results clarify the response of local separated flows to modulation by stationary geometrical and temperature inhomogeneities. The results can be useful for the development of new methods of laminar separation control.

  7. Separation of organic cations using novel background electrolytes by capillary electrophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, S.; Fritz, J.

    2008-02-12

    A background electrolyte for capillary electrophoresis containing tris(hydroxymethyl)aminomethane (THAM) and ethanesulfonic acid (ESA) gives excellent efficiency for the separation of drug cations, with actual theoretical plate numbers as high as 300,000. However, the analyte cations often elute too quickly and consequently offer only a narrow window for separation. The best way to correct this is to induce a reverse electroosmotic flow (EOF) that will spread out the peaks by slowing their migration rates, but this has always been difficult to accomplish in a controlled manner. A new method for producing a variable EOF is described in which a low, variable concentration of tributylammonium or triethylammonium ESA is added to the BGE. The additive equilibrates with the capillary wall to give it a positive charge and thereby produce a controlled opposing EOF. Excellent separations of complex drug mixtures were obtained by this method.

  8. A streamlined artificial variable free version of simplex method.

    PubMed

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  9. A Streamlined Artificial Variable Free Version of Simplex Method

    PubMed Central

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement. PMID:25767883

  10. Application of quality by design concept to develop a dual gradient elution stability-indicating method for cloxacillin forced degradation studies using combined mixture-process variable models.

    PubMed

    Zhang, Xia; Hu, Changqin

    2017-09-08

    Penicillins are typical of complex ionic samples, which are likely to contain a large number of degradation-related impurities (DRIs) with different polarities and charge properties. It is often a challenge to develop selective and robust high performance liquid chromatography (HPLC) methods for the efficient separation of all DRIs. In this study, an analytical quality by design (AQbD) approach was proposed for stability-indicating method development for cloxacillin. The structures and the retention and UV characteristics of penicillins and their impurities were summarized and served as useful prior knowledge. Through quality risk assessment and a screening design, 3 critical process parameters (CPPs) were defined, including 2 mixture variables (MVs) and 1 process variable (PV). A combined mixture-process variable (MPV) design was conducted to evaluate the 3 CPPs simultaneously, and response surface methodology (RSM) was used to find the optimal experimental parameters. A dual gradient elution was performed to change the buffer pH and the mobile-phase type and strength simultaneously. The design spaces (DSs) were evaluated using Monte Carlo simulation to estimate their probability of meeting the specifications of the critical quality attributes (CQAs). A Plackett-Burman design was performed to test robustness around the working points and to decide the normal operating ranges (NORs). Finally, validation was performed following International Conference on Harmonisation (ICH) guidelines. To our knowledge, this is the first study to use an MPV design and dual gradient elution to develop HPLC methods and improve separations for complex ionic samples.

  11. Investigation of Super*Zip separation joint

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Schimmel, Morry L.

    1988-01-01

    An investigation was conducted to determine the most likely cause of two failures in five tests of 79-inch-diameter Lockheed Super*Zip spacecraft separation joints being used in the development of a Shuttle/Centaur propulsion system. This joint utilizes an explosively expanded tube to fracture surrounding prenotched aluminum plates and achieve planar separation. A test method was developed and more than 300 test firings were made to provide an understanding of severance mechanisms and the effects of system variables on functional performance. An approach for defining functional margin was developed, and specific recommendations were made for improving existing and future systems.

  12. Assessment of a method for measuring serum thyroxine by radioimmunoassay, with use of polyethylene glycol precipitation [125I tracer technique]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farid, N.R.; Kennedy, C.

    We assessed the efficacy of a new thyroxine radioimmunoassay kit (Abbott) in which polyethylene glycol is used to separate bound from free hormone. Mean serum thyroxine was 88 ± 15 (±SD) μg/liter for 96 normal persons. Results for hypothyroid and hyperthyroid persons were clearly separated from those for normal individuals. Women taking oral contraceptive preparations showed variable increases in their serum thyroxine values. The coefficient of variation ranged from 1 to 3% within assay and from 5.4 to 11% among different assays. Excellent parallelism was demonstrated between thyroxine values estimated by this method and those obtained either by competitive protein binding or by a separate radioimmunoassay for the hormone.

  13. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
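
    A minimal sketch of the PCA+CCA variant described above (dimension reduction per block, then classical CCA), using scikit-learn on random stand-in matrices; the regularized dual CCA itself is not shown, and all shapes and names are illustrative.

```python
# A minimal PCA+CCA sketch, assuming synthetic paired blocks in place of the
# real gene expression and copy number data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 60                                   # samples (e.g. patients)
expr = rng.normal(size=(n, 500))         # gene expression block (stand-in)
cna = rng.normal(size=(n, 300))          # copy number block (stand-in)

x = PCA(n_components=20).fit_transform(expr)   # reduce each block first
y = PCA(n_components=20).fit_transform(cna)

cca = CCA(n_components=2).fit(x, y)
u, v = cca.transform(x, y)
for i in range(2):
    r = np.corrcoef(u[:, i], v[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.2f}")
```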

  14. Dual methods and approximation concepts in structural synthesis

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.

  15. Mixed-Initiative COA Critic Advisors (MICCA)

    DTIC Science & Technology

    2013-02-01

    This Java method can modify these strings to provide additional transformations that can’t be expressed in the property language. Datatype [...] vertical bar separator into one string. Instead, setting datatype to “nodeset” for a variable will cause multiple predicates to be generated for [...]

  16. On-resonance variable delay multipulse scheme for imaging of fast-exchanging protons and semisolid macromolecules.

    PubMed

    Xu, Jiadi; Chan, Kannie W Y; Xu, Xiang; Yadav, Nirbhay; Liu, Guanshu; van Zijl, Peter C M

    2017-02-01

    To develop an on-resonance variable delay multipulse (VDMP) scheme to image magnetization transfer contrast (MTC) and the chemical exchange saturation transfer (CEST) contrast of total fast-exchanging protons (TFP) with exchange rates above approximately 1 kHz. A train of high-power binomial pulses was applied at the water resonance. The interpulse delay, called the mixing time, was varied to observe its effect on the water signal reduction, allowing separation and quantification of the MTC and CEST contributions as a result of their different proton transfer rates. The fast-exchanging protons in CEST and MTC are labeled together with the short-T2 components in MTC and separated out using a variable mixing time. Phantom studies of selected metabolite solutions (glucose, glutamate, creatine, myo-inositol), bovine serum albumin (BSA), and hair conditioner show the capability of on-resonance VDMP to separate out exchangeable protons with exchange rates above 1 kHz. Quantitative MTC and TFP maps were acquired on healthy mouse brains using this method, showing strong gray/white matter contrast for the slowly transferring MTC protons, whereas the TFP map was more uniform across the brain but somewhat higher in gray matter. The new method provides a simple way of imaging fast-exchanging protons and MTC components with a slow transfer rate. Magn Reson Med 77:730-739, 2017.

  17. Effect of genetic algorithm as a variable selection method on different chemometric models applied for the analysis of binary mixture of amoxicillin and flucloxacillin: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-03-01

    Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS), and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by processing the UV spectral data. Robust and simpler models were obtained by applying the GA. The proposed methods were found to be rapid, simple, and to require no preliminary separation steps.
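
    A hedged sketch of GA-based variable selection wrapped around a cross-validated PLS model, in the spirit of GA-PLS; the GA here is deliberately minimal (bit-mask chromosomes, truncation selection, one-point crossover) and the spectra are synthetic, so nothing below reproduces the paper's actual models.

```python
# A minimal GA-PLS-style sketch, assuming synthetic spectra: chromosomes are
# bit masks over wavelengths; fitness is cross-validated PLS error on the
# selected variables only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 120))              # 40 mixtures x 120 wavelengths
beta = np.zeros(120)
beta[30:40] = 1.0                           # only 10 informative variables
y = X @ beta + 0.1 * rng.normal(size=40)

def fitness(mask):
    if mask.sum() < 2:
        return -np.inf
    pls = PLSRegression(n_components=2)
    return cross_val_score(pls, X[:, mask.astype(bool)], y, cv=5,
                           scoring="neg_mean_squared_error").mean()

pop = rng.integers(0, 2, size=(30, 120))    # random initial population
for _ in range(25):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-10:]]   # keep the 10 best masks
    children = []
    while len(children) < 20:
        a, b = elite[rng.integers(0, 10, size=2)]
        cut = int(rng.integers(1, 119))
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(120) < 0.01                # rare bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```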

  18. Prescription-drug-related risk in driving: comparing conventional and lasso shrinkage logistic regressions.

    PubMed

    Avalos, Marta; Adroher, Nuria Duran; Lagarde, Emmanuel; Thiessard, Frantz; Grandvalet, Yves; Contrand, Benjamin; Orriols, Ludivine

    2012-09-01

    Large data sets with many variables provide particular challenges when constructing analytic models. Lasso-related methods provide a useful tool, although one that remains unfamiliar to most epidemiologists. We illustrate the application of lasso methods in an analysis of the impact of prescribed drugs on the risk of a road traffic crash, using a large French nationwide database (PLoS Med 2010;7:e1000366). In the original case-control study, the authors analyzed each exposure separately. We use the lasso method, which can simultaneously perform estimation and variable selection in a single model. We compare point estimates and confidence intervals using (1) a separate logistic regression model for each drug with a Bonferroni correction and (2) lasso shrinkage logistic regression analysis. Shrinkage regression had little effect on (bias-corrected) point estimates, but led to less conservative results, noticeably for drugs with moderate levels of exposure. Carbamates, carboxamide-derivative and fatty-acid-derivative antiepileptics, drugs used in opioid dependence, and mineral supplements of potassium showed stronger associations. The lasso is a relevant method for the analysis of databases with a large number of exposures and can be recommended as an alternative to conventional strategies.
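
    The contrast between the two strategies, one logistic model per exposure versus a single L1-penalized model over all exposures, can be sketched as follows; the data are simulated and make no claim about the French study or its estimates.

```python
# A hedged sketch, assuming simulated binary exposures: per-exposure logistic
# models versus one lasso (L1-penalized) logistic model fitted jointly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 2000, 50
X = rng.integers(0, 2, size=(n, p)).astype(float)   # drug exposures (0/1)
logit = -2.0 + 1.0 * X[:, 0] + 0.7 * X[:, 1]        # two truly risky drugs
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))    # crash outcome

# (1) one model per exposure (would require a Bonferroni correction)
separate = [LogisticRegression().fit(X[:, [j]], y).coef_[0, 0]
            for j in range(p)]

# (2) one lasso model, estimating and selecting variables simultaneously
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("lasso keeps exposures:", np.flatnonzero(lasso.coef_[0]))
```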

  19. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.

  20. Vapor-liquid phase separator [infrared telescope heat sink]

    NASA Technical Reports Server (NTRS)

    Frederking, T. H. K.; Brown, G. S.; Chuang, C.; Kamioka, Y.; Kim, Y. I.; Lee, J. M.; Yuan, S. W. K.

    1980-01-01

    The use of porous plugs, mostly in the form of passive devices with constant area, was considered for vapor-liquid phase separation in helium II storage vessels under reduced gravity. The incorporation of components with variable cross-sectional area as a method of flow-rate modification was also investigated. A particular device which uses a shutter-type system for area variation was designed and constructed. This system successfully permitted flow-rate changes of up to plus or minus 60% from the mean value.

  1. Application of concepts from cross-recurrence analysis in speech production: an overview and comparison with other nonlinear methods.

    PubMed

    Lancia, Leonardo; Fuchs, Susanne; Tiede, Mark

    2014-06-01

    The aim of this article was to introduce an important tool, cross-recurrence analysis, to speech production applications by showing how it can be adapted to evaluate the similarity of multivariate patterns of articulatory motion. The method differs from classical applications of cross-recurrence analysis because no phase space reconstruction is conducted, and a cleaning algorithm removes the artifacts from the recurrence plot. The main features of the proposed approach are robustness to nonstationarity and efficient separation of amplitude variability from temporal variability. The authors tested these claims by applying their method to synthetic stimuli whose variability had been carefully controlled. The proposed method was also demonstrated in a practical application: It was used to investigate the role of biomechanical constraints in articulatory reorganization as a consequence of speeded repetition of CVCV utterances containing a labial and a coronal consonant. Overall, the proposed approach provided more reliable results than other methods, particularly in the presence of high variability. The proposed method is a useful and appropriate tool for quantifying similarity and dissimilarity in patterns of speech articulator movement, especially in such research areas as speech errors and pathologies, where unpredictable divergent behavior is expected.
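
    A basic cross-recurrence plot without the article's refinements (no artifact cleaning, and here also no phase-space reconstruction) reduces to thresholding pairwise distances between two trajectories; below is a minimal sketch with synthetic trajectories, where all names and the radius are illustrative.

```python
# A minimal cross-recurrence sketch: mark (i, j) where trajectory A at time i
# lies within radius eps of trajectory B at time j.
import numpy as np

def cross_recurrence(a, b, eps):
    """a: (n, d) array, b: (m, d) array -> (n, m) boolean recurrence matrix."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return dists < eps

t = np.linspace(0, 4 * np.pi, 200)
traj_a = np.column_stack([np.sin(t), np.cos(t)])
traj_b = np.column_stack([np.sin(1.05 * t), np.cos(1.05 * t)])  # slightly faster

crp = cross_recurrence(traj_a, traj_b, eps=0.3)
print("recurrence rate:", crp.mean().round(3))
```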

  2. Signal extraction and wave field separation in tunnel seismic prediction by independent component analysis

    NASA Astrophysics Data System (ADS)

    Yue, Y.; Jiang, T.; Zhou, Q.

    2017-12-01

    In order to ensure the rationality and safety of tunnel excavation, advanced geological prediction has become an indispensable step in tunneling. However, the extraction of the signal and the separation of P and S waves directly influence the accuracy of the prediction. Generally, the raw data collected by the TSP system are of low quality because of the numerous interference sources in tunnel projects, such as power interference and machine vibration. It is difficult for traditional methods (band-pass filtering) to remove the interference effectively while bringing little loss to the signal. Treating the power interference, the machine vibration interference, and the signal as the original variables, and the x, y, z components as the observed signals, each observed component is a linear combination of the original variables, which satisfies the applicability conditions of independent component analysis (ICA). We perform finite-difference simulations of elastic wave propagation to synthesize a tunnel seismic reflection record. ICA was adopted to process the three-component data, and the results show that the extracted signal estimates and the true signals are highly correlated (correlation coefficient above 0.93). In addition, the interference estimates separated by ICA and the interference signals are also highly correlated, with a correlation coefficient above 0.99. Thus, the simulation results show that ICA is an effective method for extracting high-quality data from mixed signals. For the separation of P and S waves, conventional separation techniques are based on the physical characteristics of wave propagation, which require knowledge of the near-surface P and S wave velocities and density, whereas the ICA approach is based entirely on statistical differences between P and S waves, and this statistical technique does not require a priori information. The concrete results of the wave-field separation will be presented at the meeting. In summary, we can safely draw the conclusion that ICA can not only extract high-quality data from mixed signals but can also separate P and S waves effectively.
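
    A minimal sketch of the unmixing step described above: three observed components modeled as linear mixtures of a reflected wavelet plus two interference sources, separated with FastICA. The sources, mixing matrix, and correlation check are synthetic stand-ins for the TSP data, not the authors' simulation.

```python
# A hedged FastICA sketch, assuming synthetic sources: one seismic-like
# wavelet, power-line hum, and machine vibration, mixed into x, y, z.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)   # reflected wavelet
power = np.sin(2 * np.pi * 50 * t)                     # power-line hum
vibration = np.sign(np.sin(2 * np.pi * 7 * t))         # machine vibration
S = np.column_stack([signal, power, vibration])

A = rng.normal(size=(3, 3))          # unknown mixing into x, y, z components
X = S @ A.T

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)         # estimated sources, up to order/scale
for i in range(3):
    r = max(abs(np.corrcoef(S_est[:, i], S[:, j])[0, 1]) for j in range(3))
    print(f"estimated source {i}: best |corr| = {r:.2f}")
```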

  3. Monte Carlo study of x-ray cross talk in a variable resolution x-ray detector

    NASA Astrophysics Data System (ADS)

    Melnyk, Roman; DiBianca, Frank A.

    2003-06-01

    A variable resolution x-ray (VRX) detector provides a great increase in the spatial resolution of a CT scanner. An important factor that limits the spatial resolution of the detector is x-ray cross-talk. A theoretical study of x-ray cross-talk is presented in this paper. Two types of x-ray cross-talk were considered: inter-cell and inter-arm cross-talk. Both types were simulated, using the Monte Carlo method, as functions of the detector field of view (FOV). The simulation was repeated for lead and tungsten separators between detector cells. The inter-cell x-ray cross-talk was maximal at a 34-36 cm FOV, but low at small and maximal FOVs. The inter-arm x-ray cross-talk was high at small and medium FOVs, but it was greatly reduced when variable-width collimators were placed on the front surfaces of the detector. The inter-cell, but not the inter-arm, x-ray cross-talk was lower for tungsten than for lead separators. From these results, x-ray cross-talk in a VRX detector can be minimized by imaging all objects between 24 cm and 40 cm in diameter with the 40 cm FOV, using tungsten separators, and placing variable-width collimators in front of the detector.

  4. Effective application of multiple locus variable number of tandem repeats analysis to tracing Staphylococcus aureus in food-processing environment.

    PubMed

    Rešková, Z; Koreňová, J; Kuchta, T

    2014-04-01

    A total of 256 isolates of Staphylococcus aureus were obtained from 98 samples (34 swabs and 64 food samples) collected from small or medium meat- and cheese-processing plants in Slovakia. The strains were genotypically characterized by multiple locus variable number of tandem repeats analysis (MLVA), involving multiplex polymerase chain reaction (PCR) with subsequent separation of the amplified DNA fragments by automated flow-through gel electrophoresis. With this panel of isolates, MLVA produced 31 profile types, a discrimination sufficient to facilitate the description of spatial and temporal aspects of contamination. Further data on MLVA discrimination were obtained by typing a subpanel of strains by multiple locus sequence typing (MLST). MLVA coupled to automated electrophoresis proved to be an effective, comparatively fast and inexpensive method for tracing S. aureus contamination in food-processing factories. Subspecies genotyping of microbial contaminants in food-processing factories may facilitate identification of the spatial and temporal aspects of contamination and thereby help to properly manage process hygiene. For S. aureus, MLVA proved to be an effective method for this purpose, being sufficiently discriminative yet comparatively fast and inexpensive. The application of automated flow-through gel electrophoresis to the separation of DNA fragments produced by multiplex PCR improved the accuracy and speed of the method.

  5. Linear data mining the Wichita clinical matrix suggests sleep and allostatic load involvement in chronic fatigue syndrome.

    PubMed

    Gurbaxani, Brian M; Jones, James F; Goertzel, Benjamin N; Maloney, Elizabeth M

    2006-04-01

    To provide a mathematical introduction to the Wichita (KS, USA) clinical dataset, which comprises all of the nongenetic data (no microarray or single nucleotide polymorphism data) from the 2-day clinical evaluation, and to show the preliminary findings, and limitations, of popular matrix algebra-based data mining techniques. An initial matrix of 440 variables by 227 human subjects was reduced to 183 variables by 164 subjects. Variables were excluded if they correlated strongly with chronic fatigue syndrome (CFS) case classification by design (for example, the multidimensional fatigue inventory [MFI] data), were otherwise self-reported in nature and also tended to correlate strongly with CFS classification, or were sparse or nonvarying between case and control. Subjects were excluded if they did not clearly fall into well-defined CFS classifications, had comorbid depression with melancholic features, or met other medical or psychiatric exclusions. The popular data mining techniques principal components analysis (PCA) and linear discriminant analysis (LDA) were used to determine how well the data separated into groups. Two different feature selection methods helped identify the most discriminating parameters. Although purely biological features (variables) were found to separate CFS cases from controls, including many allostatic load and sleep-related variables, most parameters were not statistically significant individually. However, biological correlates of CFS, such as heart rate and heart rate variability, require further investigation. Feature selection of a limited number of variables from the purely biological dataset produced better separation between groups than a PCA of the entire dataset. Feature selection highlighted the importance of many of the allostatic load variables studied in more detail by Maloney and colleagues in this issue [1], as well as some sleep-related variables. Nonetheless, matrix linear algebra-based data mining approaches appeared to be of limited utility compared with more sophisticated nonlinear analyses on richer data types, such as those found in Maloney and colleagues [1] and Goertzel and colleagues [2] in this issue.
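
    A hedged sketch of the matrix-algebra pipeline the abstract applies: PCA for an unsupervised view, a univariate filter as one possible feature-selection step, and LDA for case/control separation. The data below are random stand-ins, not the Wichita dataset, and the filter shown is only one generic choice.

```python
# A minimal PCA/feature-selection/LDA sketch, assuming random stand-in data
# with the same shape as the reduced clinical matrix (164 x 183).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(4)
X = rng.normal(size=(164, 183))          # subjects x biological variables
y = rng.integers(0, 2, size=164)         # 1 = CFS case, 0 = control

scores = PCA(n_components=2).fit_transform(X)             # unsupervised view

X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)  # feature selection
lda = LinearDiscriminantAnalysis().fit(X_sel, y)
print("LDA training accuracy:", lda.score(X_sel, y).round(2))
```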

  6. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE PAGES

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...

    2016-10-18

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset, and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of the training and good analytical procedures needed to generate these data. When combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  8. Exact semi-separation of variables in waveguides with non-planar boundaries

    NASA Astrophysics Data System (ADS)

    Athanassoulis, G. A.; Papoutsellis, Ch. E.

    2017-05-01

    Series expansions of unknown fields Φ = ∑ φ_n Z_n in elongated waveguides are commonly used in acoustics, optics, geophysics, water waves and other applications, in the context of coupled-mode theories (CMTs). The transverse functions Z_n are determined by solving local Sturm-Liouville problems (reference waveguides). In most cases, the boundary conditions assigned to Z_n cannot be compatible with the physical boundary conditions of Φ, leading to slowly convergent series, and rendering CMTs mild-slope approximations. In the present paper, the heuristic approach introduced in Athanassoulis & Belibassakis (Athanassoulis & Belibassakis 1999 J. Fluid Mech. 389, 275-301) is generalized and justified. It is proved that an appropriately enhanced series expansion becomes an exact, rapidly convergent representation of the field Φ, valid for any smooth, non-planar boundaries and any smooth enough Φ. This series expansion can be differentiated termwise everywhere in the domain, including the boundaries, implementing an exact semi-separation of variables for non-separable domains. The efficiency of the method is illustrated by solving a boundary value problem for the Laplace equation, and computing the corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations for nonlinear water waves. The present method provides accurate results with only a few modes for quite general domains. Extensions to general waveguides are also discussed.
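
    Schematically, the enhanced representation augments the standard Sturm-Liouville series with an additional boundary mode (the indexing below is illustrative and follows the authors' convention only loosely):

```latex
\Phi(x,z) \;=\; \varphi_{-1}(x)\, Z_{-1}(z;x) \;+\; \sum_{n=0}^{\infty} \varphi_n(x)\, Z_n(z;x)
```

    Here the Z_n solve the local Sturm-Liouville problems, while the extra mode Z_{-1} is chosen so that the series can satisfy the physical boundary condition exactly; this is what restores rapid convergence and termwise differentiability.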

  9. Method performance and multi-laboratory assessment of a normal phase high pressure liquid chromatography-fluorescence detection method for the quantitation of flavanols and procyanidins in cocoa and chocolate containing samples.

    PubMed

    Robbins, Rebecca J; Leonczak, Jadwiga; Johnson, J Christopher; Li, Julia; Kwik-Uribe, Catherine; Prior, Ronald L; Gu, Liwei

    2009-06-12

    The quantitative parameters and method performance for a normal-phase HPLC separation of flavanols and procyanidins in chocolate and cocoa-containing food products were optimized and assessed. Single-laboratory method performance was examined over three months using three separate secondary standards. RSD(r) values were 1.9%, 4.5%, and 9.0% for cocoa powder, liquor, and chocolate samples containing 74.39, 15.47, and 1.87 mg/g flavanols and procyanidins, respectively. Accuracy was determined by comparison to the NIST Standard Reference Material 2384. Inter-laboratory assessment indicated that variability was quite low for seven different cocoa-containing samples, with an RSD(R) of less than 10% for the range of samples analyzed.

  10. Synaptic dynamics contribute to long-term single neuron response fluctuations.

    PubMed

    Reinartz, Sebastian; Biro, Istvan; Gal, Asaf; Giugliano, Michele; Marom, Shimon

    2014-01-01

    Firing rate variability at the single neuron level is characterized by long-memory processes and complex statistics over a wide range of time scales (from milliseconds up to several hours). Here, we focus on the contribution of the non-stationary efficacy of the ensemble of synapses activated in response to a given stimulus to single neuron response variability. We present and validate a method tailored for controlled and specific long-term activation of a single cortical neuron in vitro via synaptic or antidromic stimulation, enabling a clear separation between two determinants of neuronal response variability: membrane excitability dynamics vs. synaptic dynamics. Applying this method we show that, within the range of physiological activation frequencies, the synaptic ensemble of a given neuron is a key contributor to the neuronal response variability, long-memory processes and complex statistics observed over extended time scales. Synaptic transmission dynamics impact response variability at stimulation rates that are substantially lower than the stimulation rates that drive excitability resources to fluctuate. Implications for network-embedded neurons are discussed.

  11. Deconstructed transverse mass variables

    DOE PAGES

    Ismail, Ahmed; Schwienhorst, Reinhard; Virzi, Joseph S.; ...

    2015-04-02

    Traditional searches for R-parity conserving natural supersymmetry (SUSY) require large transverse mass and missing energy cuts to separate the signal from large backgrounds. SUSY models with compressed spectra inherently produce signal events with small amounts of missing energy that are hard to explore. We use this difficulty to motivate the construction of "deconstructed" transverse mass variables, which are designed to preserve information on both the norm and direction of the missing momentum. Here, we demonstrate the effectiveness of these variables in searches for the pair production of supersymmetric top-quark partners which subsequently decay into a final state with an isolated lepton, jets and missing energy. We show that the use of deconstructed transverse mass variables extends the accessible compressed spectra parameter space beyond the region probed by traditional methods. The parameter space can further be expanded to neutralino masses that are larger than the difference between the stop and top masses. In addition, we also discuss how these variables allow for novel searches of single stop production, in order to directly probe unconstrained stealth stops in the small stop- and neutralino-mass regime. We also demonstrate the utility of these variables for generic gluino and stop searches in all-hadronic final states. Overall, we demonstrate that deconstructed transverse variables are essential to any search that aims to maximize signal separation from the background when the signal has undetected particles in the final state.

  12. Deformation quantization with separation of variables of an endomorphism bundle

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2014-01-01

    Given a holomorphic Hermitian vector bundle E and a star-product with separation of variables on a pseudo-Kähler manifold, we construct a star product on the sections of the endomorphism bundle of the dual bundle E∗ which also has the appropriately generalized property of separation of variables. For this star product we prove a generalization of Gammelgaard's graph-theoretic formula.

  13. Efficient Implementation of the Invariant Imbedding T-Matrix Method and the Separation of Variables Method Applied to Large Nonspherical Inhomogeneous Particles

    NASA Technical Reports Server (NTRS)

    Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.

    2012-01-01

    Three terms, "Waterman's T-matrix method", "extended boundary condition method (EBCM)", and "null field method", have been used interchangeably in the literature to indicate a method based on surface integral equations to calculate the T-matrix. Unlike these methods, the invariant imbedding method (IIM) calculates the T-matrix by the use of a volume integral equation. In addition, the standard separation of variables method (SOV) can be applied to compute the T-matrix of a sphere centered at the origin of the coordinate system and having a maximal radius such that the sphere remains inscribed within a nonspherical particle. This study explores the feasibility of a numerical combination of the IIM and the SOV, hereafter referred to as the IIM+SOV method, for computing the single-scattering properties of nonspherical dielectric particles, which are, in general, inhomogeneous. The IIM+SOV method is shown to be capable of solving light-scattering problems for large nonspherical particles where the standard EBCM fails to converge. The IIM+SOV method is flexible and applicable to inhomogeneous particles and aggregated nonspherical particles (overlapped circumscribed spheres) representing a challenge to the standard superposition T-matrix method. The IIM+SOV computational program, developed in this study, is validated against EBCM-simulated spheroid and cylinder cases with excellent numerical agreement (up to four decimal places). In addition, solutions for cylinders with large aspect ratios, inhomogeneous particles, and two-particle systems are compared with results from discrete dipole approximation (DDA) computations, and comparisons with the improved geometric-optics method (IGOM) are found to be quite encouraging.

  14. Variable ranking based on the estimated degree of separation for two distributions of data by the length of the receiver operating characteristic curve.

    PubMed

    Maswadeh, Waleed M; Snyder, A Peter

    2015-05-30

    Variable responses are fundamental to all experiments, and they can consist of information-rich, redundant, and low-signal-intensity measurements. A dataset can consist of a collection of variable responses over multiple classes or groups. Usually the variables that contain very little information are removed from a dataset; sometimes all the variables are used in the data analysis phase. It is common practice to discriminate between two distributions of data; however, there is no formal algorithm to arrive at a degree of separation (DS) between two distributions of data. The DS is defined herein as the average of the sum of the areas from the probability density functions (PDFs) of A and B that contain at least a given percentage of A and/or B. Thus, DS90 is the average of the sum of the PDF areas of A and B that contain ≥90% of A and/or B. To arrive at a DS value, two synthesized PDFs or very large experimental datasets are required. Experimentally, it is common practice to generate relatively small datasets. Therefore, the challenge was to find a statistical parameter that can be used on small datasets to estimate, and highly correlate with, the DS90 parameter. Established statistical methods include the overlap area of the two data distribution profiles, Welch's t-test, the Kolmogorov-Smirnov (K-S) test, the Mann-Whitney-Wilcoxon test, and the area under the receiver operating characteristic (ROC) curve (AUC). The area between the ROC curve and the diagonal (ACD) and the length of the ROC curve (LROC) are introduced. The established, ACD, and LROC methods were correlated with the DS90 when applied to many pairs of synthesized PDFs. The LROC method provided the best linear correlation with, and estimation of, the DS90. The estimated DS90 from the LROC (DS90-LROC) is applied, as an example, to a database of three Italian wines consisting of thirteen variable responses for variable-ranking consideration. An important highlight of the DS90-LROC method is that it uses the LROC curve methodology to test all variables one at a time against all pairs of classes in a dataset.
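
    The LROC statistic itself is simple to compute: it is the arc length of the empirical ROC curve, which runs from √2 (no separation; the curve collapses onto the diagonal) to 2 (perfect separation; the curve hugs the axes). A minimal sketch on toy data (the wine database is not reproduced here):

```python
import numpy as np
from sklearn.metrics import roc_curve

def lroc(values, labels):
    """Length of the ROC curve for one variable: the sum of Euclidean
    segment lengths along (FPR, TPR)."""
    fpr, tpr, _ = roc_curve(labels, values)
    return np.hypot(np.diff(fpr), np.diff(tpr)).sum()

# Rank the variables of a two-class dataset one at a time.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[labels == 1, 0] += 2.0          # variable 0 carries most separation
ranks = sorted(range(5), key=lambda j: lroc(X[:, j], labels), reverse=True)
print("variables ranked by LROC:", ranks)
```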

  15. Double-cross hydrostatic pressure sample injection for chip CE: variable sample plug volume and minimum number of electrodes.

    PubMed

    Luo, Yong; Wu, Dapeng; Zeng, Shaojiang; Gai, Hongwei; Long, Zhicheng; Shen, Zheng; Dai, Zhongpeng; Qin, Jianhua; Lin, Bingcheng

    2006-09-01

    A novel sample injection method for chip CE is presented. This injection method uses hydrostatic pressure, generated by emptying the sample waste reservoir, for sample loading, and electrokinetic force for dispensing. The injection was performed on a double-cross microchip. One cross, created by the sample and separation channels, is used for formation of a sample plug. Another cross, formed by the sample and controlling channels, is used for plug control. By varying the electric field in the controlling channel, the sample plug volume can be linearly adjusted. Hydrostatic pressure has the advantage of being easy to generate on a microfluidic chip, without any electrode or external pressure pump, thus allowing sample injection with a minimum number of electrodes. The potential of this injection method was demonstrated by a four-separation-channel chip CE system. In this system, parallel sample separation can be achieved with only two electrodes, which is otherwise impossible with conventional injection methods. Hydrostatic pressure maintains the sample composition during sample loading, allowing the injection to be free of injection bias.

  16. Adaptable bioinspired special wetting surface for multifunctional oil/water separation

    NASA Astrophysics Data System (ADS)

    Kavalenka, Maryna N.; Vüllers, Felix; Kumberg, Jana; Zeiger, Claudia; Trouillet, Vanessa; Stein, Sebastian; Ava, Tanzila T.; Li, Chunyan; Worgull, Matthias; Hölscher, Hendrik

    2017-01-01

    Inspired by the multifunctionality of biological surfaces necessary for the survival of an organism in its specific environment, we developed an artificial special wetting nanofur surface which can be adapted to perform different functionalities necessary to efficiently separate oil and water for cleaning accidental oil spills or separating industrial oily wastewater. The initial superhydrophobic nanofur surface is fabricated using a hot pulling method, in which nano- and microhairs are drawn out of the polymer surface during separation from a heated sandblasted steel plate. By using a set of simple modification techniques, which include microperforation, plasma treatment and subsequent control of the storage environment, we achieved selective separation of either water or oil, variable oil absorption and continuous gravity-driven separation of oil/water mixtures by filtration. Furthermore, these functions can be performed using special wetting nanofur made from various thermoplastics, including biodegradable and recyclable polymers. Additionally, nanofur can be reused after washing with organic solvents, thus further helping to reduce the environmental impact of oil/water separation processes.

  18. Colorimetric determination of alkaline phosphatase as indicator of mammalian feces in corn meal: collaborative study.

    PubMed

    Gerber, H

    1986-01-01

    In the official method for rodent filth in corn meal, filth and corn meal are separated in organic solvents, and particles are identified by the presence of hair and a mucous coating. The solvents are toxic, poor separation yields low recoveries, and fecal characteristics are rarely present on all fragments, especially on small particles. The official AOAC alkaline phosphatase test for mammalian feces, 44.181-44.184, has therefore been adapted to determine the presence of mammalian feces in corn meal. The enzyme cleaves phosphate radicals from a test indicator/substrate, phenolphthalein diphosphate. As free phenolphthalein accumulates, a pink-to-red color develops in the gelled test agar medium. In a collaborative study conducted to compare the proposed method with the official method for corn meal, 44.049, the proposed method yielded 45.5% higher recoveries than the official method. Repeatability and reproducibility for the official method were roughly 1.8 times more variable than for the proposed method. The method has been adopted as official first action.

  19. Emulsion stability measurements by single electrode capacitance probe (SeCaP) technology

    NASA Astrophysics Data System (ADS)

    Schüller, R. B.; Løkra, S.; Salas-Bringas, C.; Egelandsdal, B.; Engebretsen, B.

    2008-08-01

    This paper describes a novel method for determining the stability of emulsions. The method is based on single electrode capacitance probe (SeCaP) technology. A measuring system consisting of eight individual measuring cells, each with a volume of approximately 10 ml, is described in detail. The system has been tested on an emulsion system based on whey proteins (WPC80), oil and water. Xanthan was added to modify the emulsion stability. The results show that the new measuring system is able to quantify the stability of the emulsion in terms of a differential variable. The whole separation process is observed much faster in the SeCaP system than in a conventional separation column: the complete separation process observed visually over 30 h is seen in less than 1.4 h in the SeCaP system.

  20. Solid-phase extraction versus matrix solid-phase dispersion: Application to white grapes.

    PubMed

    Dopico-García, M S; Valentão, P; Jagodziñska, A; Klepczyñska, J; Guerra, L; Andrade, P B; Seabra, R M

    2007-11-15

    The use of matrix solid-phase dispersion (MSPD) was tested to separately extract phenolic compounds and organic acids from white grapes. This method was compared with a more conventional analytical method, developed previously, that combines solid-liquid extraction (SL) to simultaneously extract phenolic compounds and organic acids, followed by solid-phase extraction (SPE) to separate the two types of compounds. Although the results were qualitatively similar for both techniques, the levels of extracted compounds were in general considerably lower with MSPD, especially for organic acids. Therefore, the SL-SPE method was preferred for analysing white "Vinho Verde" grapes. Twenty samples of 10 different varieties (Alvarinho, Avesso, Asal-Branco, Batoca, Douradinha, Esganoso de Castelo Paiva, Loureiro, Pedernã, Rabigato and Trajadura) from four different locations in Minho (Portugal) were analysed in order to study the effects of variety and origin on the profile of the above-mentioned compounds. Principal component analysis (PCA) was applied separately to establish the main sources of variability present in the data sets for phenolic compounds, for organic acids and for the global data. PCA of the phenolic compounds accounted for the highest variability (77.9%) with two PCs, enabling characterization of the sample varieties according to their higher content of flavonol derivatives or epicatechin. Additionally, a strong effect of sample origin was observed. Stepwise linear discriminant analysis (SLDA) was used to differentiate grapes according to origin and variety, resulting in correct classification rates of 100 and 70%, respectively.

  1. The contributions of local and remote atmospheric moisture fluxes to East Asian precipitation and its variability

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Klingaman, Nicholas P.; Demory, Marie-Estelle; Vidale, Pier Luigi; Turner, Andrew G.; Stephan, Claudia C.

    2018-01-01

    We investigate the contribution of the local and remote atmospheric moisture fluxes to East Asia (EA) precipitation and its interannual variability during 1979-2012. We use and expand the Brubaker et al. (J Clim 6:1077-1089, 1993) method, which connects the area-mean precipitation to area-mean evaporation and the horizontal moisture flux into the region. Due to its large landmass and hydrological heterogeneity, EA is divided into five sub-regions: Southeast (SE), Tibetan Plateau (TP), Central East (CE), Northwest (NW) and Northeast (NE). For each region, we first separate the contributions to precipitation of local evaporation from those of the horizontal moisture flux by calculating the precipitation recycling ratio: the fraction of precipitation over a region that originates as evaporation from the same region. Then, we separate the horizontal moisture flux across the region's boundaries by direction. We estimate the contributions of the horizontal moisture fluxes from each direction, as well as the local evaporation, to the mean precipitation and its interannual variability. We find that the major contributors to the mean precipitation are not necessarily those that contribute most to the precipitation interannual variability. Over SE, the moisture flux via the southern boundary dominates the mean precipitation and its interannual variability. Over TP, in winter and spring, the moisture flux via the western boundary dominates the mean precipitation; however, variations in local evaporation dominate the precipitation interannual variability. The western moisture flux is the dominant contributor to the mean precipitation over CE, NW and NE. However, the southern or northern moisture flux or the local evaporation dominates the precipitation interannual variability over these regions, depending on the season. Potential mechanisms associated with interannual variability in the moisture flux are identified for each region. The methods and results presented in this study can be readily applied to model simulations, to identify simulation biases in precipitation that relate to the simulated moisture supplies and transport.
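
    As a schematic illustration of the bookkeeping involved (a simplified share calculation, not the Brubaker et al. estimator itself; all flux values are invented):

```python
# Toy moisture accounting for one region: the local-evaporation share
# approximates a recycling ratio, and each boundary inflow gets a
# directional share of the total moisture supply.
evap = 40.0                                            # local evaporation
inflow = {"S": 120.0, "W": 60.0, "N": 15.0, "E": 5.0}  # boundary influxes

total_supply = evap + sum(inflow.values())
recycling_ratio = evap / total_supply
shares = {d: f / total_supply for d, f in inflow.items()}

print(f"recycling ratio: {recycling_ratio:.2f}")
for d, s in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{d} boundary share: {s:.2f}")
```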

  2. Transfer matrix spectrum for cyclic representations of the 6-vertex reflection algebra by quantum separation of variables

    NASA Astrophysics Data System (ADS)

    Pezelier, Baptiste

    2018-02-01

    In this contribution, we recall the notion of quantum integrable systems on a lattice and then introduce Sklyanin's Separation of Variables method. We sum up the main results for the transfer matrix spectral problem for the cyclic representations of the trigonometric 6-vertex reflection algebra associated to the Bazhanov-Stroganov Lax operator. These results apply as well to the spectral analysis of the lattice sine-Gordon model with open boundary conditions. The transfer matrix spectrum (both eigenvalues and eigenstates) is completely characterized in terms of the set of solutions to a discrete system of polynomial equations. We state an equivalent characterization as the set of solutions to a Baxter-like T-Q functional equation, allowing us to rewrite the transfer matrix eigenstates in an algebraic Bethe ansatz form.
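
    The functional equation referred to has the generic Baxter T-Q form (the coefficient functions a, d and the shift η are model-dependent; the version below is the standard schematic form, not the specific equation of the paper):

```latex
T(\lambda)\, Q(\lambda) \;=\; a(\lambda)\, Q(\lambda - \eta) \;+\; d(\lambda)\, Q(\lambda + \eta)
```

    Each admissible solution Q characterizes one transfer matrix eigenvalue T, and the corresponding eigenstate can be reconstructed from Q in the separated variables.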

  3. Separating Atmospheric and Surface Contributions in Hyperspectral Imager for the Coastal Ocean (HICO) Scenes using Informed Non-Negative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2016-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. These new instruments require novel approaches for processing imagery and separating surface and atmospheric signals. One approach is numerical source separation, which allows the determination of the underlying physical causes of observed signals. Improved source separation will enable hyperspectral imagery to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. We developed an Informed Non-negative Matrix Factorization (INMF) method for separating atmospheric and surface sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. We also explore methods to produce an initial guess of the spatial separation patterns. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO) with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric conditions, including high and low aerosol optical thickness and cloud cover, with only minor contributions from the ocean surfaces in order to isolate the contributions of the multiple atmospheric sources.
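
    The core of any such NMF scheme is a pair of non-negativity-preserving updates seeded with an informed initial guess. A minimal sketch follows, using the standard Lee-Seung multiplicative updates; the constrained, regularized INMF of the abstract is more elaborate, and all matrix sizes below are invented:

```python
import numpy as np

def informed_nmf(X, W0, H0, n_iter=500, eps=1e-9):
    """Factor X ~ W H with W, H >= 0, starting from an informed guess
    (e.g., library spectra in the columns of W0). Uses the classic
    Lee-Seung multiplicative updates for the Frobenius objective."""
    W, H = W0.copy(), H0.copy()
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update abundances
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update source spectra
        # non-negativity is preserved automatically by the updates
    return W, H

# Hypothetical usage: rows = spectral channels, columns = pixels.
rng = np.random.default_rng(0)
X = rng.random((100, 400))     # stand-in for a radiance matrix
W0 = rng.random((100, 3))      # e.g., surface/aerosol/Rayleigh guesses
H0 = rng.random((3, 400))
W, H = informed_nmf(X, W0, H0)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```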

  4. Determination of pesticides associated with suspended sediments in the San Joaquin River, California, USA, using gas chromatography-ion trap mass spectrometry

    USGS Publications Warehouse

    Bergamaschi, B.A.; Baston, D.S.; Crepeau, K.L.; Kuivila, K.M.

    1999-01-01

    An analytical method useful for the quantification of a range of pesticides and pesticide degradation products associated with suspended sediments was developed by testing a variety of extraction and cleanup schemes. The final extraction and cleanup methods chosen for use are suitable for the quantification of the listed pesticides using gas chromatography-ion trap mass spectrometry and the removal of interfering coextractable organic material found in suspended sediments. Methylene chloride extraction followed by Florisil cleanup proved most effective for separation of coextractives from the pesticide analytes. Removal of elemental sulfur was accomplished with tetrabutylammonium hydrogen sulfite. The suitability of the method for the analysis of a variety of pesticides was evaluated, and the method detection limits (MDLs) were determined (0.1-6.0 ng/g dry weight of sediment) for 21 compounds. Recovery of pesticides dried onto natural sediments averaged 63%. Analysis of duplicate San Joaquin River suspended-sediment samples demonstrated the utility of the method for environmental samples with variability between replicate analyses lower than between environmental samples. Eight of 21 pesticides measured were observed at concentrations ranging from the MDL to more than 80 ng/g dry weight of sediment and exhibited significant temporal variability. Sediment-associated pesticides, therefore, may contribute to the transport of pesticides through aquatic systems and should be studied separately from dissolved pesticides.

  5. The use of experimental design for the development of a capillary zone electrophoresis method for the quantitation of captopril.

    PubMed

    Mukozhiwa, S Y; Khamanga, S M M; Walker, R B

    2017-09-01

    A capillary zone electrophoresis (CZE) method for the quantitation of captopril (CPT) using UV detection was developed. The influence of electrolyte concentration and system variables on the electrophoretic separation was evaluated, and a central composite design (CCD) was used to optimize the method. The variables investigated were pH, molarity, applied voltage and capillary length. The influence of sodium metabisulphite on the stability of test solutions was also investigated; the use of sodium metabisulphite prevented degradation of CPT over 24 hours. A fused uncoated silica capillary of 67.5 cm total and 57.5 cm effective length was used for analysis. The applied voltage and capillary length affected the migration time of CPT significantly. A 20 mM phosphate buffer adjusted to pH 7.0 was used as the running buffer, and an applied voltage of 23.90 kV was suitable to effect a separation. The optimized electrophoretic conditions produced sharp, well-resolved peaks for CPT and sodium metabisulphite. Linear regression analysis of the response for CPT standards revealed the method was linear (R^2 = 0.9995) over the range 5-70 μg/mL. The limits of quantitation and detection were 5 and 1.5 μg/mL, respectively. A simple, rapid and reliable CZE method has been developed and successfully applied to the analysis of commercially available CPT products.

  6. Geographic variability of Escherichia coli ribotypes from animals in Idaho and Georgia.

    PubMed

    Hartel, Peter G; Summer, Jacob D; Hill, Jennifer L; Collins, J Victoria; Entry, James A; Segars, William I

    2002-01-01

    Several genotypic methods have been developed for determining the host origin of fecal bacteria in contaminated waters. Some of these methods rely on a host origin database to identify environmental isolates. It is not well understood to what degree these host origin isolates are geographically variable (i.e., cosmopolitan or endemic). This is important because a geographically limited host origin database may or may not be universally applicable. The objective of our study was to use one genotypic method, ribotyping, to determine the geographic variability of the fecal bacterium, Escherichia coli, from one location in Idaho and three locations in Georgia for cattle (Bos taurus), horse (Equus caballus), swine (Sus scrofa), and chicken (Gallus gallus domesticus). A total of 568 fecal E. coli isolates from Kimberly, ID (125 isolates), Athens, GA (210 isolates), Brunswick, GA (102 isolates), and Tifton, GA (131 isolates), yielded 213 ribotypes. The percentage of ribotype sharing within an animal species increased with decreased distance between geographic locations for cattle and horses, but not for swine and chicken. When the E. coli ribotypes among the four host species were compared at one location, the percent of unshared ribotypes was 86, 89, 81, and 79% for Kimberly, Athens, Brunswick, and Tifton, respectively. These data suggest that there is good ribotype separation among host animal species at each location. The ability to match environmental isolates to a host origin database may depend on a large number of environmental and host origin isolates that ideally are not geographically separated.

  7. Evaluation of drainage-area ratio method used to estimate streamflow for the Red River of the North Basin, North Dakota and Minnesota

    USGS Publications Warehouse

    Emerson, Douglas G.; Vecchia, Aldo V.; Dahl, Ann L.

    2005-01-01

    The drainage-area ratio method is commonly used to estimate streamflow for sites where no streamflow data were collected. To evaluate the validity of the drainage-area ratio method and to determine if an improved method could be developed to estimate streamflow, a multiple-regression technique was used to determine if drainage area, main channel slope, and precipitation were significant variables for estimating streamflow in the Red River of the North Basin. A separate regression analysis was performed for streamflow for each of three seasons: winter, spring, and summer. Drainage area and summer precipitation were the most significant variables. However, the regression equations generally overestimated streamflows for North Dakota stations and underestimated streamflows for Minnesota stations. To correct the bias in the residuals for the two groups of stations, indicator variables were included to allow both the intercept and the coefficient for the logarithm of drainage area to depend on the group. Drainage area was the only significant variable in the revised regression equations. The exponents for the drainage-area ratio were 0.85 for the winter season, 0.91 for the spring season, and 1.02 for the summer season.
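
    In code, the method reduces to a one-line scaling with a season-dependent exponent taken from the regression results above (the drainage areas and flow in the example are hypothetical):

```python
def estimate_streamflow(q_gauged, area_gauged, area_ungauged, season):
    """Drainage-area ratio estimate Q_u = Q_g * (A_u / A_g)**b, using
    the seasonal exponents reported in the study."""
    b = {"winter": 0.85, "spring": 0.91, "summer": 1.02}[season]
    return q_gauged * (area_ungauged / area_gauged) ** b

# Hypothetical example: 500 cfs at a 1000 mi^2 gauge, 250 mi^2 target site.
print(estimate_streamflow(500.0, 1000.0, 250.0, "spring"))
```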

  8. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving nonlinear least squares problems involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
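
    A compact way to see the idea is the classic exponential-fitting example, where the linear coefficients are solved by least squares inside the residual so the outer optimizer sees only the nonlinear parameters. This is a generic textbook variable projection setup, not the matrix-decomposition variant proposed in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Model: y ~ c1*exp(-a1*t) + c2*exp(-a2*t); c's linear, a's nonlinear.
rng = np.random.default_rng(3)
t = np.linspace(0, 4, 200)
y = 2.0 * np.exp(-1.5 * t) + 0.5 * np.exp(-0.3 * t) \
    + 0.01 * rng.normal(size=t.size)

def residual(a):
    Phi = np.exp(-np.outer(t, a))                 # basis matrix Phi(a)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # project out linear part
    return Phi @ c - y

# Bounds keep the decay rates positive during the search.
sol = least_squares(residual, x0=[1.0, 0.1], bounds=(0.0, 10.0))
print("recovered decay rates:", np.sort(sol.x))
```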

  9. Protein Separation by Electrophoretic-Electroosmotic Focusing on Supported Lipid Bilayers

    PubMed Central

    Liu, Chunming; Monson, Christopher F.; Yang, Tinglu; Pace, Hudson; Cremer, Paul S.

    2011-01-01

    An electrophoretic-electroosmotic focusing (EEF) method was developed and used to separate membrane-bound proteins and charged lipids based on their charge-to-size ratio from an initially homogeneous mixture. EEF uses opposing electrophoretic and electroosmotic forces to focus and separate proteins and lipids into narrow bands on supported lipid bilayers (SLBs). Membrane-associated species were focused into specific positions within the SLB in a highly repeatable fashion. The steady-state focusing positions of the proteins could be predicted and controlled by tuning experimental conditions, such as buffer pH, ionic strength, electric field and temperature. Careful tuning of the variables should enable one to separate mixtures of membrane proteins with only subtle differences. The EEF technique was found to be an effective way to separate protein mixtures with low initial concentrations, and it overcame diffusive peak broadening to allow four bands to be separated simultaneously within a 380 μm wide isolated supported membrane patch. PMID:21958061

  10. Possibilities and limitations of the kinetic plot method in supercritical fluid chromatography.

    PubMed

    De Pauw, Ruben; Desmet, Gert; Broeckhoven, Ken

    2013-08-30

    Although supercritical fluid chromatography (SFC) is becoming a technique of increasing importance in the field of analytical chromatography, methods to compare the performance of SFC columns and separations in an unbiased way are not fully developed. The present study uses mathematical models to investigate the possibilities and limitations of the kinetic plot method in SFC, as this easily allows investigation of a wide range of operating pressures, retention and mobile phase conditions. The variable column length (L) kinetic plot method was further investigated in this work. Since the pressure history is identical for each measurement, this method gives the true kinetic performance limit in SFC. The deviations of the traditional way of measuring the performance as a function of flow rate (fixed back pressure and column length) and of the isopycnic method with respect to this variable column length method were investigated under a wide range of operational conditions. It is found that, using the variable L method, extrapolations towards other pressure drops are not valid in SFC (deviation of ∼15% for extrapolation from a 50 to a 200 bar pressure drop). The isopycnic method provides the best prediction, but its use is limited when operating closer to critical point conditions. When an organic modifier is used, the predictions are improved for both methods with respect to the variable L method (e.g. deviations decrease from 20% to 2% when 20 mol% of methanol is added).

  11. Modeling Pulse Transmission in the Monterey Bay Using Parabolic Equation Methods

    DTIC Science & Technology

    1991-12-01

    Collins 9-13 was chosen for this purpose due to its energy conservation scheme, and its ability to efficiently incorporate higher order terms in its...pressure field generated by the PE model into normal modes. Additionally, this process provides increased physical understanding of mode coupling and...separation of variables (i.e. normal modes or fast field), as well as pure numerical schemes such as the parabolic equation methods, can be used. However, as

  12. Research on theoretical optimization and experimental verification of minimum resistance hull form based on Rankine source method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-Ji; Zhang, Zhu-Xin

    2015-09-01

    To obtain a low-resistance, high-efficiency, energy-saving ship, a minimum-total-resistance hull form design method is studied based on the potential flow theory of wave-making resistance, taking into account the effects of stern viscous separation. With the sum of wave resistance and viscous resistance as the objective function and the parameters of a B-spline function as design variables, mathematical models are built using the Nonlinear Programming Method (NLP), enforcing the basic displacement constraint and accounting for stern viscous separation. We develop ship lines optimization procedures with independent intellectual property rights. The Series 60 hull is used as the parent ship in the optimization design to obtain a theoretically improved ship (Series60-1). Drag tests for the improved ship (Series60-1) are then carried out to verify the minimum-total-resistance hull form in practice.

  13. An Analysis of the Low Frequency Sound Field in Non-Rectangular Enclosures Using the Finite Element Method.

    NASA Astrophysics Data System (ADS)

    Geddes, Earl Russell

    The details of the low frequency sound field for a rectangular room can be studied by the use of an established analytic technique, separation of variables. The solution is straightforward and the results are well-known. A non-rectangular room has boundary conditions which are not separable, and therefore other solution techniques must be used. This study shows that the finite element method can be adapted for use in the study of sound fields in arbitrarily shaped enclosures. The finite element acoustics problem is formulated, and the modification of a standard program, which is necessary for solving acoustic field problems, is examined. The solution of the semi-non-rectangular room problem (one where the floor and ceiling remain parallel) is carried out by a combined finite element/separation of variables approach. The solution results are used to construct the Green's function for the low frequency sound field in five rooms (or data cases): (1) a rectangular (Louden) room; (2) the smallest wall of the Louden room canted 20 degrees from normal; (3) the largest wall of the Louden room canted 20 degrees from normal; (4) both the largest and the smallest walls canted 20 degrees; and (5) a five-sided room variation of Case 4. Case 1, the rectangular room, was calculated using both the finite element method and the separation of variables technique, and the results for the two methods are compared in order to assess the accuracy of the finite element models. The modal damping coefficients are calculated and the results examined. The statistics of the source- and receiver-averaged normalized RMS P^2 responses in the 80 Hz, 100 Hz, and 125 Hz one-third octave bands are developed. The receiver-averaged pressure response is developed to determine the effect of source location on the response; twelve source locations are examined and the results tabulated for comparison. The effect of a finite-sized source is examined briefly. Finally, the standard deviation of the spatial pressure response is studied; the results for this characteristic show that it is not significantly different in any of the rooms. The conclusions of the study are that only the frequency variations of the pressure response are affected by a room's shape. Further, in general, the simplest modification of a rectangular room (i.e., changing the angle of only one of the smallest walls) produces the most pronounced decrease in the pressure response variations in the low frequency region.
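
    For reference, the separable rectangular-room case against which the finite element results are checked has the classical rigid-wall eigenfrequencies

```latex
f_{n_x n_y n_z} \;=\; \frac{c}{2}\sqrt{\left(\frac{n_x}{L_x}\right)^{2} + \left(\frac{n_y}{L_y}\right)^{2} + \left(\frac{n_z}{L_z}\right)^{2}}, \qquad n_x, n_y, n_z = 0, 1, 2, \ldots
```

    where c is the speed of sound and L_x, L_y, L_z are the room dimensions. No such closed form exists once a wall is canted, which is what motivates the finite element treatment.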

  14. Advancing the Theory of Nuclear Reactions with Rare Isotopes: From the Laboratory to the Cosmos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elster, Charlotte

    2015-06-01

    The mission of the TORUS Topical Collaboration is to develop new methods that will advance nuclear reaction theory for unstable isotopes by using three-body techniques to improve direct-reaction calculations, and, by using a new partial-fusion theory, to integrate descriptions of direct and compound-nucleus reactions. Ohio University concentrates its efforts on the first part of the mission. Since direct measurements are often not feasible, indirect methods, e.g. (d,p) reactions, should be used. Those (d,p) reactions may be viewed as three-body reactions and described with Faddeev techniques. Faddeev equations in momentum space have a long tradition of utilizing separable interactions in order to arrive at sets of coupled integral equations in one variable. While there exist several separable representations for the nucleon-nucleon interaction, the optical potential between a neutron (proton) and a nucleus is not readily available in separable form. For this reason we first embarked on introducing a separable representation for complex phenomenological optical potentials of Woods-Saxon type.
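
    A rank-N separable representation of a two-body potential has the generic form shown below (schematically, for a single partial wave; EST-type constructions are one particular choice of the form factors g_i):

```latex
V(k,k') \;\approx\; \sum_{i,j=1}^{N} g_i(k)\, \lambda_{ij}\, g_j(k')
```

    Inserting this form into the Lippmann-Schwinger or Faddeev integral equations reduces the kernel to a finite matrix in the form-factor indices, which is what makes the momentum-space equations tractable in one variable.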

  15. Grassmann phase space methods for fermions. II. Field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalton, B.J., E-mail: bdalton@swin.edu.au; Jeffers, J.; Barnett, S.M.

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.

  16. Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.

    PubMed

    Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping

    2017-01-31

    In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. A novel reduced-order dynamic gain observer is constructed to estimate the unmeasured state variables of the system under a less conservative condition on the nonlinear terms than the traditional Lipschitz one. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, we separate the estimated state variables of different agents from each other. Consequently, the designed controllers depend on no estimated state variables of neighbors other than their outputs, and the dynamics of each agent can be greatly different, which gives the design method a wider range of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.

  17. A method for monitoring the variability in nuclear absorption characteristics of aviation fuels

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Shen, Chih-Ping

    1988-01-01

    A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
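
    The underlying relation is the Beer-Lambert attenuation law, I = I0 exp(-mu_m * rho * x). A minimal sketch of the mass attenuation coefficient calculation follows; the count rates, density, and path length below are made-up placeholders, not values from the study:

```python
import numpy as np

def mass_attenuation(counts_empty, counts_fuel, density_g_cm3, path_cm):
    """Mass attenuation coefficient (cm^2/g) from transmitted gamma
    counts via Beer-Lambert: mu_m = ln(I0/I) / (rho * x)."""
    return np.log(counts_empty / counts_fuel) / (density_g_cm3 * path_cm)

# Hypothetical example: 1.0e6 counts through the empty cell,
# 6.1e5 counts through a 4 cm cell of fuel at 0.80 g/cm^3.
print(mass_attenuation(1.0e6, 6.1e5, 0.80, 4.0))
```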

  18. Influence of the Separation of Prescription and Dispensation of Medicine on Its Cost in Japanese Prefectures

    PubMed Central

    Yokoi, Masayuki; Tashiro, Takao

    2014-01-01

    We studied how the separation of dispensing and prescribing of medicines between pharmacies and clinics (the “separation system”) can reduce internal medicine costs. To do so, we obtained publicly available data by searching electronic databases and official web pages of the Japanese government and non-profit public service corporations on the Internet. For Japanese medical institutions, participation in the separation system is optional. Consequently, the expansion rate of the separation system for each of the administrative districts is highly variable. The data were subjected to multiple regression analysis; daily internal medicines were the objective variable and expansion rate of the separation system was the explanatory variable. A multiple regression analysis revealed that the expansion rate of the separation system and the rate of replacing brand name medicine with generic medicine showed a significant negative partial correlation with daily internal medicine costs. Thus, the separation system was as effective in reducing medicine costs as the use of generic medicines. Because of its medical economic efficiency, the separation system should be expanded, especially in Asian countries in which the system is underdeveloped. PMID:24999122

  20. Separation of variables in anisotropic models: anisotropic Rabi and elliptic Gaudin model in an external magnetic field

    NASA Astrophysics Data System (ADS)

    Skrypnyk, T.

    2017-08-01

    We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3)⊗so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.

  1. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales (fast, slow, etc.), and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, because the multiple time scales approach separates the slow and fast behaviors of the satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual spin satellite, is designed in terms of the slow equation.
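
    In the standard multiple scales setup, time is extended into a hierarchy of progressively slower variables and the solution is expanded accordingly:

```latex
\tau_n = \varepsilon^{n} t, \qquad
x(t;\varepsilon) = x_0(\tau_0, \tau_1, \ldots) + \varepsilon\, x_1(\tau_0, \tau_1, \ldots) + \cdots, \qquad
\frac{d}{dt} = \frac{\partial}{\partial \tau_0} + \varepsilon \frac{\partial}{\partial \tau_1} + \cdots
```

    Secular (unbounded) terms are suppressed order by order, yielding separate equations for the fast and slow behaviors; it is the slow equation that is exploited in the nutation damping design.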

  2. Affinity capillary electrophoresis and fluorescence spectroscopy for studying enantioselective interactions between omeprazole enantiomer and human serum albumin.

    PubMed

    Xu, Yujing; Hong, Tingting; Chen, Xueping; Ji, Yibing

    2017-05-01

    Baseline separation of omeprazole (OME) enantiomers was achieved by affinity capillary electrophoresis (ACE), using human serum albumin (HSA) as the chiral selector. The influence of several experimental variables, such as HSA concentration, the type and content of organic modifiers, applied voltage and running buffer concentration, on the separation was evaluated. The binding of esomeprazole (S-omeprazole, S-OME) and its R-enantiomer (R-omeprazole, R-OME) to HSA under simulated physiological conditions was studied by ACE and by fluorescence spectroscopy, which was considered a reference method. ACE studies demonstrated that the binding constants of the two enantiomers to HSA were 3.18 × 10^3 M^-1 and 5.36 × 10^3 M^-1, respectively. The binding properties, including the fluorescence quenching mechanisms, binding constants, binding sites and number of binding sites, were obtained by fluorescence spectroscopy. Although the ACE method yields less information than the fluorescence spectroscopy method, it allows the separation and binding studies of chiral drugs to be carried out simultaneously. This study is of great significance for the investigation and clinical application of chiral drugs.

  3. Optimization of gold ore Sumbawa separation using gravity method: Shaking table

    NASA Astrophysics Data System (ADS)

    Ferdana, Achmad Dhaefi; Petrus, Himawan Tri Bayu Murti; Bendiyasa, I. Made; Prijambada, Irfan Dwidya; Hamada, Fumio; Sachiko, Takahi

    2018-04-01

    Most artisanal small-scale gold mining in Indonesia uses the amalgamation method, which has a negative environmental impact on the area around ore processing owing to the use of mercury. A more environmentally friendly method for gold processing is the gravity method. The shaking table is one piece of gravity-separation equipment used to upgrade the concentrate on the basis of differences in specific gravity. The optimum concentration result is influenced by several variables, such as shaking speed, particle size and deck slope. In this research, the shaking speed ranged between 100 rpm and 200 rpm, the particle size between -100 + 200 mesh and -200 + 300 mesh, and the deck slope between 3° and 7°. The gold concentration in the concentrate was measured by EDX. The results show that the optimum condition is obtained at a shaking speed of 200 rpm, a deck slope of 7° and a particle size of -100 + 200 mesh.
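
    A full factorial search over the three factors reported above is small enough to enumerate directly; the sketch below does so with an invented response function standing in for the EDX grade measurements (all numbers are placeholders, not the study's data).

        from itertools import product

        # Factor levels from the study: shaking speed (rpm),
        # deck slope (degrees), and particle size fraction (mesh range).
        speeds = [100, 150, 200]
        slopes = [3, 5, 7]
        sizes = ["-100+200", "-200+300"]

        def gold_grade(speed, slope, size):
            # Placeholder for an EDX measurement of concentrate grade;
            # a made-up response surface just to drive the search.
            bonus = 0.5 if size == "-100+200" else 0.0
            return 0.01 * speed + 0.3 * slope + bonus

        best = max(product(speeds, slopes, sizes), key=lambda f: gold_grade(*f))
        print("optimum condition:", best)  # here: (200, 7, '-100+200')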

  4. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    ERIC Educational Resources Information Center

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  5. Calculating terrain indices along streams: A new method for separating stream sides

    Treesearch

    T. J. Grabs; K. G. Jencso; B. L. McGlynn; J. Seibert

    2010-01-01

    There is increasing interest in assessing riparian zones and their hydrological and biogeochemical buffering capacity with indices derived from hydrologic landscape analysis of digital elevation data. Upslope contributing area is a common surrogate for lateral water flows and can be used to assess the variability of local water inflows to riparian zones and streams....

  6. Boundary value problems for multi-term fractional differential equations

    NASA Astrophysics Data System (ADS)

    Daftardar-Gejji, Varsha; Bhalekar, Sachin

    2008-09-01

    The multi-term fractional diffusion-wave equation, along with homogeneous/non-homogeneous boundary conditions, has been solved using the method of separation of variables. It is observed that, unlike in the one-term case, the solution of the multi-term fractional diffusion-wave equation is not necessarily non-negative and hence does not represent anomalous diffusion of any kind.

  7. Modular architecture for robotics and teleoperation

    DOEpatents

    Anderson, Robert J.

    1996-12-03

    Systems and methods for modularization and discretization of real-time robot, telerobot and teleoperation systems using passive, network-based control laws. Modules consist of network one-ports and two-ports. Wave variables and position information are passed between modules. The behavior of each module is decomposed into uncoupled linear-time-invariant elements and coupled, nonlinear memoryless elements, which are then discretized separately.

  8. From metadynamics to dynamics.

    PubMed

    Tiwary, Pratyush; Parrinello, Michele

    2013-12-06

    Metadynamics is a commonly used and successful enhanced sampling method. By the introduction of a history dependent bias which depends on a restricted number of collective variables it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on a previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.
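
    In outline, and as commonly written in the literature on this approach, the physical time is recovered from the biased run by reweighting each metadynamics time step with the instantaneous bias potential V(s, t) acting on the collective variables s:

        t_{\mathrm{phys}} = \sum_i \Delta t_{\mathrm{MetaD}}\; e^{\beta V(s(t_i),\, t_i)}, \qquad \beta = 1/k_B T,

    valid provided the bias is deposited slowly and the transition-state regions remain essentially bias-free; escape times collected in this way should be exponentially distributed, which offers a self-consistency check on the recovered rates.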

  9. From Metadynamics to Dynamics

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush; Parrinello, Michele

    2013-12-01

    Metadynamics is a commonly used and successful enhanced sampling method. By the introduction of a history dependent bias which depends on a restricted number of collective variables it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on a previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.

  10. 77 FR 74231 - Hatteras Variable Trust, et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... 6e-3(T)(b)(15) thereunder in cases where a life insurance company separate account supporting variable life insurance contracts (``VLI Accounts'') holds shares of an existing portfolio of the Trust (an... investors also hold shares of the Funds: (i) Any life insurance company separate account supporting variable...

  11. Evaluating variability and uncertainty separately in microbial quantitative risk assessment using two R packages.

    PubMed

    Pouillot, Régis; Delignette-Muller, Marie Laure

    2010-09-01

    Quantitative risk assessment has emerged as a valuable tool to enhance the scientific basis of regulatory decisions in the food safety domain. This article introduces the use of two new computing resources (R packages) specifically developed to help risk assessors in their projects. The first package, "fitdistrplus", gathers tools for choosing and fitting a parametric univariate distribution to a given dataset. The data may be continuous or discrete. Continuous data may be right-, left- or interval-censored as is frequently obtained with analytical methods, with the possibility of various censoring thresholds within the dataset. Bootstrap procedures then allow the assessor to evaluate and model the uncertainty around the parameters and to transfer this information into a quantitative risk assessment model. The second package, "mc2d", helps to build and study two dimensional (or second-order) Monte-Carlo simulations in which the estimation of variability and uncertainty in the risk estimates is separated. This package easily allows the transfer of separated variability and uncertainty along a chain of conditional mathematical and probabilistic models. The usefulness of these packages is illustrated through a risk assessment of hemolytic and uremic syndrome in children linked to the presence of Escherichia coli O157:H7 in ground beef. These R packages are freely available at the Comprehensive R Archive Network (cran.r-project.org). Copyright 2010 Elsevier B.V. All rights reserved.
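
    The packages themselves are written for R; as a language-neutral sketch of the second-order Monte-Carlo idea they implement, the following Python fragment keeps variability (inner dimension) and uncertainty (outer dimension) in separate loops. All distributions and parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_unc, n_var = 500, 1000  # uncertainty and variability dimensions

        # Outer loop: uncertainty about the mean log10 contamination,
        # e.g. from a bootstrap of a fitted distribution (invented values).
        mu_unc = rng.normal(-1.0, 0.3, n_unc)

        risks = np.empty(n_unc)
        for i, mu in enumerate(mu_unc):
            # Inner loop: variability of contamination between servings.
            dose = 10 ** rng.normal(mu, 0.8, n_var)
            p_ill = 1 - np.exp(-0.002 * dose)  # toy dose-response model
            risks[i] = p_ill.mean()            # mean risk over variability

        # Uncertainty band around the variability-averaged risk.
        print(np.percentile(risks, [2.5, 50, 97.5]))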

  12. Relaxation estimation of RMSD in molecular dynamics immunosimulations.

    PubMed

    Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena

    2012-01-01

    Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of "lagged RMSD-analysis" as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged.
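
    A minimal sketch of the lagged RMSD analysis, assuming coordinates are already superposed on a common reference and using a toy random-walk trajectory in place of a real MD run:

        import numpy as np

        def lagged_rmsd(traj, lag):
            """Mean RMSD between configuration pairs separated by `lag` frames.

            `traj` is an (n_frames, n_atoms, 3) coordinate array, assumed
            already fitted (superposed) onto a common reference.
            """
            diff = traj[lag:] - traj[:-lag]
            rmsd = np.sqrt((diff ** 2).sum(axis=(1, 2)) / traj.shape[1])
            return rmsd.mean()

        # Toy trajectory; in practice load coordinates with an MD toolkit.
        rng = np.random.default_rng(2)
        traj = np.cumsum(rng.normal(0, 0.01, (2000, 50, 3)), axis=0)

        # If RMSD(dt) keeps rising with dt instead of flattening out,
        # the simulation has not yet converged.
        for dt in (10, 100, 500, 1000):
            print(dt, lagged_rmsd(traj, dt))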

  13. A simple method to separate red wine nonpolymeric and polymeric phenols by solid-phase extraction.

    PubMed

    Pinelo, Manuel; Laurie, V Felipe; Waterhouse, Andrew L

    2006-04-19

    Simple polyphenols and tannins differ in the way they contribute to the organoleptic profile of wine and in their effects on human health. Very few straightforward techniques to separate red wine nonpolymeric phenols from the polymeric fraction are available in the literature; in general, they are complex, time-consuming, and generate large amounts of waste. In this procedure, the separation of these compounds was achieved using C18 cartridges, three solvents with different elution strengths, and pH adjustments of the experimental matrices. Two full factorial 2³ experimental designs were performed to find the optimal critical variables and their values, allowing for the maximization of tannin recovery and separation efficiency (SE). Nonpolymeric phenols such as phenolic acids, monomers and oligomers of flavonols and flavan-3-ols, and anthocyanins were removed from the column by means of an aqueous solvent followed by ethyl acetate. The polymeric fraction was then eluted with a combination of methanol/acetone/water. The best results were attained with 1 mL of wine sample, a 10% methanol/water solution (first eluant), ethyl acetate (second eluant), and 66% acetone/water as the polymeric phenol-eluting solution (third eluant), obtaining an SE of ca. 90%. Trials with this method on fruit juices also showed high separation efficiency. Hence, this solid-phase extraction method has been shown to be a simple and efficient alternative for separating the nonpolymeric phenolic fraction from the polymeric one, and it could have important applications in sample purification prior to biological testing, owing to the nonspecific binding of polymeric phenolics to nearly all enzymes and receptor sites.

  14. Discriminant Analysis of Time Series in the Presence of Within-Group Spectral Variability.

    PubMed

    Krafty, Robert T

    2016-07-01

    Many studies record replicated time series epochs from different groups with the goal of using frequency domain properties to discriminate between the groups. In many applications, there exists variation in cyclical patterns from time series in the same group. Although a number of frequency domain methods for the discriminant analysis of time series have been explored, there is a dearth of models and methods that account for within-group spectral variability. This article proposes a model for groups of time series in which transfer functions are modeled as stochastic variables that can account for both between-group and within-group differences in spectra identified from individual replicates. An ensuing discriminant analysis of stochastic cepstra under this model is developed to obtain parsimonious measures of relative power that optimally separate groups in the presence of within-group spectral variability. The approach possesses favorable properties in classifying new observations and can be consistently estimated through a simple discriminant analysis of a finite number of estimated cepstral coefficients. The benefits of accounting for within-group spectral variability are empirically illustrated in a simulation study and through an analysis of gait variability.
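
    The cepstral route to discrimination can be sketched briefly: estimate a few cepstral coefficients per epoch (inverse transform of the log periodogram) and feed them to a linear discriminant. The fragment below uses toy autoregressive epochs and scikit-learn; it omits the paper's stochastic transfer-function model and is only a schematic of the general idea.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def cepstral_coefficients(x, n_coef=10):
            # First cepstral coefficients of an epoch: inverse FFT of the
            # log periodogram.
            spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
            cep = np.fft.irfft(np.log(spec + 1e-12))
            return cep[:n_coef]

        # Toy epochs from two groups with different spectra (AR(1) noise).
        rng = np.random.default_rng(3)
        def epoch(rho, n=512):
            e = rng.normal(size=n)
            for t in range(1, n):
                e[t] += rho * e[t - 1]
            return e

        X = np.array([cepstral_coefficients(epoch(r))
                      for r in [0.3] * 50 + [0.7] * 50])
        y = np.array([0] * 50 + [1] * 50)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", lda.score(X, y))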

  15. Simultaneous determination of celecoxib, hydroxycelecoxib, and carboxycelecoxib in human plasma using gradient reversed-phase liquid chromatography with ultraviolet absorbance detection.

    PubMed

    Störmer, Elke; Bauer, Steffen; Kirchheiner, Julia; Brockmöller, Jürgen; Roots, Ivar

    2003-01-05

    A new HPLC method for the simultaneous determination of celecoxib, carboxycelecoxib and hydroxycelecoxib in human plasma samples has been developed. Following a solid-phase extraction procedure, the samples were separated by gradient reversed-phase HPLC (C(18)) and quantified using UV detection at 254 nm. The method was linear over the concentration range 10-500 ng/ml. The intra-assay variability for the three analytes ranged from 4.0 to 12.6% and the inter-assay variability from 4.9 to 14.2%. The achieved limit of quantitation (LOQ) of 10 ng/ml for each analyte allowed the determination of the pharmacokinetic parameters of the analytes after administration of 100 mg celecoxib.

  16. Enhancing Important Fluctuations: Rare Events and Metadynamics from a Conceptual Viewpoint

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele

    2016-05-01

    Atomistic simulations play a central role in many fields of science. However, their usefulness is often limited by the fact that many systems are characterized by several metastable states separated by high barriers, leading to kinetic bottlenecks. Transitions between metastable states are thus rare events that occur on significantly longer timescales than one can simulate in practice. Numerous enhanced sampling methods have been introduced to alleviate this timescale problem, including methods based on identifying a few crucial order parameters or collective variables and enhancing the sampling of these variables. Metadynamics is one such method that has proven successful in a great variety of fields. Here we review the conceptual and theoretical foundations of metadynamics. As demonstrated, metadynamics is not just a practical tool but can also be considered an important development in the theory of statistical mechanics.

  17. Chromatographic fingerprint analysis of yohimbe bark and related dietary supplements using UHPLC/UV/MS.

    PubMed

    Sun, Jianghao; Chen, Pei

    2012-03-05

    A practical ultra high-performance liquid chromatography (UHPLC) method was developed for fingerprint analysis of yohimbe bark and determination of yohimbine in bark and related dietary supplements. Good separation was achieved on a Waters Acquity BEH C(18) column with gradient elution, using 0.1% (v/v) aqueous ammonium hydroxide and 0.1% ammonium hydroxide in methanol as the mobile phases. This is the first reported chromatographic method that separates corynanthine from yohimbine in yohimbe bark extract. The chromatographic fingerprint analysis was applied to 18 commercial yohimbe dietary supplement samples. Quantitation of yohimbine, the traditional method for analysis of yohimbe bark, was also performed to evaluate the results of the fingerprint analysis. Wide variability was observed in fingerprints and yohimbine content among yohimbe dietary supplement samples. For most of the dietary supplements, the yohimbine content was not consistent with the label claims. Copyright © 2011. Published by Elsevier B.V.

  18. Transponder-aided joint calibration and synchronization compensation for distributed radar systems.

    PubMed

    Wang, Wen-Qin

    2015-01-01

    High-precision radiometric calibration and synchronization compensation must be provided for distributed radar systems because their transmitters and receivers are separate. This paper proposes transponder-aided joint radiometric calibration, motion compensation and synchronization for distributed radar remote sensing. As the transponder signal can be separated from the normal radar returns, it is used to calibrate the distributed radar radiometrically. Meanwhile, distributed radar motion compensation and synchronization compensation algorithms are presented that utilize the transponder signals. This method requires no hardware modifications to either the normal radar transmitter or the receiver, and no change to the operating pulse repetition frequency (PRF). The distributed radar radiometric calibration and synchronization compensation require only one transponder, but the motion compensation requires six transponders because there are six independent variables in the distributed radar geometry. Furthermore, a maximum likelihood method is used to estimate the transponder signal parameters. The proposed methods are verified by simulation results.

  19. Multi-annual modes in the 20th century temperature variability in reanalyses and CMIP5 models

    NASA Astrophysics Data System (ADS)

    Järvinen, Heikki; Seitola, Teija; Silén, Johan; Räisänen, Jouni

    2016-11-01

    A performance expectation is that Earth system models simulate well both the climate mean state and the climate variability. To test this expectation, we decompose two 20th century reanalysis data sets and 12 CMIP5 model simulations of monthly mean near-surface air temperature for the years 1901-2005 using randomised multi-channel singular spectrum analysis (RMSSA). Because of the relatively short time span, we concentrate on the representation of multi-annual variability, which the RMSSA method effectively captures as separate and mutually orthogonal spatio-temporal components. This decomposition is a unique way to separate statistically significant quasi-periodic oscillations from one another in high-dimensional data sets. The main results are as follows. First, the total spectra for the two reanalysis data sets are remarkably similar on all timescales, except that the spectral power in ERA-20C is systematically slightly higher than in 20CR. Apart from the slow components related to multi-decadal periodicities, ENSO oscillations with approximately 3.5- and 5-year periods are the most prominent forms of variability in both reanalyses; in 20CR they are slightly more pronounced than in ERA-20C. Since about the 1970s, the amplitudes of the 3.5- and 5-year oscillations have increased, presumably due to some combination of forced climate change, intrinsic low-frequency climate variability, and changes in the global observing network. Second, none of the 12 coupled climate models closely reproduces all aspects of the reanalysis spectra, although some models represent many aspects well. For instance, the GFDL-ESM2M model has two well-separated ENSO periods, although they are too prominent compared with the reanalyses. An extensive Supplement and YouTube videos illustrate the multi-annual variability of the data sets.

  20. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications.

    PubMed

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-11-17

    Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends' preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors.

  1. Atmospheric QBO and ENSO indices with high vertical resolution from GNSS radio occultation temperature measurements

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Hallgeir; Ladstädter, Florian; Scherllin-Pirscher, Barbara; Steiner, Andrea K.

    2018-03-01

    We provide atmospheric temperature variability indices for the tropical troposphere and stratosphere based on global navigation satellite system (GNSS) radio occultation (RO) temperature measurements. By exploiting the high vertical resolution and the uniform distribution of the GNSS RO temperature soundings we introduce two approaches, both based on an empirical orthogonal function (EOF) analysis. The first method utilizes the whole vertical and horizontal RO temperature field from 30° S to 30° N and from 2 to 35 km altitude. The resulting indices, the leading principal components, resemble the well-known patterns of the Quasi-Biennial Oscillation (QBO) and the El Niño-Southern Oscillation (ENSO) in the tropics. They provide some information on the vertical structure; however, they are not vertically resolved. The second method applies the EOF analysis on each altitude level separately and the resulting indices contain information on the horizontal variability at each densely available altitude level. They capture more variability than the indices from the first method and present a mixture of all variability modes contributing at the respective altitude level, including the QBO and ENSO. Compared to commonly used variability indices from QBO winds or ENSO sea surface temperature, these new indices cover the vertical details of the atmospheric variability. Using them as proxies for temperature variability is also of advantage because there is no further need to account for response time lags. Atmospheric variability indices as novel products from RO are expected to be of great benefit for studies on atmospheric dynamics and variability, for climate trend analysis, as well as for climate model evaluation.
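
    Both approaches rest on an EOF decomposition, which for an anomaly field stored as a time-by-space matrix is just a singular value decomposition. A minimal sketch with an invented field (grid, period and noise level are placeholders, not the RO data):

        import numpy as np

        # Toy anomaly field: time x space (e.g., monthly temperatures on a
        # latitude-altitude grid flattened along one axis).
        rng = np.random.default_rng(4)
        t = np.arange(240)
        pattern = np.sin(np.linspace(0, np.pi, 60))            # one spatial mode
        field = np.outer(np.sin(2 * np.pi * t / 28), pattern)  # ~QBO-like period
        field += rng.normal(0, 0.3, field.shape)

        anom = field - field.mean(axis=0)            # remove the time mean
        U, s, Vt = np.linalg.svd(anom, full_matrices=False)

        eofs = Vt                  # spatial patterns (EOFs)
        pcs = U * s                # principal components = variability indices
        explained = s ** 2 / (s ** 2).sum()
        print("variance fraction of leading mode:", explained[0])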

  2. Parental Separation, Parental Alcoholism, and Timing of First Sexual Intercourse

    PubMed Central

    Waldron, Mary; Doran, Kelly A.; Bucholz, Kathleen K.; Duncan, Alexis E.; Lynskey, Michael T.; Madden, Pamela A. F.; Sartor, Carolyn E.; Heath, Andrew C.

    2015-01-01

    Purpose We examined timing of first voluntary sexual intercourse as a joint function of parental separation during childhood and parental history of alcoholism. Methods Data were drawn from a birth cohort of female like-sex twins (n=569 African Ancestry [AA], n=3415 European or other Ancestry [EA]). Cox proportional hazards regression was conducted predicting age at first sex from dummy variables coding for parental separation and parental alcoholism. Propensity score analysis was also employed comparing intact and separated families, stratified by predicted probability of separation. Results Earlier sex was reported by EA twins from separated and alcoholic families, compared to EA twins from intact nonalcoholic families, with effects most pronounced through age 14. Among AA twins, effects of parental separation and parental alcoholism were largely nonsignificant. Results of propensity score analyses confirmed unique risks from parental separation in EA families, where consistent effects of parental separation were observed across predicted probability of separation. For AA families there was poor matching on risk-factors presumed to predate separation, which limited interpretability of survival-analytic findings. Conclusions In European American families, parental separation during childhood is an important predictor of early-onset sex, beyond parental alcoholism and other correlated risk-factors. To characterize risk for African Americans associated with parental separation, additional research is needed where matching on confounders can be achieved. PMID:25907653
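
    A minimal sketch of the survival model used here, with invented toy data (far too small for real inference) and the lifelines package assumed available; column names are hypothetical.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical data: age at first sex (or at censoring), an event
        # indicator, and dummy-coded family-history variables.
        df = pd.DataFrame({
            "age_first_sex":        [15, 17, 14, 19, 16, 18, 13, 20],
            "event":                [1,  1,  1,  0,  1,  1,  1,  0],
            "parental_separation":  [1,  0,  1,  0,  0,  1,  1,  0],
            "parental_alcoholism":  [1,  1,  0,  0,  1,  0,  1,  0],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="age_first_sex", event_col="event")
        cph.print_summary()  # hazard ratios for separation and alcoholism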

  3. A drift-diffusion checkpoint model predicts a highly variable and growth-factor-sensitive portion of the cell cycle G1 phase.

    PubMed

    Jones, Zack W; Leander, Rachel; Quaranta, Vito; Harris, Leonard A; Tyson, Darren R

    2018-01-01

    Even among isogenic cells, the time to progress through the cell cycle, or the intermitotic time (IMT), is highly variable. This variability has been a topic of research for several decades and numerous mathematical models have been proposed to explain it. Previously, we developed a top-down, stochastic drift-diffusion+threshold (DDT) model of a cell cycle checkpoint and showed that it can accurately describe experimentally-derived IMT distributions [Leander R, Allen EJ, Garbett SP, Tyson DR, Quaranta V. Derivation and experimental comparison of cell-division probability densities. J. Theor. Biol. 2014;358:129-135]. Here, we use the DDT modeling approach for both descriptive and predictive data analysis. We develop a custom numerical method for the reliable maximum likelihood estimation of model parameters in the absence of a priori knowledge about the number of detectable checkpoints. We employ this method to fit different variants of the DDT model (with one, two, and three checkpoints) to IMT data from multiple cell lines under different growth conditions and drug treatments. We find that a two-checkpoint model best describes the data, consistent with the notion that the cell cycle can be broadly separated into two steps: the commitment to divide and the process of cell division. The model predicts one part of the cell cycle to be highly variable and growth factor sensitive while the other is less variable and relatively refractory to growth factor signaling. Using experimental data that separates IMT into G1 vs. S, G2, and M phases, we show that the model-predicted growth-factor-sensitive part of the cell cycle corresponds to a portion of G1, consistent with previous studies suggesting that the commitment step is the primary source of IMT variability. These results demonstrate that a simple stochastic model, with just a handful of parameters, can provide fundamental insights into the biological underpinnings of cell cycle progression.
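
    For a single drift-diffusion checkpoint, the first-passage time to the threshold is inverse-Gaussian distributed, so a one-checkpoint variant of the model can be fitted with standard tools; the authors' multi-checkpoint fits require their custom maximum likelihood method. The sketch below uses scipy's invgauss on synthetic IMT data, with all parameter values invented.

        import numpy as np
        from scipy import stats

        # Synthetic intermitotic times (hours); a first passage of
        # drift-diffusion to a fixed threshold is inverse Gaussian.
        imt = stats.invgauss.rvs(mu=0.5, scale=40, size=300, random_state=5)

        # Maximum likelihood fit with the location pinned at zero
        # (i.e., no deterministic delay before the checkpoint).
        mu, loc, scale = stats.invgauss.fit(imt, floc=0)
        print("fitted parameters:", mu, scale)

        # Log-likelihoods of competing variants can then be compared with
        # AIC, mirroring the one- vs two-checkpoint comparison in the paper.
        ll = stats.invgauss.logpdf(imt, mu, loc, scale).sum()
        print("AIC:", 2 * 2 - 2 * ll)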

  4. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    PubMed

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted features means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV segments classified by the LD classifier. A combination of linear/nonlinear features from HRV signals is effective in automatic sleep staging. Moreover, time-frequency features are more informative than others. In addition, a separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments in automatic sleep staging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Relaxation Estimation of RMSD in Molecular Dynamics Immunosimulations

    PubMed Central

    Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena

    2012-01-01

    Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of “lagged RMSD-analysis” as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged. PMID:23019425

  6. Multiple-Locus Variable-Number Tandem-Repeats Analysis of Escherichia coli O157 using PCR multiplexing and multi-colored capillary electrophoresis.

    PubMed

    Lindstedt, Bjørn-Arne; Vardund, Traute; Kapperud, Georg

    2004-08-01

    The Multiple-Locus Variable-Number Tandem-Repeats Analysis (MLVA) method is currently being used as the primary typing tool for Shiga-toxin-producing Escherichia coli (STEC) O157 isolates in our laboratory. The initial assay was performed using a single fluorescent dye and the different patterns were assigned using a gel image. Here, we present a significantly improved assay using multiple dye colors and enhanced PCR multiplexing to increase speed, and ease the interpretation of the results. The different MLVA patterns are now based on allele sizes entered as character values, thus removing the uncertainties introduced when analyzing band patterns from the gel image. We additionally propose an easy numbering scheme for the identification of separate isolates that will facilitate exchange of typing data. Seventy-two human and animal strains of Shiga-toxin-producing E. coli O157 were used for the development of the improved MLVA assay. The method is based on capillary separation of multiplexed PCR products of VNTR loci in the E. coli O157 genome labeled with multiple fluorescent dyes. The different alleles at each locus were then assigned to allele numbers, which were used for strain comparison.

  7. The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, Martin M.

    Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.

  8. Development of a non-destructive method for determining protein nitrogen in a yellow fever vaccine by near infrared spectroscopy and multivariate calibration.

    PubMed

    Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen

    2018-08-05

    Near infrared spectroscopy (NIR) with diffuse reflectance, combined with multivariate calibration, has as its main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals; it is rapid and requires no reagent consumption, chemical waste production or sample manipulation. Seeking to optimize quality-control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares (PLS) method with multiplicative signal correction (MSC) treatment and mean centering (MC) of the data, using a minimum number of latent variables (LV) equal to 1, with the lowest root mean squared prediction error (0.00330) associated with the highest percentage (91%) of samples. Accuracy ranged from 95 to 105% recovery in the 4000-5184 cm⁻¹ region. Copyright © 2018 Elsevier B.V. All rights reserved.
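
    A minimal sketch of the PLS calibration step on synthetic NIR-like spectra (the MSC pretreatment is omitted; band positions, noise levels and sample counts are invented):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        # Toy spectra: 60 samples x 300 wavenumber channels, with the
        # protein nitrogen signal carried by one broad band plus noise.
        rng = np.random.default_rng(6)
        y = rng.uniform(0.05, 0.15, 60)            # reference (Kjeldahl) values
        band = np.exp(-0.5 * ((np.arange(300) - 120) / 15) ** 2)
        X = np.outer(y, band) + rng.normal(0, 0.01, (60, 300))

        # With one dominant source of variation, a single latent variable
        # suffices, as in the paper.
        pls = PLSRegression(n_components=1)
        print(cross_val_score(pls, X, y, scoring="neg_root_mean_squared_error"))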

  9. Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (4).

    PubMed

    Murase, Kenya

    2016-01-01

    Partial differential equations are often used in the field of medical physics. In this (final) issue, methods for solving partial differential equations are introduced, including the separation of variables, integral transform (Fourier and Fourier-sine transforms), Green's function, and series expansion methods. Some examples are also given, in which the integral transform and Green's function methods are applied to solving Pennes' bioheat transfer equation, and the Fourier series expansion method is applied to the Navier-Stokes equation for analyzing the wall shear stress in blood vessels. Finally, the author hopes that this series will be helpful for people who engage in medical physics.
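
    As a compact reminder of the first of these techniques, separation of variables for the one-dimensional heat equation u_t = κ u_xx with u(0, t) = u(L, t) = 0 proceeds via the ansatz u(x, t) = X(x)T(t):

        \frac{T'(t)}{\kappa\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda_n, \qquad \lambda_n = \left(\frac{n\pi}{L}\right)^{2},

    so that

        u(x,t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}\; e^{-\kappa\lambda_n t}, \qquad b_n = \frac{2}{L}\int_0^L u(x,0)\,\sin\frac{n\pi x}{L}\,dx,

    which is exactly where the Fourier-sine transform mentioned in the abstract enters: the coefficients b_n are the sine-transform of the initial condition.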

  10. Axisymmetric black holes allowing for separation of variables in the Klein-Gordon and Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Konoplya, R. A.; Stuchlík, Z.; Zhidenko, A.

    2018-04-01

    We determine the class of axisymmetric and asymptotically flat black-hole spacetimes for which the test Klein-Gordon and Hamilton-Jacobi equations allow for the separation of variables. The known Kerr, Kerr-Newman, Kerr-Sen and some other black-hole metrics in various theories of gravity are within the class of spacetimes described here. It is shown that although the black-hole metric in the Einstein-dilaton-Gauss-Bonnet theory does not allow for the separation of variables (at least in the considered coordinates), for a number of applications it can be effectively approximated by a metric within the above class. This gives us some hope that the class of spacetimes described here may be not only generic for the known solutions allowing for the separation of variables, but also a good approximation for a broader class of metrics that do not admit such separation. Finally, the generic form of the axisymmetric metric is expanded in the radial direction in terms of continued fractions, and the connection with other black-hole parametrizations is discussed.
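
    For orientation, separability in this context means that the Hamilton-Jacobi action and the Klein-Gordon field admit the additive/multiplicative ansatz

        S = -E t + L_z \varphi + S_r(r) + S_\theta(\theta), \qquad \Phi = e^{-i\omega t + i m \varphi}\, R(r)\,\Theta(\theta),

    after which the equations split into ordinary differential equations in r and θ coupled only through a Carter-type separation constant; the class of metrics constructed in the paper is characterized by exactly this splitting.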

  11. A systematic approach to evaluate parameter consistency in the inlet stream of source separated biowaste composting facilities: A case study in Colombia.

    PubMed

    Oviedo-Ocaña, E R; Torres-Lozada, P; Marmolejo-Rebellon, L F; Torres-López, W A; Dominguez, I; Komilis, D; Sánchez, A

    2017-04-01

    Biowaste is commonly the largest fraction of municipal solid waste (MSW) in developing countries. Although composting is an effective method to treat source separated biowaste (SSB), there are certain operational limitations, partly due to insufficient control of the variability of SSB quality, which affects process kinetics and product quality. This study assesses the variability of the SSB physicochemical quality in a composting facility located in a small town of Colombia, in which SSB collection was performed twice a week. Likewise, the influence of SSB physicochemical variability on the variability of compost parameters was assessed. Parametric and non-parametric tests (i.e., Student's t-test and the Mann-Whitney test) showed no significant differences in the quality parameters of SSB among collection days; therefore, it was unnecessary to establish specific operation and maintenance regulations for each collection day. Significant variability was found in eight of the twelve quality parameters analyzed in the inlet stream, with coefficients of variation (CV) higher than 23%. The CVs for the eight parameters analyzed in the final compost (i.e., pH, moisture, total organic carbon, total nitrogen, C/N ratio, total phosphorus, total potassium and ash) ranged from 9.6% to 49.4%, with significant variation in five of these parameters (CV > 20%). These results indicate that variability in the inlet stream can affect the variability of the end-product, and they suggest that composting facilities need to account for inlet-stream variability to achieve a compost of consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Separating out the influence of climatic trend, fluctuations, and extreme events on crop yield: a case study in Hunan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Zhu; Shi, Peijun; Zhang, Zhao; Meng, Yongchang; Luan, Yibo; Wang, Jiwei

    2017-09-01

    Separating out the influence of climatic trend, fluctuations and extreme events on crop yield is of paramount importance to climate change adaptation, resilience, and mitigation. Previous studies lack a systematic and explicit assessment of these three fundamental aspects of climate change on crop yield. This research attempts to separate out the impacts on rice yields of the climatic trend (linear trend change related to the mean value), fluctuations (variability surpassing the "fluctuation threshold", defined as one standard deviation (1 SD) of the residual between the original data series and the linear trend value for each climatic variable), and extreme events (identified by an absolute criterion for each kind of extreme event related to crop yield). The main idea of the method is to construct climate scenarios combined with a crop system simulation model. Comparable climate scenarios were designed to express the impact of each climate change component and were input to the crop system model (CERES-Rice), which calculated the related simulated yield gap to quantify the percentage impacts of climatic trend, fluctuations, and extreme events. Six Agro-Meteorological Stations (AMS) in Hunan province were selected to study quantitatively the impact of climatic trend, fluctuations and extreme events in three climatic variables (air temperature, precipitation, and sunshine duration) on early rice yield during 1981-2012. The results showed that extreme events had the greatest impact on early rice yield (-2.59 to -15.89%), followed by climatic fluctuations (-2.60 to -4.46%) and then the climatic trend (4.91 to 2.12%). Furthermore, the influence of the climatic trend on early rice yield presented "trade-offs" among the various climate variables and AMS. The climatic trend and extreme events associated with air temperature showed larger effects on early rice yield than the other climatic variables, particularly high-temperature events (-2.11 to -12.99%). Finally, the methodology used to separate out the influences of the climatic trend, fluctuations, and extreme events on crop yield proved to be feasible and robust. Designing different climate scenarios and feeding them into a crop system model is a practical way to evaluate the quantitative impact of each climate variable.
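
    The trend/fluctuation decomposition quoted above (a linear trend plus a 1 SD residual threshold) is easy to reproduce; the sketch below applies it to an invented temperature series, with all values hypothetical.

        import numpy as np

        def decompose(series):
            """Split a climate series into a linear trend, residuals, and
            'fluctuation' years whose residual exceeds 1 SD, following the
            threshold definition quoted in the abstract."""
            t = np.arange(len(series))
            slope, intercept = np.polyfit(t, series, 1)
            trend = slope * t + intercept
            resid = series - trend
            threshold = resid.std()                  # 1 SD fluctuation threshold
            fluctuating = np.abs(resid) > threshold  # candidate fluctuation years
            return trend, resid, fluctuating

        rng = np.random.default_rng(7)
        temp = 26 + 0.03 * np.arange(32) + rng.normal(0, 0.5, 32)  # toy series
        trend, resid, flags = decompose(temp)
        print("fluctuation years:", np.flatnonzero(flags) + 1981)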

  13. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications

    PubMed Central

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-01-01

    Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors. PMID:28231172

  14. The positive effects of different platelet-rich plasma methods on human muscle, bone, and tendon cells.

    PubMed

    Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Dugdale, Evan M; Hansen, Derek; Cote, Mark P; Bradley, James P; Romeo, Anthony A; Arciero, Robert A; Beitzel, Knut

    2012-08-01

    Clinical application of platelet-rich plasma (PRP) in the realm of orthopaedic sports medicine has yielded variable results. Differences in separation methods and variability among individuals may contribute to these variable results. To compare the effects of different PRP separation methods on human bone, muscle, and tendon cells in an in vitro model. Controlled laboratory study. Blood collected from 8 participants (mean ± SD age 31.6 ± 10.9 years) was used to obtain PRP preparations. Three different PRP separation methods were used: a single-spin process yielding a lower platelet concentration (PRP(LP)), a single-spin process yielding high platelet and white blood cell concentrations (PRP(HP)), and a double-spin process that produces a higher platelet concentration and lower white blood cell concentration (PRP(DS)). Human bone, muscle, and tendon cells obtained from discarded tissue samples during shoulder surgery were placed into culture and treated with the 3 PRP preparations, control media (2% fetal bovine serum [FBS] and 10% FBS), and native blood. Radioactive thymidine assays were obtained to examine cell proliferation, and testing with enzyme-linked immunosorbent assay was used to determine growth factor concentrations. Addition of PRP(LP) to osteocytes, myocytes, and tenocytes significantly increased cell proliferation (P ≤ .05) compared with the controls. Adding PRP(DS) to osteoblasts and tenocytes increased cell proliferation significantly (P ≤ .05), but no significance was shown for its addition to myocytes. The addition of PRP(HP) significantly increased cell proliferation compared with the controls only when added to tenocytes (P ≤ .05). Osteoblasts: Proliferation was significantly increased by addition of PRP(LP) compared with all controls (2% FBS, 10% FBS, native blood) (P ≤ .05). Addition of PRP(DS) led to significantly increased proliferation compared with all controls, native blood, and PRP(HP) (P ≤ .05). Proliferation was significantly less when PRP(HP) was added compared with PRP(DS) (P ≤ .05). Myocytes: Proliferation was significantly increased by addition of PRP(LP) compared with native blood (P ≤ .05). Adding PRP(HP) or PRP(DS) to myocytes showed no significant increase in proliferation compared with the controls or the other separations. Tenocytes: Proliferation was significantly increased by addition of PRP(LP) compared with all controls (2% FBS, 10% FBS, native blood) (P ≤ .05). Addition of PRP(DS) showed a significant increase compared with the controls and native blood. For tenocytes, there was a significant increase (P ≤ .05) seen when PRP(HP) was added compared with the controls and native blood but not compared with the other separations. The primary findings of this study suggest the application of different PRP separations may result in a potentially beneficial effect on the clinically relevant target cells in vitro. However, it is unclear which platelet concentration or PRP preparation may be optimal for the treatment of various cell types. In addition, a "more is better" theory for the use of higher platelet concentrations cannot be supported. This study was not intended to prove efficacy but to provide a platform for future research to be built upon. The utilization of different PRP separations may result in a potentially beneficial effect on the clinically relevant target cells in vitro, but it is unclear which platelet concentration or PRP preparation may be optimal for the treatment of various cell types.

  15. Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages

    NASA Astrophysics Data System (ADS)

    Williams, Kevin; McCandless, Jason; Gregg, David

    In this paper, we present a portable approach to JIT compilation for dynamically typed scripting languages. At runtime we generate ANSI C code and use the system's native C compiler to compile this code. The C compiler runs on a thread separate from the interpreter, allowing program execution to continue during JIT compilation. Dynamic languages have variables which may change type at any point in execution. Our interpreter profiles variable types at both whole-method and partial-method granularity. When a frequently executed region of code is discovered, the compilation thread generates a specialized version of the region based on the profiled types. In this paper, we evaluate the level of instruction specialization achieved by our profiling scheme as well as the overall performance of our JIT.
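
    The overall mechanism can be sketched in a few lines: emit C source for a hot, type-specialized function, invoke the system compiler, and load the result. The sketch below (in Python rather than the paper's interpreter, assuming a POSIX cc on the PATH, and compiling synchronously for brevity rather than on a separate thread) is an illustration of the idea, not the paper's implementation.

        import ctypes, os, subprocess, tempfile

        # Generated C for a hot loop, specialized under the profiled
        # assumption that xs is an array of doubles.
        c_source = """
        double hot_loop(double *xs, int n) {
            double acc = 0.0;
            for (int i = 0; i < n; i++) acc += xs[i] * xs[i];
            return acc;
        }
        """

        workdir = tempfile.mkdtemp()
        src = os.path.join(workdir, "hot.c")
        lib = os.path.join(workdir, "hot.so")
        with open(src, "w") as f:
            f.write(c_source)
        # Assumes a POSIX C compiler named "cc" is installed.
        subprocess.check_call(["cc", "-O2", "-shared", "-fPIC", "-o", lib, src])

        hot = ctypes.CDLL(lib).hot_loop
        hot.restype = ctypes.c_double
        xs = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
        print(hot(xs, 4))  # 30.0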

  16. Application of copulas to improve covariance estimation for partial least squares.

    PubMed

    D'Angelo, Gina M; Weissfeld, Lisa A

    2013-02-20

    Dimension reduction techniques, such as partial least squares, are useful for computing summary measures and examining relationships in complex settings. Partial least squares requires an estimate of the covariance matrix as a first step in the analysis, making this estimate critical to the results. In addition, the covariance matrix also forms the basis for other techniques in multivariate analysis, such as principal component analysis and independent component analysis. This paper was motivated by an example from an imaging study in Alzheimer's disease where there is complete separation between Alzheimer's and control subjects for one of the imaging modalities. This separation occurs in one block of variables and does not occur with the second block of variables, resulting in inaccurate estimates of the covariance. We propose the use of a copula to obtain estimates of the covariance in this setting, where one set of variables comes from a mixture distribution. Simulation studies show that the proposed estimator is an improvement over the standard estimators of covariance. We illustrate the methods using the motivating example from the study in Alzheimer's disease. Copyright © 2012 John Wiley & Sons, Ltd.
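
    One common copula-based route to a robust correlation estimate, shown here as a hedged sketch rather than the paper's exact estimator, maps each variable to normal scores through its empirical ranks before correlating:

        import numpy as np
        from scipy import stats

        def copula_correlation(X):
            """Correlation estimated through a Gaussian copula: map each
            column to normal scores via its empirical ranks, then correlate
            the scores. This is robust when one block of variables comes
            from a mixture (e.g., completely separated groups)."""
            n = X.shape[0]
            ranks = np.apply_along_axis(stats.rankdata, 0, X)
            z = stats.norm.ppf(ranks / (n + 1))   # normal scores
            return np.corrcoef(z, rowvar=False)

        # Toy data: variable 0 is a two-component mixture (separated groups),
        # variable 1 is Gaussian and correlated with it.
        rng = np.random.default_rng(8)
        g = np.repeat([0, 1], 100)
        x0 = rng.normal(5 * g, 1)
        x1 = 0.8 * x0 + rng.normal(0, 1, 200)
        print(copula_correlation(np.column_stack([x0, x1])))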

  17. Blind identification of the kinetic parameters in three-compartment models

    NASA Astrophysics Data System (ADS)

    Riabkov, Dmitri Y.; Di Bella, Edward V. R.

    2004-03-01

    Quantified knowledge of tissue kinetic parameters in regions of the brain and other organs can offer information useful in clinical and research applications. Dynamic medical imaging with injection of a radioactive or paramagnetic tracer can be used for this measurement. The kinetics of some widely used tracers, such as [18F]2-fluoro-2-deoxy-D-glucose, can be described by a three-compartment physiological model, and the kinetic parameters of the tissue can be estimated from dynamically acquired images. The feasibility of estimation by blind identification, which does not require knowledge of the blood input, is considered analytically and numerically in this work for the three-compartment type of tissue response. The two-region case of blind identification of kinetic parameters in the three-compartment model is shown to be non-unique; at least three regions are needed for the blind identification to be unique. Numerical results for the accuracy of these blind identification methods under different conditions were obtained, using both a separable variables least-squares (SLS) approach and an eigenvector-based algorithm for multichannel blind deconvolution; the latter showed poor accuracy. Modifications for non-uniform time sampling were also developed, and a further method that uses a model for the blood input was compared. Results for the macroparameter K, which reflects the metabolic rate of glucose usage, using three regions with noise showed comparable accuracy for the separable variables least squares method and for the input model-based method, and slightly worse accuracy for SLS with the non-uniform sampling modification.

  18. Application of UHPLC for the determination of free amino acids in different cheese varieties.

    PubMed

    Mayer, Helmut K; Fiechter, Gregor

    2013-10-01

    A rapid ultra-high performance liquid chromatography (UHPLC) protocol for the determination of amino acids as their respective 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) derivatives was successfully applied for assessing free amino acid levels in commercial cheese samples representing typical product groups (ripening protocols) in cheesemaking. Based on the Waters AccQ.Tag™ method, an HPLC amino acid solution designed for hydrolyzate analyses, the method was adapted to UHPLC, and detection of the AQC derivatives was changed from fluorescence (λ(Ex) 250 nm/λ(Em) 395 nm) to UV (254 nm). Compared with the original HPLC method, UHPLC proved superior, facilitating excellent separation of 18 amino acids within only 12 min (versus more than 35 min for HPLC) while retaining the original separation chemistry and amino acid elution pattern. Free amino acid levels of the analyzed cheese samples showed a high degree of variability depending on the cheese type, with the highest total amounts found for original Italian extra-hard cheeses (up to 9,000 mg/100 g) and the lowest for surface mold- or bacterial smear-ripened soft cheeses (200-600 mg/100 g). Despite the intrinsic variability in both total and specific concentrations, the established UHPLC method enabled reliable and interference-free amino acid profiling across all cheese types, thus providing a valuable tool for generating high-quality data for the characterization of cheese ripening.

  19. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.

    PubMed

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H

    2016-06-01

    Background Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or the data analysis phase. Aim of the Review To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for proper application in pharmacoepidemiology. Methods/Results Methods to control for unmeasured confounding in the design phase of a study are case only designs (e.g., case-crossover, case-time control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include, negative control method, perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for interpretation of the study results.

  20. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy.

    PubMed

    Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
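
    The Renyi entropy itself is a one-liner once probabilities are in hand; what the paper varies is how those probabilities are estimated. A sketch contrasting a single-interval histogram with a crude sequence-based (pairwise) estimate, on invented RR data (bin counts and distribution parameters are placeholders, not the authors' settings):

        import numpy as np

        def renyi_entropy(p, alpha):
            """Renyi entropy of order alpha for a probability vector p."""
            p = p[p > 0]
            if np.isclose(alpha, 1.0):
                return -(p * np.log(p)).sum()        # Shannon limit
            return np.log((p ** alpha).sum()) / (1.0 - alpha)

        # Histogram method on single RR intervals (found weak in the paper).
        rng = np.random.default_rng(9)
        rr = rng.normal(0.8, 0.05, 5000)             # toy RR intervals (s)
        counts, _ = np.histogram(rr, bins=50)
        p_hist = counts / counts.sum()

        # Sequence-based estimate: probabilities of consecutive RR pairs,
        # a crude stand-in for the density method the authors prefer.
        pairs, _, _ = np.histogram2d(rr[:-1], rr[1:], bins=20)
        p_seq = (pairs / pairs.sum()).ravel()

        for a in (0.5, 1.0, 2.0, 5.0):
            print(a, renyi_entropy(p_hist, a), renyi_entropy(p_seq, a))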

  1. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy

    PubMed Central

    Cornforth, David J.;  Tarvainen, Mika P.; Jelinek, Herbert F.

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311

  2. Method for chemically analyzing a solution by acoustic means

    DOEpatents

    Beller, Laurence S.

    1997-01-01

    A method and apparatus for determining the type of a solution and the concentration of that solution by acoustic means. Generally stated, the method consists of: immersing a sound-focusing transducer within a first liquid-filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable-intensity sound signal from the transducer; measuring fundamental and multiple-harmonic sound signal amplitudes; and then comparing a plot of the specimen sound response with a known solution sound response, thereby determining the solution type and concentration.

  3. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    NASA Astrophysics Data System (ADS)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, we obtain the radial equation of motion of a tunneling particle. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. The ANVZ covariance method is thus extended to the study of tunneling radiation from higher-dimensional black holes.

  4. Absorption and scattering of light by nonspherical particles. [in atmosphere

    NASA Technical Reports Server (NTRS)

    Bohren, C. F.

    1986-01-01

    Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.

  5. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  6. DNA Barcode Sequence Identification Incorporating Taxonomic Hierarchy and within Taxon Variability

    PubMed Central

    Little, Damon P.

    2011-01-01

    For DNA barcoding to succeed as a scientific endeavor an accurate and expeditious query sequence identification method is needed. Although a global multiple-sequence alignment can be generated for some barcoding markers (e.g. COI, rbcL), not all barcoding markers are as structurally conserved (e.g. matK). Thus, algorithms that depend on global multiple-sequence alignments are not universally applicable. Some sequence identification methods that use local pairwise alignments (e.g. BLAST) are unable to accurately differentiate between highly similar sequences and are not designed to cope with hierarchic phylogenetic relationships or within-taxon variability. Here, I present a novel alignment-free sequence identification algorithm, BRONX, that accounts for observed within-taxon variability and hierarchic relationships among taxa. BRONX identifies short variable segments and corresponding invariant flanking regions in reference sequences. These flanking regions are used to score variable regions in the query sequence without the production of a global multiple-sequence alignment. By incorporating observed within-taxon variability into the scoring procedure, misidentifications arising from shared alleles/haplotypes are minimized. An explicit treatment of more inclusive terminals allows for separate identifications to be made for each taxonomic level and/or for user-defined terminals. BRONX performs better than all other methods when there is imperfect overlap between query and reference sequences (e.g. mini-barcode queries against a full-length barcode database). BRONX consistently produced better identifications at the genus level for all query types. PMID:21857897

  7. Is temperature the main cause of dengue rise in non-endemic countries? The case of Argentina

    PubMed Central

    2012-01-01

    Background Dengue cases have increased during the last decades, particularly in non-endemic areas, and Argentina was no exception in the southern transmission fringe. Although temperature rise has been blamed for this, human population growth, increased travel and inefficient vector control may also be implicated. The relative contributions of geographic, demographic and climatic variables to the occurrence of dengue cases were evaluated. Methods According to dengue history in the country, the study was divided into two decades, the first corresponding to the reemergence of the disease and the second including several epidemics. Annual dengue risk was modeled by a temperature-based mechanistic model as annual days of possible transmission. The spatial distribution of dengue occurrence was modeled as a function of the output of the mechanistic model and of climatic, geographic and demographic variables for both decades. Results According to the temperature-based model dengue risk increased between the two decades, and epidemics of the last decade coincided with high annual risk. Dengue spatial occurrence was best modeled by a combination of climatic, demographic and geographic variables, with province as a grouping factor. It was positively associated with days of possible transmission, human population number, population decline and distance to water bodies. When considered separately, the classification performance of demographic variables was higher than that of climatic and geographic variables. Conclusions Temperature, though useful to estimate annual transmission risk, does not fully describe the distribution of dengue occurrence at the country scale. Indeed, when taken separately, climatic variables performed worse than geographic or demographic variables. A combination of the three types was best for this task. PMID:22768874

  8. Informed Source Separation of Atmospheric and Surface Signal Contributions in Shortwave Hyperspectral Imagery using Non-negative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2015-12-01

    Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments, including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the NASA Hyperspectral Imager for the Coastal Ocean (HICO), and the National Ecological Observatory Network (NEON) Imaging Spectrometer.
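
    The core factorization step can be sketched with scikit-learn's stock NMF, shown below on an invented pixels-by-wavelengths matrix; the spectral/spatial constraints and library-spectra initialization described in the abstract go beyond this minimal example.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Invented data: each row of X is one pixel's non-negative spectrum,
    # mixed from a few underlying source spectra plus noise.
    rng = np.random.default_rng(1)
    n_pixels, n_bands, n_sources = 500, 120, 3
    true_spectra = rng.random((n_sources, n_bands))
    abundances = rng.random((n_pixels, n_sources))
    X = abundances @ true_spectra + 0.01 * rng.random((n_pixels, n_bands))

    # X ~ W @ H with W, H >= 0: W holds per-pixel source contributions,
    # H holds the recovered source spectra.
    model = NMF(n_components=n_sources, init='nndsvda', max_iter=500, random_state=0)
    W = model.fit_transform(X)
    H = model.components_
    print(W.shape, H.shape)   # (500, 3) (3, 120)
    ```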

  9. Upper-Division Student Difficulties with Separation of Variables

    ERIC Educational Resources Information Center

    Wilcox, Bethany R.; Pollock, Steven J.

    2015-01-01

    Separation of variables can be a powerful technique for solving many of the partial differential equations that arise in physics contexts. Upper-division physics students encounter this technique in multiple topical areas including electrostatics and quantum mechanics. To better understand the difficulties students encounter when utilizing the…

  10. Optimization and Validation of a Sensitive Method for HPLC-PDA Simultaneous Determination of Torasemide and Spironolactone in Human Plasma using Central Composite Design.

    PubMed

    Subramanian, Venkatesan; Nagappan, Kannappan; Sandeep Mannemala, Sai

    2015-01-01

    A sensitive, accurate, precise and rapid HPLC-PDA method was developed and validated for the simultaneous determination of torasemide and spironolactone in human plasma using design of experiments. A central composite design was used to optimize the method, with the content of acetonitrile, the concentration of buffer and the pH of the mobile phase as independent variables, while the retention factor of spironolactone, the resolution between torasemide and phenobarbitone, and the retention time of phenobarbitone were chosen as dependent variables. The chromatographic separation was achieved on a Phenomenex C(18) column with a mobile phase comprising 20 mM potassium dihydrogen orthophosphate buffer (pH 3.2) and acetonitrile (82.5:17.5, v/v) pumped at a flow rate of 1.0 mL min(-1). The method was validated according to USFDA guidelines in terms of selectivity, linearity, accuracy, precision, recovery and stability. The limit of quantitation values were 80 and 50 ng mL(-1) for torasemide and spironolactone, respectively. Furthermore, the sensitivity and simplicity of the method suggest its validity for routine clinical studies.
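
    For readers unfamiliar with central composite designs, the sketch below builds a coded three-factor design matrix (here read as acetonitrile content, buffer concentration, and mobile-phase pH) in plain NumPy; the rotatable axial distance and number of center points are generic textbook choices, not necessarily those of the study.

    ```python
    import itertools
    import numpy as np

    def central_composite(k=3, alpha=None, n_center=3):
        """Coded-unit central composite design for k factors: 2^k factorial
        points, 2k axial (star) points at +/-alpha, plus center replicates."""
        if alpha is None:
            alpha = (2 ** k) ** 0.25           # rotatable-design choice
        factorial = np.array(list(itertools.product([-1, 1], repeat=k)), float)
        axial = np.zeros((2 * k, k))
        for i in range(k):
            axial[2 * i, i] = -alpha
            axial[2 * i + 1, i] = alpha
        center = np.zeros((n_center, k))
        return np.vstack([factorial, axial, center])

    design = central_composite(k=3)
    print(design.shape)   # (17, 3): 8 factorial + 6 axial + 3 center runs
    ```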

  11. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    NASA Astrophysics Data System (ADS)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management, especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used; however, supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least squares regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water-extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean-centered and variance-scaled data of the predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance in projection (VIP) statistics were used to quantitatively assess the predictors most relevant for response variable estimation and then for variable selection (Andersen and Bro, 2010). PCA and SDA returned TOC and RFC as influential variables both on the sets of chemical and physical data analyzed separately and on the whole dataset (Stellacci et al., 2016). Highly weighted variables in PCA were also TEC, followed by K, and AC, followed by Pmac and BD, in the first PC (41.2% of total variance); Olsen P and HA-FA in the second PC (12.6%); and Ca in the third (10.6%) component. Variables enabling maximum discrimination among treatments for SDA were WEOC, on the whole dataset, and humic substances, followed by Olsen P, EC and clay, in the separate data analyses. The highest PLS-VIP statistics were recorded for Olsen P and Pmac, followed by TOC, TEC, pH and Mg for the chemical variables, and clay, RFC and AC for the physical variables. Results show that different methods may provide different rankings of the selected variables and that the presence of a response variable, in regressive techniques, may affect variable selection. Further investigation with different response variables and with multi-year datasets would allow better definition of the advantages and limits of single or combined approaches.
    Acknowledgment: The work was supported by the projects "BIOTILLAGE, approcci innovative per il miglioramento delle performances ambientali e produttive dei sistemi cerealicoli no-tillage", financed by PSR-Basilicata 2007-2013, and "DESERT, Low-cost water desalination and sensor technology compact module", financed by ERANET-WATERWORKS 2014.

    References: Andersen C.M. and Bro R., 2010. Variable selection in regression - a tutorial. Journal of Chemometrics, 24:728-737. Armenise et al., 2013. Developing a soil quality index to compare soil fitness for agricultural use under different managements in the Mediterranean environment. Soil and Tillage Research, 130:91-98. de Paul Obade et al., 2016. A standardized soil quality index for diverse field conditions. Science of the Total Environment, 541:424-434. Pulido Moncada et al., 2014. Data-driven analysis of soil quality indicators using limited data. Geoderma, 235:271-278. Stellacci et al., 2016. Comparison of different multivariate methods to select key soil variables for soil quality indices computation. XLV Congress of the Italian Society of Agronomy (SIA), Sassari, 20-22 September 2016.
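
    To make the PLS-VIP step concrete, here is a minimal sketch with scikit-learn's PLSRegression and the standard VIP formula; the study itself used the PLS procedure of SAS/STAT, and the data below are invented stand-ins for the soil variables and wheat yield.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def vip_scores(pls):
        """Variable Importance in Projection for a fitted PLSRegression,
        using the standard VIP formula."""
        W = pls.x_weights_                # (p, A) weight vectors
        T = pls.x_scores_                 # (n, A) latent scores
        Q = pls.y_loadings_               # (n_targets, A) y-loadings
        p, _ = W.shape
        ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)  # y-variance per component
        w_norm = W / np.linalg.norm(W, axis=0)
        return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

    # Invented stand-in: 30 "soil variables" predicting one response (yield)
    rng = np.random.default_rng(2)
    X = rng.standard_normal((60, 30))
    y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(60)
    pls = PLSRegression(n_components=2).fit(X, y)
    print(np.argsort(vip_scores(pls))[::-1][:5])  # most influential variables first
    ```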

  12. Statistical analysis of environmental variability within the CELSS breadboard project's biomass production chamber

    NASA Technical Reports Server (NTRS)

    Stutte, G. W.; Chetirkin, P. V.; Mackowiak, C. L.; Fortson, R. E.

    1993-01-01

    Variability in the aerial and root environments of NASA's Breadboard Project's Biomass Production Chamber (BPC) was determined. Data from two lettuce and two potato growouts were utilized. One growout of each crop was conducted prior to separating the upper and lower chambers; the other was subsequent to separation. There were little or no differences in pH, EC, or solution temperature between the upper and lower chamber or within a chamber. Variation in the aerial environment within a chamber was two to three times greater than variation between chambers for air temperature, relative humidity, and PPF. High variability in air velocity, relative to tray position, was observed. Separating the BPC had no effect on PPF, air velocity, solution temperature, pH, or EC. Separation reduced the gradient in air temperature and relative humidity between the upper and lower chambers, but increased the variability within a chamber. Variation between upper and lower chambers was within 5 percent of environmental set-points and of little or no physiological significance. In contrast, the variability within a chamber limits the capability of the BPC to generate statistically reliable data from individual tray treatments at this time.

  13. Risks for Early Substance Involvement Associated with Parental Alcoholism and Parental Separation in an Adolescent Female Cohort*

    PubMed Central

    Waldron, Mary; Vaughan, Ellen L.; Bucholz, Kathleen K.; Lynskey, Michael T.; Sartor, Carolyn E.; Duncan, Alexis E.; Madden, Pamela A.F.; Heath, Andrew C.

    2014-01-01

    Background We examined timing of substance involvement as a joint function of parental history of alcoholism and parental separation during childhood. Method Data were drawn from a large cohort of female like-sex twins [n = 613 African Ancestry (AA), n = 3550 European or other Ancestry (EA)]. Cox proportional hazards regression was conducted predicting age at first use of alcohol, first alcohol intoxication, first use and regular use of cigarettes, and first use of cannabis and other illicit drugs from dummy variables coding for parental alcoholism and parental separation. Propensity score analysis was also conducted comparing intact and separated families by predicted probability of parental separation. Results In EA families, increased risk of substance involvement was found in both alcoholic and separated families, particularly through ages 10 or 14 years, with risk to offspring from alcoholic separated families further increased. In AA families, associations with parental alcoholism and parental separation were weak and, with few exceptions, statistically nonsignificant. While propensity score findings confirmed unique risks observed in EA families, intact and separated AA families were poorly matched on risk factors presumed to predate parental separation, especially parental alcoholism, requiring cautious interpretation of AA survival-analytic findings. Conclusion For offspring of European ancestry, parental separation predicts early substance involvement that is not explained by parental alcoholism or associated family background characteristics. Additional research is needed to better characterize risks associated with parental separation in African American families. PMID:24647368

  14. Application of fractional derivative with exponential law to bi-fractional-order wave equation with frictional memory kernel

    NASA Astrophysics Data System (ADS)

    Cuahutenango-Barro, B.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.

    2017-12-01

    Analytical solutions of the wave equation with bi-fractional order and a frictional memory kernel of Mittag-Leffler type are obtained via the Caputo-Fabrizio fractional derivative in the Liouville-Caputo sense. Through the method of separation of variables and the Laplace transform we derive closed-form solutions and establish fundamental solutions. Special cases with homogeneous Dirichlet boundary conditions and nonhomogeneous initial conditions, as well as for the external force, are considered. Numerical simulations of the special solutions were performed and novel behaviors were observed.
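
    For orientation, the separation ansatz has the familiar pattern shown below for the classical (integer-order) wave equation; in the paper the time derivative is replaced by the Caputo-Fabrizio operator, but the splitting into spatial and temporal problems is analogous.

    ```latex
    % Classical analogue of the separation-of-variables step (illustrative only)
    u(x,t) = X(x)\,T(t)
    \quad\Longrightarrow\quad
    \frac{T''(t)}{c^{2}\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda,
    \qquad
    X'' + \lambda X = 0,
    \qquad
    T'' + \lambda c^{2} T = 0.
    ```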

  15. Evaluation of the sustainability of contrasted pig farming systems: economy.

    PubMed

    Ilari-Antoine, E; Bonneau, M; Klauke, T N; Gonzàlez, J; Dourmad, J Y; De Greef, K; Houwers, H W J; Fabrega, E; Zimmer, C; Hviid, M; Van der Oever, B; Edwards, S A

    2014-12-01

    The aim of this paper is to present an efficient tool for evaluating the economic part of the sustainability of pig farming systems. The selected tool, IDEA, was tested on a sample of farms from 15 contrasted systems in Europe. A statistical analysis was carried out to check the capacity of the indicators to illustrate the variability of the population and to analyze which of these indicators contributed the most towards it. The scores obtained for the farms were consistent with the reality of pig production; the distributions of the variables showed substantial variability across the sample. The principal component analysis and cluster analysis separated the sample into five subgroups, in which the six main indicators significantly differed, which underlines the robustness of the tool. The IDEA method proved to be easily comprehensible, requiring few initial variables and providing an efficient benchmarking system; all six indicators contributed to fully describing a varied and contrasted population.

  16. Benchtop Technologies for Circulating Tumor Cells Separation Based on Biophysical Properties

    PubMed Central

    Low, Wan Shi; Wan Abas, Wan Abu Bakar

    2015-01-01

    Circulating tumor cells (CTCs) are tumor cells that have detached from the primary tumor site and are transported via the circulatory system. The importance of CTCs as a prognostic biomarker was leveraged when multiple studies found that patients with 5 or more CTCs per 7.5 mL of blood have poor survival rates. Despite its clinical relevance, the isolation and characterization of CTCs can be quite challenging due to their large morphological variability and the rare presence of CTCs within the blood. Numerous methods have been employed and discussed in the literature for CTC separation. In this paper, we will focus on label-free CTC isolation methods, in which the biophysical and biomechanical properties of cells (e.g., size, deformability, and electrical properties) are exploited for CTC detection. To assess the present state of various isolation methods, key performance metrics such as capture efficiency, cell viability, and throughput will be reported. Finally, we discuss the challenges and future perspectives of CTC isolation technologies. PMID:25977918

  17. Benchtop technologies for circulating tumor cells separation based on biophysical properties.

    PubMed

    Low, Wan Shi; Wan Abas, Wan Abu Bakar

    2015-01-01

    Circulating tumor cells (CTCs) are tumor cells that have detached from the primary tumor site and are transported via the circulatory system. The importance of CTCs as a prognostic biomarker was leveraged when multiple studies found that patients with 5 or more CTCs per 7.5 mL of blood have poor survival rates. Despite its clinical relevance, the isolation and characterization of CTCs can be quite challenging due to their large morphological variability and the rare presence of CTCs within the blood. Numerous methods have been employed and discussed in the literature for CTC separation. In this paper, we will focus on label-free CTC isolation methods, in which the biophysical and biomechanical properties of cells (e.g., size, deformability, and electrical properties) are exploited for CTC detection. To assess the present state of various isolation methods, key performance metrics such as capture efficiency, cell viability, and throughput will be reported. Finally, we discuss the challenges and future perspectives of CTC isolation technologies.

  18. Parareal algorithms with local time-integrators for time fractional differential equations

    NASA Astrophysics Data System (ADS)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations due to the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time across processes. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse-grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
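
    The flavor of the parareal iteration, U_{k+1}^{n+1} = G(U_{k+1}^{n}) + F(U_k^{n}) - G(U_k^{n}), can be conveyed with a classical (non-fractional) scalar stand-in; the propagators and test equation below are illustrative assumptions, and the paper's mixed correction of auxiliary and solution variables is not reproduced.

    ```python
    import numpy as np

    # Stand-in problem du/dt = lam*u: coarse propagator G is one implicit
    # Euler step per interval, fine propagator F is m explicit Euler substeps.
    lam, T, N = -1.0, 2.0, 10
    dT = T / N
    m = 100

    def G(u):                            # coarse: one backward Euler step
        return u / (1 - lam * dT)

    def F(u):                            # fine: m explicit Euler substeps
        for _ in range(m):
            u = u + (dT / m) * lam * u
        return u

    U = np.empty(N + 1)
    U[0] = 1.0
    for n in range(N):                   # initial coarse sweep
        U[n + 1] = G(U[n])
    for k in range(5):                   # parareal corrections
        Fu = np.array([F(U[n]) for n in range(N)])
        Gu_old = np.array([G(U[n]) for n in range(N)])
        for n in range(N):
            U[n + 1] = G(U[n]) + Fu[n] - Gu_old[n]
    print(U[-1], np.exp(lam * T))        # converges toward the exact solution
    ```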

  19. Comment on "Applications of homogenous balanced principle on investigating exact solutions to a series of time fractional nonlinear PDEs" [Commun Nonlinear Sci Numer Simulat 47 (2017) 253-266]

    NASA Astrophysics Data System (ADS)

    Li, Xiangzheng

    2018-06-01

    A counterexample is given to show that the product rule for Caputo fractional derivatives does not hold except at a special point. The function-expansion method of variable separation proposed by Rui [Commun Nonlinear Sci Numer Simulat 47 (2017) 253-266], which is based on the product rule, must be modified accordingly.
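
    A standard worked instance of this failure, stated here from the well-known Caputo power rule D^alpha t^beta = Gamma(beta+1)/Gamma(beta+1-alpha) t^(beta-alpha) rather than from the comment itself: take f(t) = g(t) = t with 0 < alpha < 1.

    ```latex
    D^{\alpha}\!\left(t \cdot t\right) = \frac{2}{\Gamma(3-\alpha)}\,t^{2-\alpha},
    \qquad
    t\,D^{\alpha}t + t\,D^{\alpha}t = \frac{2}{\Gamma(2-\alpha)}\,t^{2-\alpha};
    % since \Gamma(3-\alpha) = (2-\alpha)\,\Gamma(2-\alpha), the two sides
    % agree only when \alpha = 1, so the Leibniz rule fails for 0 < \alpha < 1.
    ```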

  20. The use of functional data analysis to study variability in children's speech: Further data

    NASA Astrophysics Data System (ADS)

    Koenig, Laura L.; Lucero, Jorge C.

    2002-05-01

    Much previous research has reported increased token-to-token variability in children relative to adults, but the sources and implications of this variability remain matters of debate. Recently, functional data analysis has been used as a tool to gain greater insight into the nature of variability in children's and adults' speech data. In FDA, signals are time-normalized using a smooth function of time. The magnitude of the time-warping function provides an index of phasing (temporal) variability, and a separate index of amplitude variability is calculated from the time-normalized signal. Here, oral airflow data are analyzed from 5-year-olds, 10-year-olds, and adult women producing laryngeal and oral fricatives (/h, s, z/). The preliminary FDA results show that children generally have higher temporal and amplitude indices than adults, suggesting greater variability both in gestural timing and magnitude. However, individual patterns are evident in the relative magnitude of the two indices, and in which consonants show the highest values. The patterns of flow variability over time in /s/ are also explored as a method of inferring relative variability among laryngeal and oral gestures. [Work supported by NIH and CNPq, Brazil.]

  1. Clustering performance comparison using K-means and expectation maximization algorithms.

    PubMed

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are K-means and the expectation maximization (EM) algorithm. Linear regression analysis can be extended to a category-type dependent variable, and logistic regression achieves this using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
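
    A minimal side-by-side of the two algorithms on invented two-cluster data, using scikit-learn (the paper's red-wine features and the logistic post-classification step are not reproduced here):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    # Toy data: two well-separated Gaussian blobs in 2-D
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

    km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    em_labels = gm.predict(X)          # hard assignment from the EM posterior
    em_probs = gm.predict_proba(X)     # soft responsibilities, unlike K-means

    print(km_labels[:5], em_probs[:2].round(2))
    ```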

  2. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases and traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features; it is therefore very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach. Through our simulation study, we propose a collective feature selection approach that selects features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection. We chose the top-performing methods and selected the union of the resulting variables, based on a user-defined percentage of variants selected from each method, to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for high-effect-size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, including in low-effect-size datasets and across different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables and identify potential interacting variables associated with Body Mass Index (BMI) in ~44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). In this study, we were able to show through simulation studies that selecting variables using a collective feature selection approach helps to select true positive epistatic variables more frequently than applying any single feature selection method. We demonstrated the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
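
    The "union of top features" idea can be sketched as follows with stand-in selectors (an F-test plus random forest and gradient boosting importances); the paper's actual panel also included MDR, Ranger, and other methods, and the toy genotype data below are invented.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.feature_selection import f_classif

    def top_k_union(X, y, k):
        """Collect the top-k features from each selector and return
        the union of the selected indices."""
        scores = {
            'f_test': f_classif(X, y)[0],
            'rf': RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
            'gbm': GradientBoostingClassifier(random_state=0).fit(X, y).feature_importances_,
        }
        selected = set()
        for s in scores.values():
            selected |= set(np.argsort(s)[::-1][:k])   # top-k from each method
        return sorted(selected)

    rng = np.random.default_rng(4)
    X = rng.integers(0, 3, (200, 50)).astype(float)   # toy genotype matrix (0/1/2)
    y = (X[:, 0] * X[:, 1] > 1).astype(int)           # a crude epistatic signal
    print(top_k_union(X, y, k=5))
    ```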

  3. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has considerable potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we focus on this topic and explain that post-experimental variability and sources of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, and summarizing the advantages and drawbacks of each approach.

  4. Impact damage resistance of composite fuselage structure, part 1

    NASA Technical Reports Server (NTRS)

    Dost, E. F.; Avery, W. B.; Ilcewicz, L. B.; Grande, D. H.; Coxon, B. R.

    1992-01-01

    The impact damage resistance of laminated composite transport aircraft fuselage structures was studied experimentally. A statistically based designed experiment was used to examine numerous material, laminate, structural, and extrinsic (e.g., impactor type) variables. The relative importance and quantitative measure of the effect of each variable and variable interactions on responses including impactor dynamic response, visibility, and internal damage state were determined. The study utilized 32 three-stiffener panels, each with a unique combination of material type, material forms, and structural geometry. Two manufacturing techniques, tow placement and tape lamination, were used to build panels representative of potential fuselage crown, keel, and lower side-panel designs. Various combinations of impactor variables representing various foreign-object-impact threats to the aircraft were examined. Impacts performed at different structural locations within each panel (e.g., skin midbay, stiffener attaching flange, etc.) were considered separate parallel experiments. The relationship between input variables, measured damage states, and structural response to this damage are presented including recommendations for materials and impact test methods for fuselage structure.

  5. An efficient approach to understanding and predicting the effects of multiple task characteristics on performance.

    PubMed

    Richardson, Miles

    2017-04-01

    In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.

  6. Computational Study of Fluidic Thrust Vectoring using Separation Control in a Nozzle

    NASA Technical Reports Server (NTRS)

    Deere, Karen; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.

    2003-01-01

    A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat-shifting method of fluidic thrust vectoring. The structured-grid computational fluid dynamics code PAB3D was used to guide the design and analyze over 60 configurations. Nozzle design variables included cavity convergence angle, cavity length, fluidic injection angle, upstream minimum height, aft deck angle, and aft deck shape. All simulations were computed with a static freestream Mach number of 0.05, a nozzle pressure ratio of 3.858, and a fluidic injection flow rate equal to 6 percent of the primary flow rate. Results indicate that the recessed cavity enhances the throat-shifting method of fluidic thrust vectoring and allows for greater thrust-vector angles without compromising thrust efficiency.

  7. Combining exposure and effect modeling into an integrated probabilistic environmental risk assessment for nanoparticles.

    PubMed

    Jacobs, Rianne; Meesters, Johannes A J; Ter Braak, Cajo J F; van de Meent, Dik; van der Voet, Hilko

    2016-12-01

    There is a growing need for good environmental risk assessment of engineered nanoparticles (ENPs). Environmental risk assessment of ENPs has been hampered by lack of data and knowledge about ENPs, their environmental fate, and their toxicity. This leads to uncertainty in the risk assessment. To deal with uncertainty in the risk assessment effectively, probabilistic methods are advantageous. In the present study, the authors developed a method to model both the variability and the uncertainty in environmental risk assessment of ENPs. This method is based on the concentration ratio, the ratio of the exposure concentration to the critical effect concentration, with both concentrations considered to be random. In this method, variability and uncertainty are modeled separately so as to allow the user to see which part of the total variation in the concentration ratio is attributable to uncertainty and which part is attributable to variability. The authors illustrate the use of the method with a simplified aquatic risk assessment of nano-titanium dioxide. The authors' method allows a more transparent risk assessment and can also direct further environmental and toxicological research to the areas in which it is most needed. Environ Toxicol Chem 2016;35:2958-2967. © 2016 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
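
    In spirit, keeping the two sources of variation separate leads to a two-dimensional (nested) Monte Carlo like the sketch below, where the outer loop draws uncertain parameters and the inner loop draws variability; all distributions and numbers are invented for illustration and are not the paper's nano-TiO2 case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_outer, n_inner = 200, 1000      # outer: uncertainty; inner: variability

    risk_fractions = []
    for _ in range(n_outer):
        # Outer loop: one plausible parameter set (epistemic uncertainty)
        mu_exposure = rng.normal(1.0, 0.2)
        mu_effect = rng.normal(2.0, 0.3)
        # Inner loop: variability across environments/organisms
        exposure = rng.lognormal(mu_exposure, 0.5, n_inner)
        effect = rng.lognormal(mu_effect, 0.4, n_inner)
        cr = exposure / effect                        # concentration ratio
        risk_fractions.append(np.mean(cr > 1.0))      # fraction at risk

    # Spread across outer draws reflects uncertainty about the risk fraction
    print(np.percentile(risk_fractions, [5, 50, 95]))
    ```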

  8. Evaluating the efficacy of DNA differential extraction methods for sexual assault evidence.

    PubMed

    Klein, Sonja B; Buoncristiani, Martin R

    2017-07-01

    Analysis of sexual assault evidence, often a mixture of spermatozoa and victim epithelial cells, represents a significant portion of a forensic DNA laboratory's case load. Successful genotyping of sperm DNA from these mixed cell samples, particularly with low amounts of sperm, depends on maximizing sperm DNA recovery and minimizing non-sperm DNA carryover. For evaluating the efficacy of the differential extraction, we present a method which uses a Separation Potential Ratio (SPRED) to consider both sperm DNA recovery and non-sperm DNA removal as variables for determining separation efficiency. In addition, we describe how the ratio of male-to-female DNA in the sperm fraction may be estimated by using the SPRED of the differential extraction method in conjunction with the estimated ratio of male-to-female DNA initially present on the mixed swab. This approach may be useful for evaluating or modifying differential extraction methods, as we demonstrate by comparing experimental results obtained from the traditional differential extraction and the Erase Sperm Isolation Kit (PTC©) procedures. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Regression calibration for models with two predictor variables measured with error and their interaction, using instrumental variables and longitudinal data.

    PubMed

    Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan

    2014-02-10

    Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors and, separately, on methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors), outdoor fine particulate matter and cigarette smoke, and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome), in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
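
    Stripped of the longitudinal, mixed-model, and interaction structure, the basic idea of regression calibration with an instrument reduces to the two-stage sketch below (cross-sectional, single predictor, invented data); the estimators derived in the paper are considerably more general.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 500
    x = rng.normal(0, 1, n)                 # true exposure (unobserved)
    iv = x + rng.normal(0, 0.5, n)          # instrumental variable
    w = x + rng.normal(0, 0.8, n)           # unbiased surrogate, error-prone
    y = 2.0 * x + rng.normal(0, 1, n)       # outcome

    # Stage 1: calibrate the surrogate on the instrument, i.e., estimate E[x | iv]
    A = np.column_stack([np.ones(n), iv])
    beta1 = np.linalg.lstsq(A, w, rcond=None)[0]
    x_hat = A @ beta1                       # calibrated predictor

    # Stage 2: outcome model with the calibrated predictor
    B = np.column_stack([np.ones(n), x_hat])
    beta2 = np.linalg.lstsq(B, y, rcond=None)[0]
    print(beta2[1])   # approx 2.0; naive regression on w would be attenuated
    ```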

  10. Perturbed invariant subspaces and approximate generalized functional variable separation solution for nonlinear diffusion-convection equations with weak source

    NASA Astrophysics Data System (ADS)

    Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng

    2018-03-01

    In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.

  11. A Note on Separation of Variables

    ERIC Educational Resources Information Center

    Cherniavsky, Yonah

    2011-01-01

    We write down very simple, necessary and sufficient conditions for the additive and multiplicative separability of variables: v(x_1, x_2, ..., x_n) = g_1(x_1) + g_2(x_2) + ... + g_n(x_n) or u(x_1, x_2, ..., x_n) =…
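
    Since the abstract is truncated, it is worth recalling the standard form such conditions take (stated here from general knowledge, not quoted from the note): for smooth v and positive u,

    ```latex
    \begin{align*}
      v = g_1(x_1) + \cdots + g_n(x_n)
      &\iff
      \frac{\partial^{2} v}{\partial x_i \, \partial x_j} = 0
      \quad \text{for all } i \neq j,\\
      u = f_1(x_1)\,f_2(x_2)\cdots f_n(x_n)
      &\iff
      \frac{\partial^{2} \ln u}{\partial x_i \, \partial x_j} = 0
      \quad \text{for all } i \neq j.
    \end{align*}
    ```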

  12. Age-class separation of blue-winged ducks

    USGS Publications Warehouse

    Hohman, W.L.; Moore, J.L.; Twedt, D.J.; Mensik, John G.; Logerwell, E.

    1995-01-01

    Accurate determination of age is of fundamental importance to population and life history studies of waterfowl and their management. Therefore, we developed quantitative methods that separate adult and immature blue-winged teal (Anas discors), cinnamon teal (A. cyanoptera), and northern shovelers (A. clypeata) during spring and summer. To assess suitability of discriminant models using 9 remigial measurements, we compared model performance (% agreement between predicted age and age assigned to birds on the basis of definitive cloacal or rectrix characteristics) in different flyways (Mississippi and Pacific) and between years (1990-91 and 1991-92). We also applied age-classification models to wings obtained from U.S. Fish and Wildlife Service harvest surveys in the Mississippi and Central-Pacific flyways (wing-bees) for which age had been determined using qualitative characteristics (i.e., remigial markings, shape, or wear). Except for male northern shovelers, models correctly aged less than 90% (range 70-86%) of blue-winged ducks. Model performance varied among species and differed between sexes and years. Proportions of individuals that were correctly aged were greater for males (range 63-86%) than females (range 39-69%). Models for northern shovelers performed better in flyway comparisons within year (1991-92, La. model applied to Calif. birds, and Calif. model applied to La. birds: 90 and 94% for M, and 89 and 76% for F, respectively) than in annual comparisons within the Mississippi Flyway (1991-92 model applied to 1990-91 data: 79% for M, 50% for F). Exclusion of measurements that varied by flyway or year did not improve model performance. Quantitative methods appear to be of limited value for age separation of female blue-winged ducks. Close agreement between predicted age and age assigned to wings from the wing-bees suggests that qualitative and quantitative methods may be equally accurate for age separation of male blue-winged ducks. We interpret annual and flyway differences in remigial measurements and reduced performance of age classification models as evidence of high variability in size of blue-winged ducks' remiges. Variability in remigial size of these and other small-bodied waterfowl may be related to nutrition during molt.

  13. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework enabling the assessment and comparison of both dynamical and statistical downscaling methods has been developed. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997 - 2008. This allows investigation of the isolated skill of downscaling methods without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of (i) correlation matrices, (ii) pairwise joint threshold exceedances, and (iii) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related, for instance, to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
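
    As a toy version of the correlation-versus-distance diagnostic, the sketch below estimates a decorrelation length as the smallest inter-station distance at which correlation drops below 1/e; VALUE's actual diagnostics fit decay curves and estimate degrees of freedom, and all data here are synthetic.

    ```python
    import numpy as np

    def decorrelation_length(coords, values, threshold=1.0 / np.e):
        """Crude estimate of the distance at which inter-station
        correlation first drops below a threshold (a sketch only)."""
        n = coords.shape[0]
        corr = np.corrcoef(values.T)             # station-by-station correlations
        dists, corrs = [], []
        for i in range(n):
            for j in range(i + 1, n):
                dists.append(np.linalg.norm(coords[i] - coords[j]))
                corrs.append(corr[i, j])
        dists, corrs = np.array(dists), np.array(corrs)
        order = np.argsort(dists)
        below = corrs[order] < threshold
        return dists[order][below][0] if below.any() else np.inf

    # Synthetic network: 30 stations with correlation decaying over ~300 km
    rng = np.random.default_rng(7)
    coords = rng.uniform(0, 1000, (30, 2))                      # km
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    cov = np.exp(-d / 300.0)                                    # exponential kernel
    values = rng.multivariate_normal(np.zeros(30), cov, size=365)
    print(decorrelation_length(coords, values))
    ```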

  14. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may need to be estimated from the mixed-stock data as well. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, which simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased and more uncertain than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved reliability of mixing proportion estimates, but led to non-linear responses in baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. This method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models, within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
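
    When the baseline signatures are known, maximum-likelihood estimation of the mixing proportions reduces to a small EM iteration, sketched below with invented Gaussian nursery signatures; the ML-MM evaluated in the paper additionally handles unknown signatures and source counts.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # Invented baselines: three "nursery" Gaussians in a 2-D signature space
    rng = np.random.default_rng(8)
    means = [np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([1.0, 4.0])]
    covs = [np.eye(2) * 0.5] * 3
    true_pi = np.array([0.5, 0.3, 0.2])

    # Simulate a mixed stock of 400 fish
    comp = rng.choice(3, size=400, p=true_pi)
    X = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comp])

    # EM over mixing proportions only (baselines held fixed)
    dens = np.column_stack([multivariate_normal(m, c).pdf(X)
                            for m, c in zip(means, covs)])   # (n, K) densities
    pi = np.full(3, 1 / 3)
    for _ in range(200):
        resp = dens * pi
        resp /= resp.sum(axis=1, keepdims=True)   # E-step: responsibilities
        pi = resp.mean(axis=0)                    # M-step: update proportions
    print(pi.round(2))   # close to true_pi
    ```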

  15. The cumulative effects of forest disturbance and climate variability on streamflow components in a large forest-dominated watershed

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Wei, Xiaohua; Zhang, Mingfang; Liu, Wenfei; Giles-Hansen, Krysta; Wang, Yi

    2018-02-01

    Assessing how forest disturbance and climate variability affect streamflow components is critical for watershed management, ecosystem protection, and engineering design. Previous studies have mainly evaluated the effects of forest disturbance on total streamflow, rarely with attention given to its components (e.g., base flow and surface runoff), particularly in large watersheds (>1000 km2). In this study, the Upper Similkameen River watershed (1810 km2), an international watershed situated between Canada and the USA, was selected to examine how forest disturbance and climate variability interactively affect total streamflow, baseflow, and surface runoff. Baseflow was separated using a combination of the recursive digital filter method and the conductivity mass balance method. Time series analysis and modified double mass curves were then employed to quantitatively separate the relative contributions of forest disturbance and climate variability to each streamflow component. Our results showed that average annual baseflow and baseflow index (baseflow/streamflow) were 113.3 ± 35.6 mm year-1 and 0.27 for 1954-2013, respectively. Forest disturbance increased annual streamflow, baseflow, and surface runoff by 27.7 ± 13.7 mm, 7.4 ± 3.6 mm, and 18.4 ± 12.9 mm, respectively, with its relative contributions to the changes in the respective streamflow components being 27.0 ± 23.0%, 29.2 ± 23.1%, and 25.7 ± 23.4%. In contrast, climate variability decreased them by 74.9 ± 13.7 mm, 17.9 ± 3.6 mm, and 53.3 ± 12.9 mm, respectively, with relative contributions of 73.0 ± 23.0%, 70.8 ± 23.1% and 73.1 ± 23.4%. Although the two factors acted in opposite directions, the impacts of climate variability on annual streamflow, baseflow, and surface runoff were of a much greater magnitude than forest disturbance impacts. This study has important implications for the protection of aquatic habitat, engineering design, and watershed planning in the context of future forest disturbance and climate change.
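
    The two baseflow-separation ingredients named above have compact textbook forms, sketched here in Python; the filter parameter, end-member conductivities, and synthetic series are illustrative assumptions, not the study's calibrated values.

    ```python
    import numpy as np

    def lyne_hollick_baseflow(q, alpha=0.925):
        """One forward pass of the Lyne-Hollick recursive digital filter;
        operational use typically applies multiple passes."""
        quick = np.zeros_like(q)
        for t in range(1, len(q)):
            quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
            quick[t] = max(quick[t], 0.0)          # quickflow cannot be negative
        return np.clip(q - quick, 0.0, q)          # baseflow bounded by discharge

    def cmb_baseflow(q, ec, ec_baseflow, ec_runoff):
        """Conductivity mass balance: partition flow by the electrical
        conductivity of the two end members (values are site-specific)."""
        frac = (ec - ec_runoff) / (ec_baseflow - ec_runoff)
        return q * np.clip(frac, 0.0, 1.0)

    # Hypothetical daily discharge (mm/day) and conductivity (uS/cm) series
    rng = np.random.default_rng(9)
    q = 5 + np.abs(np.cumsum(rng.normal(0, 0.5, 365)))
    ec = 300 - 20 * (q - q.mean()) / q.std() + rng.normal(0, 5, 365)
    print(lyne_hollick_baseflow(q)[:5], cmb_baseflow(q, ec, 350, 100)[:5])
    ```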

  16. Method for chemically analyzing a solution by acoustic means

    DOEpatents

    Beller, L.S.

    1997-04-22

    A method and apparatus are disclosed for determining a type of solution and the concentration of that solution by acoustic means. Generally stated, the method consists of: immersing a sound focusing transducer within a first liquid filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable intensity sound signal from the transducer; measuring fundamental and multiple harmonic sound signal amplitudes; and then comparing a plot of a specimen sound response with a known solution sound response, thereby determining the solution type and concentration. 10 figs.

  17. Method for evaluating wind turbine wake effects on wind farm performance

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1985-01-01

    A method of testing the performance of a cluster of wind turbine units and data analysis equations are presented which together form a simple and direct procedure for determining the reduction in energy output caused by the wake of an upwind turbine. This method appears to solve the problems presented by data scatter and wind variability. Test data from the three-unit Mod-2 wind turbine cluster at Goldendale, Washington, are analyzed to illustrate the application of the proposed method. In this sample case the reduction in energy was found to be about 10 percent when the Mod-2 units were separated by a distance equal to seven diameters and winds were below rated speed.

  18. Prescription, Dispensation, and Generic Medicine Replacement Ratios: Influence on Japanese Medicine Costs

    PubMed Central

    Yokoi, Masayuki; Tashiro, Takao

    2016-01-01

    This study used publicly available data to examine the effect of the separation of dispensing and prescribing medicines between pharmacists in pharmacies and doctors in medical institutions (the separation system) and of the generic medicine replacement ratio on the cost of various medicines in Japanese prefectures. For Japanese medical institutions, participation in the separation system is optional. Consequently, the expansion rate of the separation system for each administrative district is highly variable. In our multiple regression analysis, the dependent variables were the daily costs of medicines, specifically total, internal, external, and injection medicines, as well as medical devices, and the independent variables were the expansion rate of the separation system and the generic medicine replacement ratio. The expansion rate of the separation system showed a significant negative partial correlation with the daily costs of total, internal, and injection medicines as well as medical devices. Moreover, the rate of replacing brand-name medicines with generic medicines showed a significant negative partial correlation with the daily costs of total and internal medicines. External and injection medicines and medical devices showed no such correlation, because few or no generic products of these types were sold in the Japanese market. Expansion of the separation system was effective in reducing medicine costs except in the case of external medicines, suggesting that the cost-efficiency effect of the separation system does not hold in every case. PMID:26234979

  19. Experimental Investigation of Normal Shock Boundary-Layer Interaction with Hybrid Flow Control

    NASA Technical Reports Server (NTRS)

    Vyas, Manan A.; Hirt, Stefanie M.; Anderson, Bernhard H.

    2012-01-01

    Hybrid flow control, a combination of micro-ramps and micro-jets, was experimentally investigated in the 15x15 cm Supersonic Wind Tunnel (SWT) at the NASA Glenn Research Center. Full factorial, a design of experiments (DOE) method, was used to develop a test matrix with variables such as inter-ramp spacing, ramp height and chord length, and micro-jet injection flow ratio. A total of 17 configurations were tested with various parameters to meet the DOE criteria. In addition to boundary-layer measurements, oil flow visualization was used to qualitatively understand shock induced flow separation characteristics. The flow visualization showed the normal shock location, size of the separation, path of the downstream moving counter-rotating vortices, and corner flow effects. The results show that hybrid flow control demonstrates promise in reducing the size of shock boundary-layer interactions and resulting flow separation by means of energizing the boundary layer.
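
    A full factorial design crosses every level of every factor. A minimal sketch of generating such a test matrix follows; the factor names echo the abstract, but the levels are hypothetical, since the values actually tested are not listed.

      from itertools import product

      # Hypothetical levels for the four factors named in the abstract.
      factors = {
          "ramp_spacing": [1, 2, 3],
          "ramp_height": [1, 2],
          "chord_length": [1, 2],
          "jet_flow_ratio": [0.0, 0.5, 1.0],
      }

      # Full factorial: one run for every combination of levels.
      names = list(factors)
      matrix = [dict(zip(names, combo)) for combo in product(*factors.values())]
      print(len(matrix), "runs")  # 3 * 2 * 2 * 3 = 36 runs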

  20. Blood pressure variability in man: its relation to high blood pressure, age and baroreflex sensitivity.

    PubMed

    Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A

    1980-12-01

    1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
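
    The variability indices described in point 1 are simple to compute. Below is a minimal sketch of the half-hour averaging scheme, assuming a uniformly sampled mean-arterial-pressure record.

      import numpy as np

      def variability_indices(map_series):
          # Split a 24 h record into 48 half-hour blocks, compute the standard
          # deviation and variation coefficient per block, then average the
          # per-block statistics as described in the abstract.
          blocks = np.array_split(np.asarray(map_series, float), 48)
          sds = [b.std(ddof=1) for b in blocks]
          cvs = [100.0 * b.std(ddof=1) / b.mean() for b in blocks]
          return np.mean(sds), np.mean(cvs)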

  1. Highly Selective and Considerable Subcritical Butane Extraction to Separate Abamectin in Green Tea.

    PubMed

    Zhang, Yating; Gu, Lingbiao; Wang, Fei; Kong, Lingjun; Pang, Huili; Qin, Guangyong

    2017-06-01

    We carried out subcritical butane extraction to separate abamectin from tea leaves. Four parameters, namely extraction temperature, extraction time, number of extraction cycles, and solid-liquid ratio, were studied and optimized through response surface methodology with a design matrix developed by Box-Behnken. Seventeen experiments with three factors at three levels were employed to investigate the effect of these parameters on the extraction of abamectin. In addition, catechins, theanine, caffeine, and aroma components were determined by high-performance liquid chromatography and gas chromatography-mass spectrometry to evaluate tea quality before and after the extraction. The results showed that extraction temperature was the most influential parameter. The optimal extraction conditions were as follows: extraction temperature, 42°C; one extraction cycle of 30 min; and solid-liquid ratio, 1:10. Under these conditions, the separation efficiency of abamectin reached 93.95%. Notably, the loss of tea constituents was low: damage to aroma components was negligible, catechins decreased by only 0.7%-13.1%, and caffeine and theanine decreased by 1.81% and 2.6%, respectively. These results suggest that subcritical butane dissolves lipid-soluble pesticides and, because most pesticide residues adhere to the leaf surface, the method is a practical and promising way to separate abamectin from green tea.

  2. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
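
    As a rough illustration of the kind of synthetic data used in the simulated study, the following Euler-Maruyama sketch simulates a one-compartment model with first-order input and Michaelis-Menten elimination, with a diffusion term on the concentration state standing in for model uncertainty. All parameter values are illustrative, not those of the paper.

      import numpy as np

      def simulate_sde_pk(ka=1.0, V=1.0, vmax=2.0, km=0.5, dose=10.0,
                          sigma=0.1, dt=0.01, t_end=10.0, seed=0):
          rng = np.random.default_rng(seed)
          n = int(t_end / dt)
          a, c = dose, 0.0                 # depot amount, plasma concentration
          traj = np.empty(n)
          for i in range(n):
              dw = rng.normal(0.0, np.sqrt(dt))     # Wiener increment
              a += -ka * a * dt                     # first-order input depot
              c += (ka * a / V - vmax * c / (km + c)) * dt + sigma * dw
              c = max(c, 0.0)                       # keep concentration physical
              traj[i] = c
          return traj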

  3. The contribution of local and transport processes to phytoplankton biomass variability over different timescales in the Upper James River, Virginia

    NASA Astrophysics Data System (ADS)

    Qin, Qubin; Shen, Jian

    2017-09-01

    Although both local processes (photosynthesis, respiration, grazing, and settling) and transport processes (advective transport and diffusive transport) significantly affect local phytoplankton dynamics, it is difficult to separate their contributions and to investigate the relative importance of each process to the local variability of phytoplankton biomass over different timescales. A method using the transport rate is introduced to quantify the contribution of transport processes. By combining the time-varying transport rate and high-frequency observed chlorophyll a data, we can explicitly examine the impact of local and transport processes on phytoplankton biomass over a range of timescales from hourly to annually. For the Upper James River, results show that the relative importance of local and transport processes differs across timescales. Local processes dominate phytoplankton variability on daily to weekly timescales, whereas the contribution of transport processes increases on seasonal to annual timescales and reaches equilibrium with local processes. With the use of the transport rate and high-frequency chlorophyll a data, a method similar to the open-water oxygen method for metabolism is also presented to estimate phytoplankton primary production.

  4. A fluorescence-based method for rapid and direct determination of polybrominated diphenyl ethers in water

    DOE PAGES

    Shan, Huimei; Liu, Chongxuan; Wang, Zheming; ...

    2015-01-01

    A new method was developed for rapid and direct measurement of polybrominated diphenyl ethers (PBDEs) in aqueous samples using fluorescence spectroscopy. The fluorescence spectra of tri- to deca-BDE (BDE 28, 47, 99, 153, 190, and 209) commonly found in the environment were measured at variable emission and excitation wavelengths. The results revealed that the PBDEs have distinct fluorescence spectral profiles and peak positions that can be exploited to identify these species and determine their concentrations in aqueous solutions. The detection limits as determined in deionized water spiked with PBDEs are 1.71-5.82 ng/L for BDE 28, BDE 47, BDE 190, and BDE 209 and 45.55-69.95 ng/L for BDE 99 and BDE 153. The effects of environmental variables including pH, humic substance, and groundwater chemical composition on PBDE measurements were also investigated. These environmental variables affected fluorescence intensity, but their effect can be corrected through linear additivity and separation of spectral signal contributions. Compared with conventional GC-based analytical methods, the fluorescence spectroscopy method is more efficient as it only uses a small amount of sample (2-4 mL), avoids lengthy concentration and extraction steps, and has a low detection limit of a few ng/L.
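
    The linear-additivity correction mentioned above lends itself to a simple least-squares formulation. A minimal sketch, assuming reference spectra have been measured for each pure congener (names and shapes here are illustrative):

      import numpy as np
      from scipy.optimize import nnls

      def unmix(S, y):
          # S: (n_wavelengths, n_components) matrix of reference spectra;
          # y: measured spectrum of the mixed sample. Non-negative least
          # squares recovers concentration-scaled weights under the
          # linear-additivity assumption.
          weights, residual = nnls(S, y)
          return weights, residual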

  5. A reduced successive quadratic programming strategy for errors-in-variables estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.

    Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.

  6. A new technique for spectrophotometric determination of pseudoephedrine and guaifenesin in syrup and synthetic mixture.

    PubMed

    Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam

    2011-05-01

    The predictive accuracy of different chemometric methods was compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formulation containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of derivative spectra to resolve the overlapping spectra of chlorpheniramine maleate was evaluated when multivariate methods were adopted for the analysis of two-component mixtures without using any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of real samples when calibration models from derivative spectra were used. The proposed method is simple and rapid, requires no preliminary separation steps, and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
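
    For orientation, a PLS2 calibration of the kind described can be set up in a few lines. This sketch uses scikit-learn with placeholder arrays; the number of latent components and the data themselves are illustrative.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      X = np.random.rand(20, 100)   # rows: samples; columns: (derivative) spectra
      Y = np.random.rand(20, 2)     # known PDP and GFN concentrations
      model = PLSRegression(n_components=3).fit(X, Y)  # PLS2: both analytes jointly
      predicted = model.predict(X)  # predicted concentrations for new spectra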

  7. Towards an Intellectual Component of Joint Doctrine: The Philosophy and Practice of Experimental Intelligence

    DTIC Science & Technology

    2002-05-13

    alternative: feedback from the environment. This was Darwin’s great insight, that an agent can improve its internal models without any paranormal ...identifying the variables of war and establishing their interrelations.”26 Clausewitz and Schneider considered a critical analysis of history as the only...to separate the enduring principles from the accidental anomalies. This critical analysis of history is the method that comprises Dewey’s pattern of

  8. Using Variable-Length Aligned Fragment Pairs and an Improved Transition Function for Flexible Protein Structure Alignment.

    PubMed

    Cao, Hu; Lu, Yonggang

    2017-01-01

    With the rapid growth in the number of known protein 3D structures, efficiently comparing protein structures has become an essential and challenging problem in computational structural biology. Many protein structure alignment methods have been developed. Among them, flexible structure alignment methods are superior to rigid ones in identifying structure similarities between proteins that have undergone conformational changes. Methods based on aligned fragment pairs (AFPs) also have a particular advantage over other approaches in balancing global and local structure similarities. Accordingly, we propose a new flexible protein structure alignment method based on variable-length AFPs. Compared with other methods, the proposed method has three main advantages. First, it is based on variable-length AFPs: the length of each AFP is determined separately to maximally represent a locally similar structure fragment, which reduces the number of AFPs. Second, it uses local coordinate systems, which simplify the computation at each step of the expansion of AFPs during AFP identification. Third, it decreases the number of twists by rewarding alignments in which nonconsecutive AFPs share the same transformation, realized by dynamic programming with an improved transition function. The experimental data show that, compared with FlexProt, FATCAT, and FlexSnap, the proposed method achieves comparable results while introducing fewer twists. It also generates results similar to those of the FATCAT method in much less running time due to the reduced number of AFPs.

  9. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when they provide similar measures notwithstanding random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that (correlated-)errors-in-variables regressions should not be avoided in method-comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method-comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
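
    For reference, the classical Bland-Altman agreement interval discussed above takes only a few lines to compute; tolerance and predictive intervals additionally account for estimation uncertainty and are wider. A minimal sketch of the basic agreement limits:

      import numpy as np

      def bland_altman_limits(x, y):
          # Agreement interval: bias +/- 1.96 standard deviations of the
          # paired differences between the two measurement methods.
          d = np.asarray(x, float) - np.asarray(y, float)
          bias, sd = d.mean(), d.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)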

  10. Identification of ecological thresholds from variations in phytoplankton communities among lakes: contribution to the definition of environmental standards.

    PubMed

    Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc

    2016-04-01

    In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables, including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared to be a powerful method, as a first exploratory step, for detecting ecological thresholds at large spatial scales. The thresholds identified here must be reinforced by separate analysis of other aquatic communities and may then be used to set protective environmental standards after consideration of natural variability among lakes.

  11. Improved geometric variables for predicting disturbed flow at the normal carotid bifurcation

    NASA Astrophysics Data System (ADS)

    Bijari, Payam B.; Antiga, Luca; Steinman, David A.

    2011-03-01

    Recent work from our group has shown the primacy of bifurcation area ratio and tortuosity in determining the amount of disturbed flow at the carotid bifurcation, believed to be a local risk factor for carotid atherosclerosis. We have also presented fast and reliable methods for extracting geometry from routine 3D contrast-enhanced magnetic resonance angiography, a necessary step toward large-scale trials of such local risk factors. In the present study, we refine our original geometric variables to better reflect the underlying fluid mechanical principles. Flaring of the bifurcation, which leads to flow separation, is defined by the maximum relative expansion of the common carotid artery (CCA) proximal to the bifurcation apex. The beneficial effect of curvature on flow inertia, via its suppression of flow separation, is now characterized by the tortuosity of the CCA as it enters the flare region. Based on data from 50 normal carotid bifurcations, multiple linear regression of these new independent geometric predictors against the dependent disturbed-flow burden reveals adjusted R2 values approaching 0.5, better than the values closer to 0.3 achieved using the original variables. The excellent scan-rescan reproducibility demonstrated for our earlier geometric variables is preserved for the new definitions. Improved prediction of disturbed flow by robust and reproducible vascular geometry offers a practical pathway to large-scale studies of local risk factors in atherosclerosis.

  12. Psychological Separation, Ethnic Identity and Adjustment in Chicano/Latinos.

    ERIC Educational Resources Information Center

    Rodriguez, Ester R.; Bernstein, Bianca L.

    This study examined the relationship between psychological separation and college adjustment in a Chicano/Latino sample, a group which has traditionally not valued psychological separation (N=137). Ethnic identity as a moderator variable was also explored. The Psychological Separation Inventory, Student Adjustment to College Questionnaire, and the…

  13. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  14. The El Nino/Southern Oscillation and Future Soybean Prices

    NASA Technical Reports Server (NTRS)

    Keppenne, C.

    1993-01-01

    Recently, it was shown that the application of a method combining singular spectrum analysis (SSA) and the maximum entropy method to univariate indicators of the coupled ocean-atmosphere El Nino/Southern Oscillation (ENSO) phenomenon can be helpful in determining whether an El Nino (EN) or La Nina (LN) event will occur. SSA - a variant of principal component analysis applied in the time domain - filters out variability unrelated to ENSO and separates the quasi-biennial (QB), two-to-three year variability, from a lower-frequency (LF) four-to-six year EN-LN cycle; the total variance associated with ENSO combines the QB and LF modes. ENSO has been known to affect weather conditions over much of the globe. For example, EN events have been connected with unusually rainy weather over the Central and Western US, while the opposite phases of the oscillation (LN) have been plausibly associated with extreme dry conditions over much of the same geographical area...

  15. Inferring time derivatives including cell growth rates using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta

    2016-12-01

    Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
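
    Because differentiation is a linear operator, the derivative of a Gaussian process is again a Gaussian process, and its posterior mean is a small linear-algebra exercise. A minimal sketch for the first derivative under an RBF kernel; the hyperparameters are illustrative, and the error estimation emphasized in the abstract is omitted.

      import numpy as np

      def gp_derivative_mean(xs, x, y, length=1.0, noise=1e-2):
          # Posterior mean of f'(xs): d k(xs, x)/d xs @ (K + noise*I)^-1 y
          x, y, xs = (np.asarray(v, float) for v in (x, y, xs))
          K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / length) ** 2)
          alpha = np.linalg.solve(K + noise * np.eye(len(x)), y)
          diff = xs[:, None] - x[None, :]
          dk = -(diff / length**2) * np.exp(-0.5 * (diff / length) ** 2)
          return dk @ alpha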

  16. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  17. Comparison of three methods for registration of abdominal/pelvic volume data sets from functional-anatomic scans

    NASA Astrophysics Data System (ADS)

    Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.

    2000-06-01

    The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D - 2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. Resulting isosurfaces are fused and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.

  18. Comparison of hydrochemical tracers to estimate source contributions to peak flow in a small, forested, headwater catchment

    USGS Publications Warehouse

    Rice, Karen C.; Hornberger, George M.

    1998-01-01

    Three-component (throughfall, soil water, groundwater) hydrograph separations at peak flow were performed on 10 storms over a 2-year period in a small forested catchment in north-central Maryland using an iterative and an exact solution. Seven pairs of tracers (deuterium and oxygen 18, deuterium and chloride, deuterium and sodium, deuterium and silica, chloride and silica, chloride and sodium, and sodium and silica) were used for three-component hydrograph separation for each storm at peak flow to determine whether or not the assumptions of hydrograph separation routinely can be met, to assess the adequacy of some commonly used tracers, to identify patterns in hydrograph-separation results, and to develop conceptual models for the patterns observed. Results of the three-component separations were not always physically meaningful, suggesting that assumptions of hydrograph separation had been violated. Uncertainties in solutions to equations for hydrograph separations were large, partly as a result of violations of assumptions used in deriving the separation equations and partly as a result of improper identification of chemical compositions of end-members. Results of three-component separations using commonly used tracers were widely variable. Consistent patterns in the amount of subsurface water contributing to peak flow (45-100%) were observed, no matter which separation method or combination of tracers was used. A general conceptual model for the sequence of contributions from the three end-members could be developed for 9 of the 10 storms. Overall results indicated that hydrochemical and hydrometric measurements need to be coupled in order to perform meaningful hydrograph separations.
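
    At peak flow, a three-component separation with two tracers reduces to a 3x3 linear system: a water mass-balance row plus one mixing equation per tracer. A minimal sketch with hypothetical end-member concentrations; fractions falling outside [0, 1] correspond exactly to the physically meaningless solutions the authors describe.

      import numpy as np

      def three_component_separation(c1, c2, c1_stream, c2_stream):
          # Columns: end-members (throughfall, soil water, groundwater).
          # Rows: mass balance and the two tracer mixing equations.
          A = np.array([[1.0, 1.0, 1.0], c1, c2])
          b = np.array([1.0, c1_stream, c2_stream])
          return np.linalg.solve(A, b)   # end-member fractions of streamflow

      f = three_component_separation(c1=[5.0, 20.0, 35.0],     # e.g. silica
                                     c2=[-60.0, -50.0, -40.0], # e.g. deuterium
                                     c1_stream=28.0, c2_stream=-47.0)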

  19. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed remains a challenging problem. In this study, a new computer-vision-based order tracking method is proposed to address it. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transformed into an angle-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified on two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that avoids installing a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
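
    The equi-angle resampling step amounts to integrating the IRS curve into a shaft-angle axis and interpolating the signal onto uniform angle increments. A minimal sketch, assuming a monotonically increasing angle and leaving out the vision-based IRS extraction itself:

      import numpy as np

      def equi_angle_resample(signal, t, irs_hz, samples_per_rev=64):
          # Shaft angle versus time from the instantaneous rotating speed,
          # then resampling at uniform angle steps for order-domain analysis.
          angle = 2 * np.pi * np.cumsum(irs_hz * np.gradient(t))
          n_out = int(angle[-1] / (2 * np.pi) * samples_per_rev)
          angle_grid = np.linspace(angle[0], angle[-1], n_out)
          return np.interp(angle_grid, angle, signal)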

  20. The psychological adjustment of children from separated families: The role of selected social support variables.

    PubMed

    Bouchard, C; Drapeau, S

    1991-06-01

    This study investigates the impact of social support on children's psychological adjustment following the divorce of their parents. Seventy-one (71) children from separated families and 120 children from intact families participated in the study. Data were collected twice. Children from separated families listed support networks of lower density, with more sitters and teachers contributing both to emotional support and to negative interactions. Social support variables contribute more to predicting the psychological status of children from separated families than of children from intact families. Insufficient income, dissatisfaction with family life, lower density of the support network, and a higher ratio of negative interactions are predictive of children's behavior problems.

  1. Enantiomeric separation of non-protein amino acids by electrokinetic chromatography.

    PubMed

    Pérez-Míguez, Raquel; Marina, María Luisa; Castro-Puyana, María

    2016-10-07

    New analytical methodologies enabling the enantiomeric separation of a group of non-protein amino acids of interest in the pharmaceutical and food analysis fields were developed in this work using Electrokinetic Chromatography. The use of FMOC as derivatization reagent and subsequent separation under acidic conditions (formate buffer at pH 2.0) with anionic cyclodextrins as chiral selectors allowed the chiral separation of eight of the ten non-protein amino acids studied. Pyroglutamic acid, norvaline, norleucine, 3,4-dihydroxyphenylalanine, 2-aminoadipic acid, and selenomethionine were enantiomerically separated using sulfated-α-CD, while sulfated-γ-CD enabled the enantiomeric separation of norvaline, 3,4-dihydroxyphenylalanine, 2-aminoadipic acid, selenomethionine, citrulline, and pipecolic acid. Moreover, the potential of the developed methodologies was demonstrated in the analysis of citrulline and its enantiomeric impurity in food supplements. For that purpose, experimental and instrumental variables were optimized and the analytical characteristics of the proposed method were evaluated. LODs of 2.1×10^-7 and 1.8×10^-7 M for d- and l-citrulline, respectively, were obtained. d-Cit was not detectable in any of the six food supplement samples analyzed, showing that the effect of storage time on the racemization of citrulline was negligible. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Separation of 20 coumarin derivatives using the capillary electrophoresis method optimized by a series of Doehlert experimental designs.

    PubMed

    Woźniakiewicz, Michał; Gładysz, Marta; Nowak, Paweł M; Kędzior, Justyna; Kościelniak, Paweł

    2017-05-15

    The aim of this study was to develop the first CE-based method enabling separation of 20 structurally similar coumarin derivatives. To facilitate method optimization, a series of three consecutive Doehlert experimental designs with response surface methodology was employed, using the number of peaks and the adjusted time of analysis as the selected responses. Initially, three variables were examined: buffer pH, ionic strength, and temperature (Doehlert design No. 1). The optimal conditions provided only partial separation; on that account, several buffer additives were examined at the next step: organic cosolvents and cyclodextrin (Doehlert design No. 2). The optimal cyclodextrin type was also selected experimentally. The most promising results were obtained for buffers fortified with methanol, acetonitrile, and heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin. Since these additives may affect the acid-base equilibrium and ionization state of analytes, the third Doehlert design (No. 3) was used to reconcile the concentrations of these additives with the optimal pH. Ultimately, total separation of all 20 compounds was achieved using a borate buffer at basic pH 9.5 in the presence of 10 mM cyclodextrin, 9% (v/v) acetonitrile, and 36% (v/v) methanol. The identity of all compounds was confirmed using an in-lab built UV-VIS spectral library. The developed method succeeded in identifying coumarin derivatives in three real samples and demonstrates the considerable resolving power of CE assisted by the addition of cyclodextrins and organic cosolvents. Our optimization approach, based on three Doehlert designs, appears promising for future applications of this technique. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., an analytical model of the objective function is unknown), and they attempt to learn a variable grouping that allows a better decomposition of the problem. In such cases, these methods make no direct use of the formula of the objective function. However, many real-world problems are white-box problems, that is, the formulas of their objective functions are known a priori. These formulas provide rich information which can be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (the four arithmetic operations "+", "-", "*", "/" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: those resulting in nonseparable variables and those resulting in separable variables. Variables can then be automatically grouped into a suitable number of non-interacting subcomponents, with the variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (CCF) is proposed for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them respectively. To further enhance the efficiency of CCF, a new local search scheme is designed to improve solution quality. To verify the efficiency of CCF, experiments were conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive with the state-of-the-art LSGO algorithms.
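
    The white-box idea can be illustrated with a much-simplified separability check: variables that never share an additive term of the formula can be optimized independently, while variables co-occurring in a term are grouped together. The sympy sketch below is a toy version of this idea, not the paper's full FBG classification of operations.

      import sympy as sp

      def formula_groups(expr):
          # Merge variable sets that co-occur in any additive term.
          groups = []
          for term in sp.Add.make_args(sp.expand(expr)):
              syms = set(term.free_symbols)
              for g in [g for g in groups if g & syms]:
                  groups.remove(g)
                  syms |= g
              groups.append(syms)
          return groups

      x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")
      f = x1**2 + sp.sin(x2 * x3) + x3 * x4
      print(formula_groups(f))   # [{x1}, {x2, x3, x4}]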

  4. Response surface methodology for the determination of the design space of enantiomeric separations on cinchona-based zwitterionic chiral stationary phases by high performance liquid chromatography.

    PubMed

    Hanafi, Rasha Sayed; Lämmerhofer, Michael

    2018-01-26

    Quality-by-Design approach for enantioselective HPLC method development surpasses Quality-by-Testing in offering the optimal separation conditions with the least number of experiments and in its ability to describe the method's Design Space visually which helps to determine enantiorecognition to a significant extent. Although some schemes exist for enantiomeric separations on Cinchona-based zwitterionic stationary phases, the exact design space and the weights by which each of the chromatographic parameters influences the separation have not yet been statistically studied. In the current work, a screening design followed by a Response Surface Methodology optimization design were adopted for enantioseparation optimization of 3 model drugs namely the acidic Fmoc leucine, the amphoteric tryptophan and the basic salbutamol. The screening design proved that the acid/base additives are of utmost importance for the 3 chiral drugs, and that among 3 different pairs of acids and bases, acetic acid and diethylamine is the couple able to provide acceptable resolution at variable conditions. Visualization of the response surface of the retention factor, separation factor and resolution helped describe accurately the magnitude by which each chromatographic factor (% MeOH, concentration and ratio of acid base modifiers) affects the separation while interacting with other parameters. The global optima compromising highest enantioresolution with the least run time for the 3 chiral model drugs varied extremely, where it was best to set low % methanol with equal ratio of acid-base modifiers for the acidic drug, very high % methanol and 10-fold higher concentration of the acid for the amphoteric drug while 20 folds of the base modifier with moderate %methanol were needed for the basic drug. Considering the selected drugs as models for many series of structurally related compounds, the design space defined and the optimum conditions computed are the key for method development on cinchona-based chiral stationary phases. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Noninvasive methods in space cardiology.

    PubMed

    Baevsky, R M

    1997-01-01

    The development and application of noninvasive methods in space cardiology is discussed. These methods are used in astronautics both to gain new insights into the impact of weightlessness conditions on the human organism and to help solve problems involved in the medical monitoring of space crew members. The cardiovascular system is a major target for the action of microgravity. Noninvasive methods used to examine the cardiovascular system during space flights over the past 30 years are listed. Special attention is given to methods for studying heart rate variability and contactless recording of physiologic functions during night sleep. Analysis of heart rate variability highlights an important principle of space cardiology: gaining the maximum amount of information while recording as little data as possible. With this method, the degree of strain experienced by the systems of autonomic regulation and the adaptational capabilities of the body can be assessed at various stages of a space flight. Discriminant analysis of heart rate variability data enables the psycho-emotional component of stress to be separated from the component associated with the impact of weightlessness. A major advance in space medicine has been the development of techniques for contactless recording of pulse rates, breathing frequency, myocardial contractility, and motor activity during sleep using a sensor installed on the cosmonaut's sleeping bag. The data obtained can be used to study ultradian rhythms, which reflect the activity of higher autonomic centers. An important role of these centers in mobilizing functional reserves of the body to ensure its relatively stable adaptation to weightless conditions is shown.

  6. Characterization of polymerized liposomes using a combination of dc and cyclical electrical field-flow fractionation.

    PubMed

    Sant, Himanshu J; Chakravarty, Siddharth; Merugu, Srinivas; Ferguson, Colin G; Gale, Bruce K

    2012-10-02

    Characterization of polymerized liposomes (PolyPIPosomes) was carried out using a combination of normal dc electrical field-flow fractionation and cyclical electrical field-flow fractionation (CyElFFF) as an analytical technique. The constant nature of the carrier fluid and channel configuration for this technique eliminates many variables associated with multidimensional analysis. CyElFFF uses an oscillating field to induce separation and is performed in the same channel as standard dc electrical field-flow fractionation separation. Theory and experimental methods to characterize nanoparticles in terms of their sizes and electrophoretic mobilities are discussed in this paper. Polystyrene nanoparticles are used for system calibration and characterization of the separation performance, whereas polymerized liposomes are used to demonstrate the applicability of the system to biomedical samples. This paper is also the first to report separation and a higher effective field when CyElFFF is operated at very low applied voltages. The technique is shown to have the ability to quantify both particle size and electrophoretic mobility distributions for colloidal polystyrene nanoparticles and PolyPIPosomes.

  7. Streamflow variability and classification using false nearest neighbor method

    NASA Astrophysics Data System (ADS)

    Vignesh, R.; Jothiprakash, V.; Sivakumar, B.

    2015-12-01

    Understanding regional streamflow dynamics and patterns continues to be a challenging problem. The present study introduces the false nearest neighbor (FNN) algorithm, a method based on nonlinear dynamics, to examine the spatial variability of streamflow over a region. The FNN method is a dimensionality-based approach, where the dimension of the time series represents its variability. The method uses phase space reconstruction and nearest neighbor concepts, and identifies false neighbors in the reconstructed phase space. It is applied here to monthly streamflow data monitored over a period of 53 years (1950-2002) in an extensive network of 639 stations in the contiguous United States (US). Since the choice of delay time in phase space reconstruction may influence the FNN outcomes, the analysis is carried out for five different delay time values: monthly, seasonal, and annual separation of data, as well as delay times obtained using the autocorrelation function (ACF) and average mutual information (AMI) methods. The FNN dimensions for the 639 streamflow series generally range from 4 to 12 (with very few exceptions), indicating a wide range of variability in streamflow dynamics across the contiguous US. However, the FNN dimensions for a majority of the series are low (less than or equal to 6), suggesting a low level of complexity in streamflow dynamics at most individual stations and over many sub-regions. The FNN dimension estimates also reveal that streamflow dynamics in the western parts of the US (including far west, northwestern, and southwestern parts) generally exhibit much greater variability than in the eastern parts (including far east, northeastern, and southeastern parts), although there are also differences among 'pockets' within these regions. These results are useful for identifying appropriate model complexity at individual stations, patterns across regions and sub-regions, interpolation and extrapolation of data, and catchment classification. An attempt is also made to relate the FNN dimensions to catchment characteristics and streamflow statistical properties.
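
    The FNN test itself is compact: embed the series at dimension d, find each point's nearest neighbor, and count how often adding the (d+1)-th coordinate pushes the pair far apart. Below is a minimal sketch; the threshold is a conventional choice, and the delay handling is simplified relative to the study's five delay-time variants.

      import numpy as np

      def fnn_fraction(x, dim, tau=1, rtol=15.0):
          # Fraction of false nearest neighbors at embedding dimension `dim`.
          x = np.asarray(x, float)
          n = len(x) - dim * tau
          emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
          nxt = x[dim * tau:dim * tau + n]   # the (dim + 1)-th coordinate
          false = 0
          for i in range(n):
              d = np.linalg.norm(emb - emb[i], axis=1)
              d[i] = np.inf
              j = int(np.argmin(d))
              if abs(nxt[i] - nxt[j]) / max(d[j], 1e-12) > rtol:
                  false += 1
          return false / n

      # Increase `dim` until the fraction drops near zero; that dimension
      # is taken as the FNN dimension of the series.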

  8. Predictive Value of Beat-to-Beat QT Variability Index across the Continuum of Left Ventricular Dysfunction: Competing Risks of Non-cardiac or Cardiovascular Death, and Sudden or Non-Sudden Cardiac Death

    PubMed Central

    Tereshchenko, Larisa G.; Cygankiewicz, Iwona; McNitt, Scott; Vazquez, Rafael; Bayes-Genis, Antoni; Han, Lichy; Sur, Sanjoli; Couderc, Jean-Philippe; Berger, Ronald D.; de Luna, Antoni Bayes; Zareba, Wojciech

    2012-01-01

    Background The goal of this study was to determine the predictive value of beat-to-beat QT variability in heart failure (HF) patients across the continuum of left ventricular dysfunction. Methods and Results Beat-to-beat QT variability index (QTVI), heart rate variance (LogHRV), normalized QT variance (QTVN), and coherence between heart rate variability and QT variability have been measured at rest during sinus rhythm in 533 participants of the Muerte Subita en Insuficiencia Cardiaca (MUSIC) HF study (mean age 63.1±11.7; males 70.6%; LVEF >35% in 254 [48%]) and in 181 healthy participants from the Intercity Digital Electrocardiogram Alliance (IDEAL) database. During a median of 3.7 years of follow-up, 116 patients died, 52 from sudden cardiac death (SCD). In multivariate competing risk analyses, the highest QTVI quartile was associated with cardiovascular death [hazard ratio (HR) 1.67(95%CI 1.14-2.47), P=0.009] and in particular with non-sudden cardiac death [HR 2.91(1.69-5.01), P<0.001]. Elevated QTVI separated 97.5% of healthy individuals from subjects at risk for cardiovascular [HR 1.57(1.04-2.35), P=0.031], and non-sudden cardiac death in multivariate competing risk model [HR 2.58(1.13-3.78), P=0.001]. No interaction between QTVI and LVEF was found. QTVI predicted neither non-cardiac death (P=0.546) nor SCD (P=0.945). Decreased heart rate variability (HRV) rather than increased QT variability was the reason for increased QTVI in this study. Conclusions Increased QTVI due to depressed HRV predicts cardiovascular mortality and non-sudden cardiac death, but neither SCD nor extracardiac mortality in HF across the continuum of left ventricular dysfunction. Abnormally augmented QTVI separates 97.5% of healthy individuals from HF patients at risk. PMID:22730411
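
    For orientation, QTVI is conventionally computed as the log ratio of the variance-normalized QT and heart-rate series (Berger's formulation). The abstract does not restate the formula, so the sketch below shows the common definition rather than this paper's exact pipeline.

      import numpy as np

      def qtvi(qt, hr):
          # QTVI = log10[(QT variance / mean QT^2) / (HR variance / mean HR^2)]
          qt, hr = np.asarray(qt, float), np.asarray(hr, float)
          qtvn = qt.var(ddof=1) / qt.mean() ** 2   # normalized QT variance
          hrvn = hr.var(ddof=1) / hr.mean() ** 2   # normalized HR variance
          return np.log10(qtvn / hrvn)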

  9. A user's guide for V174, a program using a finite difference method to analyze transonic flow over oscillating wings

    NASA Technical Reports Server (NTRS)

    Butler, T. D.; Weatherill, W. H.; Sebastian, J. D.; Ehlers, F. E.

    1977-01-01

    The design and usage of a pilot program using a finite difference method for calculating the pressure distributions over harmonically oscillating wings in transonic flow are discussed. The procedure used is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The steady velocity potential which must be obtained from some other program, is required for input. The unsteady differential equation is linear, complex in form with spatially varying coefficients. Because sinusoidal motion is assumed, time is not a variable. The numerical solution is obtained through a finite difference formulation and a line relaxation solution method.

  10. Method for the generation of variable density metal vapors which bypasses the liquidus phase

    DOEpatents

    Kunnmann, Walter; Larese, John Z.

    2001-01-01

    The present invention provides a method for producing a metal vapor that includes the steps of combining a metal and graphite in a vessel to form a mixture; heating the mixture to a first temperature in an argon gas atmosphere to form a metal carbide; maintaining the first temperature for a period of time; heating the metal carbide to a second temperature to form a metal vapor; withdrawing the metal vapor and the argon gas from the vessel; and separating the metal vapor from the argon gas. Metal vapors made using this method can be used to produce uniform powders of the metal oxide that have narrow size distribution and high purity.

  11. Ramsey method for Auger-electron interference induced by an attosecond twin pulse

    NASA Astrophysics Data System (ADS)

    Buth, Christian; Schafer, Kenneth J.

    2015-02-01

    We examine the archetype of an interference experiment for Auger electrons: two electron wave packets are launched by inner-shell ionizing a krypton atom using two attosecond light pulses with a variable time delay. This setting is an attosecond realization of the Ramsey method of separated oscillatory fields. Interference of the two ejected Auger-electron wave packets is predicted, indicating that the coherence between the two pulses is passed to the Auger electrons. For the detection of the interference pattern an accurate coincidence measurement of photo- and Auger electrons is necessary. The method allows one to control inner-shell electron dynamics on an attosecond timescale and represents a sensitive indicator for decoherence.

  12. Geometric morphometric analysis of intratrackway variability: a case study on theropod and ornithopod dinosaur trackways from Münchehagen (Lower Cretaceous, Germany).

    PubMed

    Lallensack, Jens N; van Heteren, Anneke H; Wings, Oliver

    2016-01-01

    A profound understanding of the influence of trackmaker anatomy, foot movements and substrate properties is crucial for any interpretation of fossil tracks. In this case study we analyze variability of footprint shape within one large theropod (T3), one medium-sized theropod (T2) and one ornithopod (I1) trackway from the Lower Cretaceous of Münchehagen (Lower Saxony, Germany) in order to determine the informativeness of individual features and measurements for ichnotaxonomy, trackmaker identification, and the discrimination between left and right footprints. Landmark analysis is employed based on interpretative outline drawings derived from photogrammetric data, allowing for the location of variability within the footprint and the assessment of covariation of separate footprint parts. Objective methods to define the margins of a footprint are tested and shown to be sufficiently accurate to reproduce the most important results. The lateral hypex and the heel are the most variable regions in the two theropod trackways. As indicated by principal component analysis, a posterior shift of the lateral hypex is correlated with an anterior shift of the margin of the heel. This pattern is less pronounced in the ornithopod trackway, indicating that variation patterns can differ in separate trackways. In all trackways, hypices vary independently from each other, suggesting that their relative position is a questionable feature for ichnotaxonomic purposes. Most criteria commonly employed to differentiate between left and right footprints assigned to theropods are found to be reasonably reliable. The described ornithopod footprints are asymmetrical, again allowing for a left-right differentiation. Strikingly, 12 out of 19 measured footprints of the T2 trackway are stepped over the trackway midline, rendering the trackway pattern a misleading left-right criterion for this trackway. Traditional measurements were unable to differentiate between the theropod and the ornithopod trackways. Geometric morphometric analysis reveals potential for improvement of existing discriminant methods.

  13. Geometric morphometric analysis of intratrackway variability: a case study on theropod and ornithopod dinosaur trackways from Münchehagen (Lower Cretaceous, Germany)

    PubMed Central

    van Heteren, Anneke H.; Wings, Oliver

    2016-01-01

    A profound understanding of the influence of trackmaker anatomy, foot movements and substrate properties is crucial for any interpretation of fossil tracks. In this case study we analyze variability of footprint shape within one large theropod (T3), one medium-sized theropod (T2) and one ornithopod (I1) trackway from the Lower Cretaceous of Münchehagen (Lower Saxony, Germany) in order to determine the informativeness of individual features and measurements for ichnotaxonomy, trackmaker identification, and the discrimination between left and right footprints. Landmark analysis is employed based on interpretative outline drawings derived from photogrammetric data, allowing for the location of variability within the footprint and the assessment of covariation of separate footprint parts. Objective methods to define the margins of a footprint are tested and shown to be sufficiently accurate to reproduce the most important results. The lateral hypex and the heel are the most variable regions in the two theropod trackways. As indicated by principal component analysis, a posterior shift of the lateral hypex is correlated with an anterior shift of the margin of the heel. This pattern is less pronounced in the ornithopod trackway, indicating that variation patterns can differ in separate trackways. In all trackways, hypices vary independently from each other, suggesting that their relative position is a questionable feature for ichnotaxonomic purposes. Most criteria commonly employed to differentiate between left and right footprints assigned to theropods are found to be reasonably reliable. The described ornithopod footprints are asymmetrical, again allowing for a left–right differentiation. Strikingly, 12 out of 19 measured footprints of the T2 trackway are stepped over the trackway midline, rendering the trackway pattern a misleading left–right criterion for this trackway. Traditional measurements were unable to differentiate between the theropod and the ornithopod trackways. Geometric morphometric analysis reveals potential for improvement of existing discriminant methods. PMID:27330855

  14. String limit of the isotropic Heisenberg chain in the four-particle sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antipov, A. G., E-mail: aga2@csa.ru; Komarov, I. V., E-mail: ivkoma@rambler.r

    2008-05-15

    The quantum method of variable separation is applied to the spectral problem of the isotropic Heisenberg model. The Baxter difference equation is resolved by means of a special quasiclassical asymptotic expansion. States are identified by multiplicities of limiting values of the Bethe parameters. The string limit of the four-particle sector is investigated. String solutions are singled out and classified. It is shown that only a minor fraction of solutions demonstrate string behavior.

  15. Direct perturbation theory for the dark soliton solution to the nonlinear Schroedinger equation with normal dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Jialu; Yang Chunnuan; Cai Hao

    2007-04-15

    After finding the basic solutions of the linearized nonlinear Schroedinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.
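
    Schematically — a generic illustration of the separation step, not the paper's full derivation — one seeks solutions of the linearized equation in product form,

        \delta q(x,t) = f(x)\, e^{i \lambda t},

    which collapses the linear PDE into an ordinary differential (spectral) problem for f(x) with spectral parameter λ; the resulting basic solutions then provide the expansion basis from which the linear Green's function is assembled.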

  16. Advanced Computational Methods for Study of Electromagnetic Compatibility

    DTIC Science & Technology

    2011-03-31

    following result establishes the super-algebraic convergence of G_k^{per,L} to G_k^{per}: Theorem 2.1 (Bruno, Shipman, Turc, Venakides): If k is not a Wood ... |G_k^{per}(x,x′) − G_k^{per,L}(x,x′)| ≤ C L^{1/2−p}. Figure 7 demonstrates the excellent accuracies arising from use of Theorem 2.1. Separable-variables ... representations of non-adjacent interactions. In order to further accelerate the evaluation of G_k^{per,L}, we derive Taylor series expansions of quantities G_k

  17. Electroosmotic flow in capillary channels filled with nonconstant viscosity electrolytes: exact solution of the Navier-Stokes equation.

    PubMed

    Otevrel, Marek; Klepárník, Karel

    2002-10-01

    The partial differential equation describing the unsteady velocity profile of electroosmotic flow (EOF) in a cylindrical capillary filled with a nonconstant-viscosity electrolyte was derived. An analytical solution, based on the general Navier-Stokes equation, was found for constant-viscosity electrolytes using separation of variables (the Fourier method). For the case of a nonconstant-viscosity electrolyte, the steady-state velocity profile was calculated assuming that the viscosity decreases exponentially in the direction from the wall to the capillary center. Since the respective equations with a nonconstant viscosity term are not solvable in general, the method of continuous binding conditions was used to solve this problem. In this method, an arbitrary viscosity profile can be modeled. The theoretical conclusions show that the relaxation times at which an EOF approaches the steady state are too short to have an impact on a separation process in any real system. A viscous layer at the wall affects EOF significantly if it is thicker than the Debye length of the electric double layer. The presented description of EOF dynamics is applicable to any microfluidic system.
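
    As a generic illustration of the Fourier (separation-of-variables) construction for transient flow in a cylindrical capillary — a textbook sketch under the constant-viscosity assumption, not the authors' exact solution — the velocity can be expanded in decaying Bessel modes:

        u(r,t) = u_s(r) + \sum_{n=1}^{\infty} c_n \, J_0\!\left(\frac{\alpha_n r}{a}\right) \exp\!\left(-\frac{\nu \alpha_n^2 t}{a^2}\right)

    Here a is the capillary radius, ν the kinematic viscosity, J_0 the zeroth-order Bessel function, α_n its n-th zero, u_s the steady-state profile, and the coefficients c_n follow from the initial condition. Each mode decays with relaxation time a²/(ν α_n²), which is why the approach to steady state is fast in narrow capillaries.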

  18. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take the value zero for a large number of observations, while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero; examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part is assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches; it uses the univariate FP-spike procedure separately for the two SAZ variables, as sketched below. In Bi-D3, Bi-D1, and Bi-Sub, the proportions of zeros in both variables are considered simultaneously in the binary indicators; these strategies can therefore account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, are considered. In addition, a possible extension to three or more SAZ variables is outlined. The use of log-linear models to analyze the correlation structure, in combination with the bivariate approaches, is also proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
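
    A rough sketch of the Bi-Sep idea under illustrative assumptions — synthetic data, a simple log transform standing in for the selected FP function, and sklearn's logistic regression; none of this is the authors' implementation:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 500
        # Two spike-at-zero exposures, e.g. smoking and alcohol (synthetic)
        smoking = np.where(rng.random(n) < 0.4, 0.0, rng.gamma(2.0, 10.0, n))
        alcohol = np.where(rng.random(n) < 0.3, 0.0, rng.gamma(2.0, 5.0, n))

        def saz_columns(x):
            # Binary indicator for x > 0 plus an FP-type term (log x) on the positive part
            pos = x > 0
            fp = np.where(pos, np.log(np.where(pos, x, 1.0)), 0.0)
            return np.column_stack([pos.astype(float), fp])

        # Bi-Sep: build each variable's spike/FP columns separately
        X = np.hstack([saz_columns(smoking), saz_columns(alcohol)])
        logit = 0.02 * smoking + 0.05 * alcohol - 1.5
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

        model = LogisticRegression().fit(X, y)
        print(dict(zip(["z_smoke", "fp_smoke", "z_alc", "fp_alc"], model.coef_[0].round(3))))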

  19. A novel approach for quantitation of nonderivatized sialic acid in protein therapeutics using hydrophilic interaction chromatographic separation and nano quantity analyte detection.

    PubMed

    Chemmalil, Letha; Suravajjala, Sreekanth; See, Kate; Jordan, Eric; Furtado, Marsha; Sun, Chong; Hosselet, Stephen

    2015-01-01

    This paper describes a novel approach for the quantitation of nonderivatized sialic acid in glycoproteins, separated by hydrophilic interaction chromatography and detected by a Nano Quantity Analyte Detector (NQAD). NQAD detection is based on measuring the change in the size of a dry aerosol and converting the particle count rate into a chromatographic output signal. The NQAD detector is suitable for the detection of sialic acid, which lacks a sufficiently active chromophore or fluorophore. The water condensation particle counting technology allows the analyte to be enlarged using water vapor to provide the highest sensitivity. Derivatization-free analysis of glycoproteins using the HPLC/NQAD method with a PolyGLYCOPLEX™ amide column correlates well with an HPLC method with precolumn derivatization using 1,2-diamino-4,5-methylenedioxybenzene (DMB), as well as with Dionex-based high-pH anion-exchange chromatography (ion chromatography) with pulsed amperometric detection (HPAEC-PAD). With the elimination of the derivatization step, the HPLC/NQAD method is more efficient than the HPLC/DMB method. The HPLC/NQAD method is also more reproducible than the HPAEC-PAD method, as the latter suffers from high variability because of electrode fouling during analysis. Overall, the HPLC/NQAD method offers a broad linear dynamic range as well as excellent precision, accuracy, repeatability, reliability, and ease of use, with acceptable comparability to the commonly used HPAEC-PAD and HPLC/DMB methods. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  20. Quantifying Effects of Pharmacological Blockers of Cardiac Autonomous Control Using Variability Parameters.

    PubMed

    Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A

    2017-01-01

    Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain and symbolic dynamics measures for each of HRV and BPV were classified through paired Wilcoxon tests for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's method to combine the p-values of multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN) and the root mean square of successive differences (RMSSD) from the time domain analysis. In the frequency domain analysis, the very-low-frequency (VLF) component was selected. From symbolic dynamics, the Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the nonlinear parameter that detects intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
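
    A minimal sketch — illustrative values, not the study's pipeline — of three of the selected time-domain indexes computed from a series of normal beat-to-beat (NN) intervals:

        import numpy as np

        nn = np.array([812, 790, 805, 830, 818, 795, 801, 822], dtype=float)  # NN intervals in ms

        mean_nn = nn.mean()                          # meanNN
        cv_nn = nn.std(ddof=1) / mean_nn             # cvNN: standard deviation over meanNN
        rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))   # RMSSD: root mean square of successive differences

        print(f"meanNN={mean_nn:.1f} ms, cvNN={cv_nn:.4f}, RMSSD={rmssd:.1f} ms")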

  1. Response surface modeling of boron adsorption from aqueous solution by vermiculite using different adsorption agents: Box-Behnken experimental design.

    PubMed

    Demirçivi, Pelin; Saygılı, Gülhayat Nasün

    2017-07-01

    In this study, a different method was applied for boron removal, using vermiculite as the adsorbent. The vermiculite used in the experiments was not modified with adsorption agents in a separate process before boron adsorption. Hexadecyltrimethylammonium bromide (HDTMA) and gallic acid (GA) were used as adsorption agents for vermiculite, maintaining the solid/liquid ratio at 12.5 g/L. The effects of HDTMA/GA concentration, contact time, pH, initial boron concentration, inert electrolyte and temperature on boron adsorption were analyzed. A three-factor, three-level Box-Behnken design combined with the response surface method (RSM) was employed to examine and optimize process variables for boron adsorption from aqueous solution by vermiculite using HDTMA and GA. Solution pH (2-12), temperature (25-60 °C) and initial boron concentration (50-8,000 mg/L) were chosen as independent variables and coded x1, x2 and x3 at three levels (-1, 0 and 1). Analysis of variance was used to test the significance of the variables and their interactions at the 95% confidence level (α = 0.05). From the regression coefficients, a second-order empirical equation was obtained relating the adsorption capacity (qi) to the coded variables tested (xi). Optimum values of the variables were also evaluated for maximum boron adsorption by vermiculite-HDTMA (HDTMA-Verm) and vermiculite-GA (GA-Verm).
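
    A hedged sketch of the design-and-fit step: generating the 15-run coded Box-Behnken design for three factors and fitting the second-order model by least squares. The response values are synthetic and the column layout is illustrative, not the authors' model:

        import numpy as np
        from itertools import combinations

        # Box-Behnken design for 3 factors: (+-1, +-1) on each factor pair with the
        # third factor at 0, plus replicated center points -- 15 coded runs in total.
        runs = []
        for i, j in combinations(range(3), 2):
            for a in (-1.0, 1.0):
                for b in (-1.0, 1.0):
                    p = [0.0, 0.0, 0.0]
                    p[i], p[j] = a, b
                    runs.append(p)
        runs += [[0.0, 0.0, 0.0]] * 3
        Xc = np.array(runs)

        def second_order_design(Xc):
            # Columns: intercept, x_i, x_i^2, x_i*x_j -- the usual second-order RSM model
            cols = [np.ones(len(Xc))]
            cols += [Xc[:, i] for i in range(3)]
            cols += [Xc[:, i] ** 2 for i in range(3)]
            cols += [Xc[:, i] * Xc[:, j] for i, j in combinations(range(3), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(1)
        # Synthetic adsorption capacities with a known underlying surface plus noise
        q = 2.0 + 0.8 * Xc[:, 0] - 0.5 * Xc[:, 2] - 0.6 * Xc[:, 0] ** 2 + rng.normal(0, 0.05, 15)
        beta, *_ = np.linalg.lstsq(second_order_design(Xc), q, rcond=None)
        print(beta.round(3))   # 10 fitted coefficients of the second-order surface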

  2. Unsupervised classification of variable stars

    NASA Astrophysics Data System (ADS)

    Valenzuela, Lucas; Pichara, Karim

    2018-03-01

    During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating insufficient training sets compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific for light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.

  3. NACE-ESI-TOF MS to reveal phenolic compounds from olive oil: introducing enriched olive oil directly inside capillary.

    PubMed

    Gómez-Caravaca, Ana María; Carrasco-Pancorbo, Alegría; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2009-09-01

    Most CE methods for the analysis of phenols from olive oil use an aqueous electrolyte separation medium, although the importance of NACE is obvious, as this kind of CE seems to be more compatible with the hydrophobic olive oil matrix and could facilitate its direct injection. In the current work we develop a method involving SPE and NACE coupled to ESI-TOF MS. All the CE and ESI-TOF MS parameters were optimized in order to maximize the number of phenolic compounds detected and the sensitivity of their determination. Electrophoretic separation was carried out using a CE buffer system consisting of 25 mM NH(4)OAc/AcH in methanol/ACN (1/1 v/v) at an apparent pH value of 5.0. We studied in depth the effect of the nature and concentration of different electrolytes dissolved in different organic solvents, and of other experimental and instrumental CE variables. The results were compared with those obtained by CZE (with aqueous buffers) coupled to ESI-TOF MS; both methods offer the analyst the chance to study phenolic compounds of different families (such as phenolic alcohols, lignans, complex phenols, flavonoids, etc.) from virgin olive oil by injecting methanolic extracts with efficient and fast CE separations. In the case of the NACE method, we also studied the direct injection of the investigated matrix, introducing a plug of olive oil directly into the capillary.

  4. Rapid Detection and Enumeration of Giardia lamblia Cysts in Water Samples by Immunomagnetic Separation and Flow Cytometric Analysis

    PubMed Central

    Keserue, Hans-Anton; Füchslin, Hans Peter; Egli, Thomas

    2011-01-01

    Giardia lamblia is an important waterborne pathogen and is among the most common intestinal parasites of humans worldwide. Its fecal-oral transmission leads to the presence of cysts of this pathogen in the environment, and so far, quantitative rapid screening methods are not available for various matrices, such as surface waters, wastewater, or food. Thus, it is necessary to establish methods that enable reliable rapid detection of a single cyst in 10 to 100 liters of drinking water. Conventional detection relies on cyst concentration, isolation, and confirmation by immunofluorescence microscopy (IFM), resulting in low recoveries and high detection limits. Many different immunomagnetic separation (IMS) procedures have been developed for separation and cyst purification, so far with variable but high losses of cysts. A method was developed that requires less than 100 min and consists of filtration, resuspension, IMS, and flow cytometric (FCM) detection. MACS MicroBeads were used for IMS, and a reliable flow cytometric detection approach was established employing 3 different parameters for discrimination from background signals, i.e., green and red fluorescence (resulting from the distinct pattern emitted by the fluorescein dye) and sideward scatter for size discrimination. With spiked samples, recoveries exceeding 90% were obtained, and false-positive results were never encountered for negative samples. Additionally, the method was applicable to naturally occurring cysts in wastewater and has the potential to be automated. PMID:21685159

  5. Response variability of different anodal transcranial direct current stimulation intensities across multiple sessions.

    PubMed

    Ammann, Claudia; Lindquist, Martin A; Celnik, Pablo A

    It is well known that transcranial direct current stimulation (tDCS) is capable of modulating corticomotor excitability. However, a source of growing concern has been the observed inter- and intra-individual variability of tDCS responses. Recent studies have assessed whether individuals respond in a predictable manner across repeated sessions of anodal tDCS (atDCS). The findings of these investigations have been inconsistent, and their methods have some limitations (i.e., lack of a sham condition or testing of only one tDCS intensity). To study inter- and intra-individual variability of atDCS effects at two different intensities on primary motor cortex (M1) excitability, twelve subjects participated in a crossover study testing 7-min atDCS over M1 in three separate conditions (2 mA, 1 mA, sham), each repeated three times and separated by 48 h. Motor evoked potentials were recorded before and after stimulation (up to 30 min). Time of testing was kept consistent within participants. To estimate the reliability of tDCS effects across sessions, we calculated the Intra-class Correlation Coefficient (ICC). AtDCS at 2 mA, but not 1 mA, significantly increased cortical excitability at the group level in all sessions. The overall ICC revealed fair to high reliability of tDCS effects for multiple sessions. Given that the distribution of responses showed important variability in the sham condition, we established a Sham Variability-Based Threshold to classify responses and to track individual changes across sessions. Using this threshold, an intra-individually consistent response pattern was observed only for the 2 mA condition. 2 mA anodal tDCS results in consistent intra- and inter-individual increases of M1 excitability. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. The cumulative effects of forest disturbance and climate variability on baseflow in a large forested watershed

    NASA Astrophysics Data System (ADS)

    Li, Q.; Wei, A.; Giles-Hansen, K.; Zhang, M.; Liu, W.

    2016-12-01

    Assessing how forest disturbance and climate change affect baseflow or groundwater discharge is critical for understanding water resource supply and protecting aquatic functions. Previous studies have mainly evaluated the effects of forest disturbance on streamflow, with little attention to baseflow, particularly in large watersheds. Studying this topic is challenging, as it requires explicit inclusion of climate in the assessment owing to their interactions in any large watershed. In this study, we used the Upper Similkameen River (USR) watershed (1810 km2), located in the southern interior of British Columbia, Canada, to examine how forest disturbance and climate variability affect baseflow. The conductivity mass balance method was first used for baseflow separation (a schematic of this step is sketched below), and modified double mass curves were then employed to quantitatively separate the relative contributions of forest disturbance and climate variability to annual baseflow. Our results showed that average annual baseflow and the baseflow index (baseflow/streamflow) were about 85.2 ± 21.5 mm year-1 and 0.22 ± 0.05, respectively, for the study period of 1954-2013. Forest disturbance increased annual baseflow by 18.4 mm, while climate variability decreased it by 19.4 mm. In addition, forest disturbance also shifted the baseflow regime, increasing spring baseflow and decreasing summer baseflow. We conclude that forest disturbance significantly altered baseflow magnitudes and patterns, and that its role in annual baseflow was equal to that of climate variability in the study watershed, despite their opposite directions of change. The implications of our results are discussed in the context of future forest disturbance (or land cover changes) and climate change.
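
    A minimal sketch of the two-component conductivity mass balance (CMB) separation, assuming end-member specific conductances for baseflow (SC_bf) and surface runoff (SC_ro); all numbers below are placeholders, not data from the study:

        import numpy as np

        def cmb_baseflow(q, sc, sc_bf, sc_ro):
            # Two-component mass balance: baseflow fraction from specific conductance
            frac = (sc - sc_ro) / (sc_bf - sc_ro)
            return q * np.clip(frac, 0.0, 1.0)   # keep baseflow within [0, Q]

        q = np.array([12.0, 30.0, 55.0, 20.0])       # daily streamflow, m^3/s (placeholder)
        sc = np.array([240.0, 150.0, 110.0, 200.0])  # specific conductance, uS/cm (placeholder)
        bf = cmb_baseflow(q, sc, sc_bf=260.0, sc_ro=80.0)
        print(bf.round(1), "baseflow index:", round(float(bf.sum() / q.sum()), 2))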

  7. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
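
    A hedged sketch of that post-processing idea: if each recorded particle carries its tally score and its sampled source energy, evaluating a new source spectrum only requires re-weighting by the ratio of new to original source densities. The scores and densities below are invented for illustration, not the paper's transport model:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000
        # Original simulation: source energies sampled uniformly on [0, 10] MeV (pdf = 0.1)
        energy = rng.uniform(0.0, 10.0, n)
        tally = np.exp(-0.3 * energy) * rng.random(n)   # stand-in for per-particle tally scores

        def reweighted_mean(tally, energy, new_pdf, old_pdf=0.1):
            # Importance re-weighting: no re-transport of particles needed
            w = new_pdf(energy) / old_pdf
            return float(np.mean(tally * w))

        # Evaluate a hypothetical peaked source spectrum in seconds
        new_pdf = lambda e: np.exp(-(e - 2.0) ** 2 / 0.5) / np.sqrt(0.5 * np.pi)
        print(reweighted_mean(tally, energy, new_pdf))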

  8. Quantitative estimation of global patterns of surface ocean biological productivity and its seasonal variation on timescales from centuries to millennia

    NASA Astrophysics Data System (ADS)

    Loubere, Paul; Fariduddin, Mohammad

    1999-03-01

    We present a quantitative method, based on the relative abundances of benthic foraminifera in deep-sea sediments, for estimating surface ocean biological productivity over the timescale of centuries to millennia. We calibrate the method using a global data set composed of 207 samples from the Atlantic, Pacific, and Indian Oceans from a water depth range between 2300 and 3600 m. The sample set was developed so that other, potentially significant, environmental variables would be uncorrelated to overlying surface ocean productivity. A regression of assemblages against productivity yielded an r2 = 0.89 demonstrating a strong productivity signal in the faunal data. In addition, we examined assemblage response to annual variability in biological productivity (seasonality). Our data set included a range of seasonalities which we quantified into a seasonality index using the pigment color bands from the coastal zone color scanner (CZCS). The response of benthic foraminiferal assemblage composition to our seasonality index was tested with regression analysis. We obtained a statistically highly significant r2 = 0.75. Further, discriminant function analysis revealed a clear separation among sample groups based on surface ocean productivity and our seasonality index. Finally, we tested the response of benthic foraminiferal assemblages to three different modes of seasonality. We observed a distinct separation of our samples into groups representing low seasonal variability, strong seasonality with a single main productivity event in the year, and strong seasonality with multiple productivity events in the year. Reconstructing surface ocean biological productivity with benthic foraminifera will aid in modeling marine biogeochemical cycles. Also, estimating mode and range of annual seasonality will provide insight to changing oceanic processes, allowing the examination of the mechanisms causing changes in the marine biotic system over time. This article contains supplementary material.

  9. Response Classification Images in Vernier Acuity

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, B. L.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    Orientation selective and local sign mechanisms have been proposed as the basis for vernier acuity judgments. Linear image features contributing to discrimination can be determined for a two choice task by adding external noise to the images and then averaging the noises separately for the four types of stimulus/response trials. This method is applied to a vernier acuity task with different spatial separations to compare the predictions of the two theories. Three well-practiced observers were presented around 5000 trials of a vernier stimulus consisting of two dark horizontal lines (5 min by 0.3 min) within additive low-contrast white noise. Two spatial separations were tested, abutting and a 10 min horizontal separation. The task was to determine whether the target lines were aligned or vertically offset. The noises were averaged separately for the four stimulus/response trial types (e.g., stimulus = offset, response = aligned). The sum of the two 'not aligned' images was then subtracted from the sum of the 'aligned' images to obtain an overall image. Spatially smoothed images were quantized according to expected variability in the smoothed images to allow estimation of the statistical significance of image features. The response images from the 10 min separation condition are consistent with the local sign theory, having the appearance of two linear operators measuring vertical position with opposite sign. The images from the abutting stimulus have the same appearance with the two operators closer together. The image predicted by an oriented filter model is similar, but has its greatest weight in the abutting region, while the response images fall to nonsignificance there. The response correlation image method, previously demonstrated for letter discrimination, clarifies the features used in vernier acuity.
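
    A schematic version of the noise-averaging step on synthetic data. The sign convention follows the text (sum of the 'aligned'-response noise averages minus the 'not aligned' ones); the toy observer template and decision rule are assumptions, not the study's stimuli:

        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, h, w = 4000, 32, 32
        noise = rng.normal(0.0, 1.0, (n_trials, h, w))            # external noise on each trial
        stim = rng.integers(0, 2, n_trials)                       # 0 = aligned, 1 = offset
        template = np.zeros((h, w)); template[15:17, 8:24] = 1.0  # toy linear observer template
        drive = (noise * template).sum(axis=(1, 2)) + 2.0 * stim
        resp = (drive + rng.normal(0, 1, n_trials) > 1.0).astype(int)  # 0 = "aligned" response

        def classification_image(noise, stim, resp):
            # Average the noise separately for the four stimulus/response trial types,
            # then combine by response category
            avg = {(s, r): noise[(stim == s) & (resp == r)].mean(axis=0)
                   for s in (0, 1) for r in (0, 1)}
            return (avg[0, 0] + avg[1, 0]) - (avg[0, 1] + avg[1, 1])

        ci = classification_image(noise, stim, resp)
        print(ci.shape)   # pixels near the template rows should deviate most from zero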

  10. A Chebyshev method for state-to-state reactive scattering using reactant-product decoupling: OH + H2 → H2O + H.

    PubMed

    Cvitaš, Marko T; Althorpe, Stuart C

    2013-08-14

    We extend a recently developed wave packet method for computing the state-to-state quantum dynamics of AB + CD → ABC + D reactions [M. T. Cvitaš and S. C. Althorpe, J. Phys. Chem. A 113, 4557 (2009)] to include the Chebyshev propagator. The method uses the further partitioned approach to reactant-product decoupling, which uses artificial decoupling potentials to partition the coordinate space of the reaction into separate reactant, product, and transition-state regions. Separate coordinates and basis sets can then be used that are best adapted to each region. We derive improved Chebyshev partitioning formulas which include Mandelshtam-and-Taylor-type decoupling potentials, and which are essential for the non-unitary discrete variable representations that must be used in 4-atom reactive scattering calculations. Numerical tests on the fully dimensional OH + H2 → H2O + H reaction for J = 0 show that the new version of the method is as efficient as the previously developed split-operator version. The advantages of the Chebyshev propagator (most notably the ease of parallelization for J > 0) can now be fully exploited in state-to-state reactive scattering calculations on 4-atom reactions.
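
    At its core, any Chebyshev propagator rests on a three-term recursion in the Hamiltonian rescaled to spectral range [-1, 1]. A toy dense-matrix sketch — nothing like a production 4-atom scattering code, but the recursion is the same:

        import numpy as np

        rng = np.random.default_rng(4)
        H = rng.normal(size=(50, 50)); H = (H + H.T) / 2   # toy Hermitian Hamiltonian
        emin, emax = np.linalg.eigvalsh(H)[[0, -1]]
        Hn = (2 * H - (emax + emin) * np.eye(50)) / (emax - emin)  # spectrum scaled into [-1, 1]

        psi0 = rng.normal(size=50); psi0 /= np.linalg.norm(psi0)
        phi_prev, phi = psi0, Hn @ psi0                    # T_0(Hn) psi0 and T_1(Hn) psi0
        cheb = [phi_prev, phi]
        for k in range(2, 200):
            phi_prev, phi = phi, 2 * Hn @ phi - phi_prev   # Chebyshev recursion T_k(Hn) psi0
            cheb.append(phi)
        # Propagated states/observables are then assembled as expansions in these vectors
        print(len(cheb), np.linalg.norm(cheb[-1]))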

  11. On the use of transition matrix methods with extended ensembles.

    PubMed

    Escobedo, Fernando A; Abreu, Charlles R A

    2006-03-14

    Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate λ were formulated so that they employ (i) "variable" sampling window schemes (including the "successive umbrella sampling" method) to comprehensively explore the λ domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy η landscape (or "importance" weights) associated with λ. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating η(λ). The validity and performance of the different NBS schemes were then assessed using as the λ coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of η is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader window of sampling in the variable window methods. Finally, it is shown how an "elastic" window of sampling can be used to effectively enact (nonuniform) preferential sampling over the λ domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an η surface over a two-dimensional domain.

  12. Improved HPLC method with the aid of chemometric strategy: determination of loxoprofen in pharmaceutical formulation.

    PubMed

    Venkatesan, P; Janardhanan, V Sree; Muralidharan, C; Valliappan, K

    2012-06-01

    Loxoprofen is a nonsteroidal anti-inflammatory drug that acts by inhibiting the cyclo-oxygenase isoforms 1 and 2. In this study, an improved RP-HPLC method was developed for the quantification of loxoprofen in a pharmaceutical dosage form. For that purpose, an experimental design approach was employed. Independent variables (organic modifier, pH of the mobile phase and flow rate) were extracted from a preliminary study, and three responses were selected as dependent variables (loxoprofen retention factor, resolution between loxoprofen and probenecid, and retention time of probenecid). To improve the method development and optimization step, Derringer's desirability function was applied to simultaneously optimize the three chosen responses. The procedure allowed deduction of the optimal conditions: acetonitrile:water (53:47, v/v), with the pH of the mobile phase adjusted to 2.9 with orthophosphoric acid. The separation was achieved in less than 4 minutes. The method was applied in the quality control of commercial tablets and showed good agreement between the experimental data and predicted values throughout the studied parameter space. The optimized assay condition was validated according to International Conference on Harmonisation guidelines to confirm specificity, linearity, accuracy and precision.
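
    A minimal sketch of Derringer's overall desirability: one-sided linear desirabilities combined as a geometric mean. The response values and limits below are invented for illustration only:

        import numpy as np

        def d_max(y, low, high):
            # Desirability for a response to be maximized: 0 below low, 1 above high
            return float(np.clip((y - low) / (high - low), 0.0, 1.0))

        def d_min(y, low, high):
            # Desirability for a response to be minimized: 1 below low, 0 above high
            return float(np.clip((high - y) / (high - low), 0.0, 1.0))

        d1 = d_max(2.4, low=1.5, high=3.0)   # resolution between the two peaks (maximize)
        d2 = d_min(3.1, low=2.0, high=6.0)   # retention factor (minimize)
        d3 = d_min(3.8, low=3.0, high=8.0)   # retention time in minutes (minimize)
        D = (d1 * d2 * d3) ** (1 / 3)        # overall desirability: geometric mean
        print(round(D, 3))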

  13. Variables affecting resolution of lung phospholipids in one-dimensional thin-layer chromatography.

    PubMed

    Krahn, J

    1987-01-01

    Resolution of the confusion in the literature about the separation of lung phospholipids in thin-layer chromatographic systems has awaited a systematic study of the variables that potentially affect this separation. In this study I show that: incorporation of ammonium sulfate into silica gel "GHL" has a dramatic effect on separation of lung phospholipids; this effect is equally dramatic but different in activated and nonactivated gels; when it picks up moisture, ammonium sulfate-activated gel very rapidly loses its ability to resolve lecithin from phosphatidylinositol; in gel containing ammonium sulfate, small amounts of phosphatidylethanolamine are hydrolyzed to lyso-phosphatidylethanolamine.

  14. Examining the sources of variability in cell culture media used for biopharmaceutical production.

    PubMed

    McGillicuddy, Nicola; Floris, Patrick; Albrecht, Simone; Bones, Jonathan

    2018-01-01

    Raw materials, in particular cell culture media, represent a significant source of variability to biopharmaceutical manufacturing processes that can detrimentally affect cellular growth, viability and specific productivity or alter the quality profile of the expressed therapeutic protein. The continual expansion of the biopharmaceutical industry is creating an increasing demand on the production and supply chain consistency for cell culture media, especially as companies embrace intensive continuous processing. Here, we provide a historical perspective regarding the transition from serum containing to serum-free media, the development of chemically-defined cell culture media for biopharmaceutical production using industrial scale bioprocesses and review production mechanisms for liquid and powder culture media. An overview and critique of analytical approaches used for the characterisation of cell culture media and the identification of root causes of variability are also provided, including in-depth liquid phase separations, mass spectrometry and spectroscopic methods.

  15. Development of toughened epoxy polymers for high performance composite and ablative applications

    NASA Technical Reports Server (NTRS)

    Allen, V. R.

    1982-01-01

    A survey of current procedures for the assessment of state of cure in epoxy polymers and for the evaluation of polymer toughness as related to nature of the crosslinking agent was made to facilitate a cause-effect study of the chemical modification of epoxy polymers. Various conformations of sample morphology were examined to identify testing variables and to establish optimum conditions for the selected physical test methods. Dynamic viscoelasticity testing was examined in conjunction with chemical analyses to allow observation of the extent of the curing reaction with size of the crosslinking agent the primary variable. Specifically the aims of the project were twofold: (1) to consider the experimental variables associated with development of "extent of cure" analysis, and (2) to assess methodology of fracture energy determination and to prescribe a meaningful and reproducible procedure. The following is separated into two categories for ease of presentation.

  16. 78 FR 62716 - Pacific Life Insurance Company, et al; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. IC-30744; File No. 812-14141] Pacific Life... Act''). Applicants: Pacific Life Insurance Company (``Pacific Life''), Pacific Life's Separate Account A (``Separate Account A''), Pacific Life's Pacific Select Variable Annuity Separate Account...

  17. The Effect of Visual Variability on the Learning of Academic Concepts.

    PubMed

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  18. Increased Intra-Participant Variability in Children with Autistic Spectrum Disorders: Evidence from Single-Trial Analysis of Evoked EEG

    PubMed Central

    Milne, Elizabeth

    2011-01-01

    Intra-participant variability in clinical conditions such as autistic spectrum disorder (ASD) is an important indicator of pathophysiological processing. The data reported here illustrate that trial-by-trial variability can be reliably measured from EEG, and that intra-participant EEG variability is significantly greater in those with ASD than in neuro-typical matched controls. EEG recorded at the scalp is a linear mixture of activity arising from muscle artifacts and numerous concurrent brain processes. To minimize these additional sources of variability, EEG data were subjected to two different methods of spatial filtering. (i) The data were decomposed using infomax independent component analysis, a method of blind source separation which un-mixes the EEG signal into components with maximally independent time-courses, and (ii) a surface Laplacian transform was performed (current source density interpolation) in order to reduce the effects of volume conduction. Data are presented from 13 high functioning adolescents with ASD without co-morbid ADHD, and 12 neuro-typical age-, IQ-, and gender-matched controls. Comparison of variability between the ASD and neuro-typical groups indicated that intra-participant variability of P1 latency and P1 amplitude was greater in the participants with ASD, and inter-trial α-band phase coherence was lower in the participants with ASD. These data support the suggestion that individuals with ASD are less able to synchronize the activity of stimulus-related cell assemblies than neuro-typical individuals, and provide empirical evidence in support of theories of increased neural noise in ASD. PMID:21716921

  19. Supervised normalization of microarrays

    PubMed Central

    Mecham, Brigham H.; Nelson, Peter S.; Storey, John D.

    2010-01-01

    Motivation: A major challenge in utilizing microarray technologies to measure nucleic acid abundances is ‘normalization’, the goal of which is to separate biologically meaningful signal from other confounding sources of signal, often due to unavoidable technical factors. It is intuitively clear that true biological signal and confounding factors need to be simultaneously considered when performing normalization. However, the most popular normalization approaches do not utilize what is known about the study, both in terms of the biological variables of interest and the known technical factors in the study, such as batch or array processing date. Results: We show here that failing to include all study-specific biological and technical variables when performing normalization leads to biased downstream analyses. We propose a general normalization framework that fits a study-specific model employing every known variable that is relevant to the expression study. The proposed method is generally applicable to the full range of existing probe designs, as well as to both single-channel and dual-channel arrays. We show through real and simulated examples that the method has favorable operating characteristics in comparison to some of the most highly used normalization methods. Availability: An R package called snm implementing the methodology will be made available from Bioconductor (http://bioconductor.org). Contact: jstorey@princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20363728

  20. Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities.

    PubMed

    Dai, Meiling; Yang, Fujun; He, Xiaoyuan

    2012-04-20

    A simple but effective fringe projection profilometry is proposed to measure 3D shape by using one snapshot color sinusoidal fringe pattern. One color fringe pattern encoded with a sinusoidal fringe (as red component) and one uniform intensity pattern (as blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components and division operation is applied to red and blue channels to reduce the variable reflection intensity. Shape information of the tested object is decoded by applying an arcsine algorithm on the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps, or spatially isolated surfaces, the separated blue component is binarized and used for correcting the phase demodulation. A simple and robust method is also introduced to compensate for nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
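
    A rough sketch of the decoding chain described above — channel split, division of red by blue to suppress variable reflectivity, arcsine phase recovery — on a synthetic pattern; real measurements add calibration, discontinuity correction, and unwrapping:

        import numpy as np

        h, w = 240, 320
        x = np.linspace(0, 8 * np.pi, w)
        reflect = 0.4 + 0.5 * np.random.default_rng(5).random((h, w))  # variable reflectivity
        red = reflect * (0.5 + 0.5 * np.sin(x)[None, :])               # sinusoidal fringe channel
        blue = reflect * 1.0                                           # uniform intensity channel

        normalized = np.clip(red / np.maximum(blue, 1e-6), 0.0, 1.0)   # division removes reflectivity
        phase = np.arcsin(2.0 * normalized - 1.0)                      # arcsine algorithm, principal value
        print(round(float(phase.min()), 3), round(float(phase.max()), 3))  # within [-pi/2, pi/2]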

  1. Wind Forced Variability in Eddy Formation, Eddy Shedding, and the Separation of the East Australian Current

    NASA Astrophysics Data System (ADS)

    Bull, Christopher Y. S.; Kiss, Andrew E.; Jourdain, Nicolas C.; England, Matthew H.; van Sebille, Erik

    2017-12-01

    The East Australian Current (EAC), like many other subtropical western boundary currents, is believed to be penetrating further poleward in recent decades. Previous observational and model studies have used steady state dynamics to relate changes in the westerly winds to changes in the separation behavior of the EAC. As yet, little work has been undertaken on the impact of forcing variability on the EAC and Tasman Sea circulation. Here using an eddy-permitting regional ocean model, we present a suite of simulations forced by the same time-mean fields, but with different atmospheric and remote ocean variability. These eddy-permitting results demonstrate the nonlinear response of the EAC to variable, nonstationary inhomogeneous forcing. These simulations show an EAC with high intrinsic variability and stochastic eddy shedding. We show that wind stress variability on time scales shorter than 56 days leads to increases in eddy shedding rates and southward eddy propagation, producing an increased transport and southward reach of the mean EAC extension. We adopt an energetics framework that shows the EAC extension changes to be coincident with an increase in offshore, upstream eddy variance (via increased barotropic instability) and increase in subsurface mean kinetic energy along the length of the EAC. The response of EAC separation to regional variable wind stress has important implications for both past and future climate change studies.

  2. Quantitative and qualitative measure of intralaboratory two-dimensional protein gel reproducibility and the effects of sample preparation, sample load, and image analysis.

    PubMed

    Choe, Leila H; Lee, Kelvin H

    2003-10-01

    We investigate one approach to assess the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when using appropriate image analysis variables. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from a CV = 0.03 for fully manual analysis to CV = 0.20 for a fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.

  3. Sonification of Kepler Field SU UMa Cataclysmic Variable Stars V344 Lyr and V1504 Cyg

    NASA Technical Reports Server (NTRS)

    Tutchton, Roxanne M.; Wood, Matt A.; Still, Martin D.; Howell, Steve B.; Cannizzo, John K.; Smale, Alan P.

    2012-01-01

    Sonification is the conversion of quantitative data into sound. In this work we explain the methods used in the sonification of light curves provided by the Kepler instrument from Q2 through Q6 for the cataclysmic variable systems V344 Lyr and V1504 Cyg. Both systems are SU UMa stars showing dwarf nova outbursts and superoutbursts as well as positive and negative superhumps. Focused sonifications were done from average pulse shapes of each superhump, and separate sonifications of the full, residual light curves were done for both stars. The audio of these data reflected distinct patterns within the evolutions of supercycles and superhumps that matched previous observations and proved to be an effective aid in data analysis.

  4. Stochastic modeling of neurobiological time series: Power, coherence, Granger causality, and separation of evoked responses from ongoing activity

    NASA Astrophysics Data System (ADS)

    Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou

    2006-06-01

    In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.

  5. Presentation and Treatment of Poland Anomaly.

    PubMed

    Buckwalter V, Joseph A; Shah, Apurva S

    2016-12-01

    Background: Poland anomaly is a sporadic, phenotypically variable congenital condition usually characterized by unilateral pectoral muscle agenesis and ipsilateral hand deformity. Methods: A comprehensive review of the medical literature on Poland anomaly was performed using a Medline search. Results: Poland anomaly is a sporadic, phenotypically variable congenital condition usually characterized by unilateral, simple syndactyly with ipsilateral limb hypoplasia and pectoralis muscle agenesis. Operative management of syndactyly in Poland anomaly is determined by the severity of hand involvement and the resulting anatomical dysfunction. Syndactyly reconstruction is recommended in all but the mildest cases because most patients with Poland anomaly have notable brachydactyly, and digital separation can improve functional length. Conclusions: Improved understanding of the etiology and presentation of Poland anomaly can improve clinician recognition and management of this rare congenital condition.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinis, Panagiotis

    We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability, as expected.

  7. Integrating Mediators and Moderators in Research Design

    ERIC Educational Resources Information Center

    MacKinnon, David P.

    2011-01-01

    The purpose of this article is to describe mediating variables and moderating variables and provide reasons for integrating them in outcome studies. Separate sections describe examples of moderating and mediating variables and the simplest statistical model for investigating each variable. The strengths and limitations of incorporating mediating…

  8. An Open Source modular platform for hydrological model implementation

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction:
    • Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; uncertainty analysis directed towards input or parameter uncertainty. Users need not know the model's composition of subroutines, the internal variables in the model, or the creation of method modules.
    • Model analysis: linking together different process methods, including parallel setup of alternative methods for solving the same task; investigating the effect of different spatial discretization schemes. Users need not write or compile computer code, or handle file IO for each module.
    • Routine implementation and testing: implementation of new process-simulating methods/equations, specialised objective functions or quality control routines, and testing of these in an existing framework. Developers need not implement a user or model interface for the new routine, IO handling, administration of model setup and run, or calibration and validation routines.
    From being developed for Norway's largest hydropower producer, Statkraft, ENKI is now being turned into an Open Source project. At the time of writing, the licence and the project administration are not established. Also, it remains to port the application to other compilers and computer platforms. However, we hope that ENKI will prove useful for both academic and operational users.

  9. Improved estimation of PM2.5 using Lagrangian satellite-measured aerosol optical depth

    NASA Astrophysics Data System (ADS)

    Olivas Saunders, Rolando

    Suspended particulate matter (aerosols) with aerodynamic diameters less than 2.5 μm (PM2.5) has negative effects on human health, plays an important role in climate change and also causes the corrosion of structures by acid deposition. Accurate estimates of PM2.5 concentrations are thus relevant in air quality, epidemiology, cloud microphysics and climate forcing studies. Aerosol optical depth (AOD) retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument has been used as an empirical predictor to estimate ground-level concentrations of PM2.5. These estimates usually have large uncertainties and errors. The main objective of this work is to assess the value of using upwind (Lagrangian) MODIS-AOD as predictors in empirical models of PM2.5. The upwind locations of the Lagrangian AOD were estimated using modeled backward air trajectories. Since the specification of an arrival elevation is somewhat arbitrary, trajectories were calculated to arrive at four different elevations at ten measurement sites within the continental United States. A systematic examination revealed trajectory model calculations to be sensitive to starting elevation. With a 500 m difference in starting elevation, the 48-hr mean horizontal separation of trajectory endpoints was 326 km. When the difference in starting elevation was doubled and tripled to 1000 m and 1500 m, the mean horizontal separation of trajectory endpoints approximately doubled and tripled to 627 km and 886 km, respectively. A seasonal dependence of this sensitivity was also found: the smallest mean horizontal separation of trajectory endpoints was exhibited during the summer and the largest separations during the winter. A daily average AOD product was generated and coupled to the trajectory model in order to determine AOD values upwind of the measurement sites during the period 2003-2007. Empirical models that included in situ AOD and upwind AOD as predictors of PM2.5 were generated by multivariate linear regressions using the least squares method. The multivariate models showed improved performance over the single variable regression (PM2.5 and in situ AOD) models. The statistical significance of the improvement of the multivariate models over the single variable regression models was tested using the extra sum of squares principle. In many cases, even when the R-squared was high for the multivariate models, the improvement over the single models was not statistically significant. The R-squared of these multivariate models varied with respect to seasons, with the best performance occurring during the summer months. A set of seasonal categorical variables was included in the regressions to exploit this variability. The multivariate regression models that included these categorical seasonal variables performed better than the models that didn't account for seasonal variability. Furthermore, 71% of these regressions exhibited improvement over the single variable models that was statistically significant at a 95% confidence level.
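
    A compact sketch of the extra-sum-of-squares comparison used to test whether upwind AOD and seasonal dummy variables significantly improve on the single-variable model. The data are synthetic and the variable names are illustrative; scipy's F distribution supplies the p-value:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        n = 300
        aod_local = rng.gamma(2.0, 0.1, n)
        aod_upwind = 0.6 * aod_local + rng.gamma(2.0, 0.05, n)
        season = rng.integers(0, 4, n)
        pm25 = 30 * aod_local + 12 * aod_upwind + 2.0 * (season == 2) + rng.normal(0, 2, n)

        def rss(X, y):
            # Residual sum of squares of an ordinary least squares fit
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            return float(r @ r)

        ones = np.ones(n)
        X_red = np.column_stack([ones, aod_local])   # single-variable model
        dummies = np.column_stack([(season == k).astype(float) for k in (1, 2, 3)])
        X_full = np.column_stack([ones, aod_local, aod_upwind, dummies])  # multivariate model

        q = X_full.shape[1] - X_red.shape[1]         # number of extra parameters
        F = ((rss(X_red, pm25) - rss(X_full, pm25)) / q) / (rss(X_full, pm25) / (n - X_full.shape[1]))
        p = stats.f.sf(F, q, n - X_full.shape[1])
        print(f"F={F:.2f}, p={p:.3g}")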

  10. Separation of sodium chloride from the evaporated residue of the reverse osmosis reject generated in the leather industry--optimization by response surface methodology.

    PubMed

    Boopathy, R; Sekaran, G

    2014-08-01

    Reverse osmosis (RO) concentrate is evaporated by solar/thermal evaporators to meet zero liquid discharge standards. The resulting evaporated residue (ER) is contaminated with a mixture of organic and inorganic salts. The generation of ER is exceedingly large in the leather industry, where it is collected and stored under shelter to avoid groundwater contamination by the leachate. In the present investigation, a novel process for the separation of sodium chloride from ER was developed to reduce the environmental impact of RO concentrate discharge. The sodium chloride was selectively separated by a reactive precipitation method using hydrogen chloride gas. The selected process variables were optimized for maximum yield of NaCl from the ER (optimum conditions: pH 8.0; temperature 35 degrees C; ER concentration 600 g/L; HCl purging time 3 min). The purity of the recovered NaCl was verified using a cyclic voltammogram.

  11. A multi-segment foot model based on anatomically registered technical coordinate systems: method repeatability in pediatric feet.

    PubMed

    Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B

    2012-04-01

    Several multi-segment foot models to measure the motion of intrinsic joints of the foot have been reported. Use of these models in clinical decision making is limited due to a lack of rigorous validation, including inter-clinician and inter-laboratory variability measures. A model with thoroughly quantified variability may significantly improve confidence in the results of such foot models. This study proposes a new clinical foot model with the underlying strategy of using separate anatomic and technical marker configurations and coordinate systems. Anatomical landmark and coordinate system identification is performed during a static subject calibration. Technical markers are located at optimal sites for dynamic motion tracking. The model comprises the tibia and three foot segments (hindfoot, forefoot and hallux), and inter-segmental joint angles are computed in three planes. Data collection was carried out on pediatric subjects at two sites (Site 1: ten subjects assessed by two clinicians; Site 2: five subjects assessed by one clinician). A plaster mold method was used to quantify static intra-clinician and inter-clinician marker placement variability by allowing direct comparison of marker data between sessions for each subject. Intra-clinician and inter-clinician joint angle variability were less than 4°. For dynamic walking kinematics, intra-clinician, inter-clinician and inter-laboratory variability were less than 6° for the ankle and forefoot, but slightly higher for the hallux. Inter-trial variability accounted for 2-4° of the total dynamic variability. Results indicate the proposed foot model reduces the effects of marker placement variability on computed foot kinematics during walking compared to similar measures in previous models.

  12. Direct perturbation theory for the dark soliton solution to the nonlinear Schrödinger equation with normal dispersion.

    PubMed

    Yu, Jia-Lu; Yang, Chun-Nuan; Cai, Hao; Huang, Nian-Ning

    2007-04-01

    After finding the basic solutions of the linearized nonlinear Schrödinger equation by the method of separation of variables, a perturbation theory for the dark soliton solution is constructed by means of linear Green's function theory. In application to self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained, and the remaining correction term is given as a pure integral over the continuous spectral parameter.

  13. A method for monitoring nuclear absorption coefficients of aviation fuels

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Shen, Chih-Ping

    1989-01-01

    A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
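
    The computation such a program performs can be sketched from the Beer-Lambert law for a collimated beam, I = I0 exp(-mu x): the mass attenuation coefficient is ln(I0/I)/(x rho). The counts, counting period, path length, and fuel density below are hypothetical, not values from the report.

      import math

      def mass_attenuation(counts_empty, counts_fuel, t_counting_s,
                           path_length_cm, density_g_cm3):
          """Mass attenuation coefficient (cm^2/g) from counts through an
          empty and a fuel-filled cell of known path length and density."""
          rate_0 = counts_empty / t_counting_s   # unattenuated rate I0
          rate_f = counts_fuel / t_counting_s    # attenuated rate I
          mu = math.log(rate_0 / rate_f) / path_length_cm  # linear coeff., 1/cm
          return mu / density_g_cm3

      print(mass_attenuation(1.2e6, 8.1e5, 600, 10.0, 0.80))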

  14. Validation of zero-order feedback strategies for medium range air-to-air interception in a horizontal plane

    NASA Technical Reports Server (NTRS)

    Shinar, J.

    1982-01-01

    A zero-order feedback solution of a variable-speed interception game between two aircraft in the horizontal plane, obtained using the method of forced singular perturbation (FSP), is compared with the exact open-loop solution. The comparison indicates that for initial separation distances larger than eight turning radii of the evader, the accuracy of the feedback approximation is better than one percent. This result validates the zero-order FSP approximation for medium-range air combat analysis.

  15. Transfer function modeling of damping mechanisms in viscoelastic plates

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1991-01-01

    This work formulates a method for modeling material damping characteristics in plates. The Sophie Germain equation of classical plate theory is modified to incorporate hysteresis effects, represented by complex stiffness, using the transfer function approach proposed by Golla and Hughes (1985); the procedure is not, however, limited to this representation. The governing characteristic equation is decoupled through separation of variables, yielding a solution similar to that of undamped classical plate theory and allowing solution of the steady-state as well as the transient response problem.
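
    For reference, the separation step can be sketched on the undamped Sophie Germain equation (the hysteretic stiffness operator of the actual model is omitted here). With the ansatz w(x,y,t) = W(x,y)T(t):

      \[
      D\,\nabla^4 w + \rho h\,\frac{\partial^2 w}{\partial t^2} = 0,
      \qquad w(x,y,t) = W(x,y)\,T(t),
      \]
      \[
      \frac{D\,\nabla^4 W}{\rho h\,W} = -\frac{\ddot{T}}{T} = \omega^2
      \quad\Longrightarrow\quad
      \nabla^4 W - \frac{\rho h\,\omega^2}{D}\,W = 0,
      \qquad \ddot{T} + \omega^2 T = 0.
      \]

    Replacing the real stiffness D by a frequency-dependent complex stiffness, as in the Golla-Hughes transfer function representation, leaves this decoupling intact, which is the point of the abstract.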

  16. Snow mapping and land use studies in Switzerland

    NASA Technical Reports Server (NTRS)

    Haefner, H. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A system was developed for operational snow and land use mapping, based on a supervised classification method using various classification algorithms and representation of the results in maplike form on color film with a Photomation system. Land use mapping under European conditions was achieved with a stepwise linear discriminant analysis using additional ratio variables. On fall images, signatures of built-up areas were often not separable from wetlands. Two different methods were tested to correlate the size of settlements with population; for the densely populated Swiss Plateau, the accuracy was between +2% and -12%.

  17. Estimating subcatchment runoff coefficients using weather radar and a downstream runoff sensor.

    PubMed

    Ahm, Malte; Thorndahl, Søren; Rasmussen, Michael R; Bassø, Lene

    2013-01-01

    This paper presents a method for estimating runoff coefficients of urban drainage subcatchments based on a combination of high resolution weather radar data and flow measurements from a downstream runoff sensor. By utilising the spatial variability of the precipitation it is possible to estimate the runoff coefficients of the separate subcatchments. The method is demonstrated through a case study of an urban drainage catchment (678 ha) located in the city of Aarhus, Denmark. The study has proven that it is possible to use corresponding measurements of the relative rainfall distribution over the catchment and downstream runoff measurements to identify the runoff coefficients at subcatchment level.
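
    The estimation idea can be illustrated in a few lines: with radar rainfall over each subcatchment and a single downstream flow series, the runoff coefficients solve a nonnegative least-squares problem. The data below are synthetic and routing lag is ignored; this is a sketch of the principle, not the paper's actual procedure.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      n_t = 500
      areas_m2 = np.array([2.5e6, 2.28e6, 2.0e6])      # three subcatchments
      rain = rng.gamma(0.4, 2e-7, (n_t, 3))            # radar rain intensity, m/s

      true_phi = np.array([0.45, 0.30, 0.60])          # "unknown" runoff coeffs
      q_obs = (rain * areas_m2) @ true_phi + rng.normal(0, 1e-3, n_t)

      # Each column of X is the flow a subcatchment would deliver if its runoff
      # coefficient were 1; nonnegative least squares recovers the coefficients.
      X = rain * areas_m2
      phi_hat, _ = nnls(X, q_obs)
      print(phi_hat)                                   # close to true_phi

    The spatial variability of rainfall is what makes the columns of X distinguishable; uniform rain over all subcatchments would leave the individual coefficients unidentifiable.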

  18. Retrieval of seasonal dynamics of forest understory reflectance from semi-arid to boreal forests using MODIS BRDF data

    NASA Astrophysics Data System (ADS)

    Pisek, Jan; Chen, Jing; Kobayashi, Hideki; Rautiainen, Miina; Schaepman, Michael; Karnieli, Arnon; Sprintsin, Michael; Ryu, Youngryel; Nikopensius, Maris; Raabe, Kairi

    2016-04-01

    Ground vegetation (understory) provides an essential contribution to the whole-stand reflectance signal in many boreal, sub-boreal, and temperate forests. Accurate knowledge of forest understory reflectance is urgently needed in various forest reflectance modelling efforts. However, systematic collections of understory reflectance data covering different sites and ecosystems are largely missing. Measurement of understory reflectance is a real challenge because of the extremely high variability of irradiance at the forest floor, the weak signal in some parts of the spectrum, the difficulty of spectrally separating overstory from understory, and the variable nature of the understory itself. The understory can consist of several sub-layers (regenerating trees, shrubs, grasses or dwarf shrubs, mosses, lichens, litter, bare soil) and has spatially and temporally variable species composition and ground coverage. Additional challenges are introduced by the patchiness of ground vegetation, ground surface roughness, and understory-overstory relations. Due to this variability, remote sensing might be the only means to provide consistent data at spatially relevant scales. In this presentation, we report on retrieving seasonal courses of understory Normalized Difference Vegetation Index (NDVI) from multi-angular MODIS BRDF/Albedo data. We compared satellite-based seasonal courses of understory NDVI against an extended collection of forest sites with available in-situ understory reflectance measurements. These sites are distributed along a wide latitudinal gradient in the Northern hemisphere: sparse and dense black spruce forests in Alaska and Canada, a northern European boreal forest in Finland, hemiboreal needleleaf and deciduous stands in Estonia, a mixed temperate forest in Switzerland, a cool temperate deciduous broadleaf forest in Korea, and a semi-arid pine plantation in Israel. Our results indicate the retrieval method performs well, particularly over open forests of different types. We also demonstrate the limitations of the method for closed canopies, where the understory signal is much attenuated. The retrieved understory signal can be used, e.g., to improve estimates of leaf area index (LAI) and fAPAR in sparsely vegetated areas, and also to study the phenology of the understory layer. Our results are particularly useful for producing Northern hemisphere maps of the seasonal dynamics of forests, allowing understory variability, a main contributor to uncertainty in spring emergence and fall senescence, to be retrieved separately. The inclusion of understory variability in ecological models will ultimately improve prediction and forecast horizons of vegetation dynamics.

  19. Variable Pitch Propellers

    NASA Technical Reports Server (NTRS)

    1920-01-01

    In this report are described four different types of propellers which appeared at widely separated dates, but which were exhibited together at the last Salon de l'Aeronautique. The four propellers are the Chaviere variable pitch propeller, the variable pitch propeller used on the Clement Bayard dirigible, the variable pitch propeller used on Italian dirigibles, and the Levasseur variable pitch propeller.

  20. Efficient Generation and Use of Power Series for Broad Application.

    NASA Astrophysics Data System (ADS)

    Rudmin, Joseph; Sochacki, James

    2017-01-01

    A brief history and overview of the Parker-Sochacki method of power series generation is presented. This method generates a power series to order n in O(n²) time for any system of differential equations that has a power series solution. The method is simple enough that novices to differential equations can easily learn it and immediately apply it. Maximal absolute error estimates allow one to determine the number of terms needed to reach a desired accuracy. Ratios of coefficients in a solution with global convergence differ significantly from those for a solution with only local convergence, and divergence of the series prevents one from overlooking poles. The method can always be cast in polynomial form, which allows separation of variables in almost all physical systems, facilitating exploration of hidden symmetries, and is implicitly symplectic.
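
    A minimal sketch of power-series generation for a polynomial ODE in the spirit of the Parker-Sochacki method is given below; this simple example is ours, not taken from the talk. For y' = y², y(0) = 1, the Maclaurin coefficients follow from a Cauchy product, and the exact solution 1/(1 - t) provides a check.

      def maclaurin_coeffs_y_prime_eq_y_squared(y0, order):
          """Coefficients c[0..order] of y(t) = sum c[k] t^k for y' = y^2,
          y(0) = y0, via the recurrence (k+1) c[k+1] = sum_i c[i] c[k-i]."""
          c = [float(y0)]
          for k in range(order):
              cauchy = sum(c[i] * c[k - i] for i in range(k + 1))  # O(n) per term
              c.append(cauchy / (k + 1))                            # O(n^2) total
          return c

      coeffs = maclaurin_coeffs_y_prime_eq_y_squared(1.0, 8)
      print(coeffs)                            # all 1.0: y(t) = 1/(1 - t)
      t = 0.1
      print(sum(a * t**k for k, a in enumerate(coeffs)), 1 / (1 - t))

    The O(n) Cauchy product per new coefficient gives the O(n²) total cost mentioned in the abstract.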

  1. Integrated probabilistic risk assessment for nanoparticles: the case of nanosilica in food.

    PubMed

    Jacobs, Rianne; van der Voet, Hilko; Ter Braak, Cajo J F

    Insight into the risks of nanotechnology and the use of nanoparticles is an essential condition for the social acceptance and safe use of nanotechnology. One of the problems faced by the risk assessment of nanoparticles is the lack of data, resulting in uncertainty in the risk assessment. We attempt to quantify some of this uncertainty by expanding a previous deterministic study on nanosilica (5-200 nm) in food into a fully integrated probabilistic risk assessment. We use the integrated probabilistic risk assessment method, in which statistical distributions and bootstrap methods are used to quantify uncertainty and variability in the risk assessment. Given the large amount of uncertainty present, this probabilistic method, which separates variability from uncertainty, contributed to a more understandable risk assessment. We found that quantifying the uncertainties did not increase the perceived risk relative to the outcome of the deterministic study. We pinpointed the particular aspects of the hazard characterization that contributed most to the total uncertainty in the risk assessment, suggesting that further research would benefit most from obtaining more reliable data on those aspects.
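
    A generic two-dimensional Monte Carlo sketch of the variability/uncertainty separation idea follows. The distributions and parameter values are invented and unrelated to the nanosilica assessment; the point is only the nested structure, with uncertainty in the outer loop and interindividual variability in the inner loop.

      import numpy as np

      rng = np.random.default_rng(4)
      n_outer, n_inner = 200, 1000
      frac_exceeding = np.empty(n_outer)
      for j in range(n_outer):
          # Uncertainty: a bootstrap-like draw of the (uncertain) safe dose.
          safe_dose = rng.lognormal(mean=np.log(50.0), sigma=0.4)
          # Variability: exposure across individuals in the population.
          exposure = rng.lognormal(mean=np.log(20.0), sigma=0.8, size=n_inner)
          frac_exceeding[j] = (exposure > safe_dose).mean()

      # Uncertainty band on the fraction of the population at risk.
      print(np.percentile(frac_exceeding, [5, 50, 95]))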

  2. Modularity and the spread of perturbations in complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Kolchinsky, Artemy; Gates, Alexander J.; Rocha, Luis M.

    2015-12-01

    We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.

  3. Chemical spoilage extent traceability of two kinds of processed pork meats using one multispectral system developed by hyperspectral imaging combined with effective variable selection methods.

    PubMed

    Cheng, Weiwei; Sun, Da-Wen; Pu, Hongbin; Wei, Qingyi

    2017-04-15

    The feasibility of hyperspectral imaging (HSI) (400-1000 nm) for tracing the chemical spoilage extent of the raw meat used for two kinds of processed meats was investigated. Calibration models established separately for salted and cooked meats using the full wavebands showed good results, with determination coefficients of prediction (R²P) of 0.887 and 0.832, respectively. To simplify the calibration models, two variable selection methods were used and compared. The results showed that genetic algorithm-partial least squares (GA-PLS), with as many continuous wavebands selected as possible, consistently performed better. The potential of HSI to develop one multispectral system for simultaneously tracing the chemical spoilage extent of the two kinds of processed meats was also studied. A good result, with an R²P of 0.854, was obtained using GA-PLS as the dimension reduction method, which was then used to visualize total volatile basic nitrogen (TVB-N) contents corresponding to each pixel of the image.

  4. Modularity and the spread of perturbations in complex dynamical systems.

    PubMed

    Kolchinsky, Artemy; Gates, Alexander J; Rocha, Luis M

    2015-12-01

    We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.

  5. 6Li in a three-body model with realistic Forces: Separable versus nonseparable approach

    NASA Astrophysics Data System (ADS)

    Hlophe, L.; Lei, Jin; Elster, Ch.; Nogga, A.; Nunes, F. M.

    2017-12-01

    Background: Deuteron induced reactions are widely used to probe nuclear structure and astrophysical information. Those (d,p) reactions may be viewed as three-body reactions and described with Faddeev techniques. Purpose: Faddeev equations in momentum space have a long tradition of utilizing separable interactions in order to arrive at sets of coupled integral equations in one variable. However, it needs to be demonstrated that their solution based on separable interactions agrees exactly with solutions based on nonseparable forces. Methods: Momentum space Faddeev equations are solved with nonseparable and separable forces as coupled integral equations. Results: The ground state of 6Li is calculated via momentum space Faddeev equations using the CD-Bonn neutron-proton force and a Woods-Saxon type neutron(proton)-4He force. For the latter the Pauli-forbidden S-wave bound state is projected out. This result is compared to a calculation in which the interactions in the two-body subsystems are represented by separable interactions derived in the Ernst-Shakin-Thaler (EST) framework. Conclusions: We find that calculations based on the separable representation of the interactions and the original interactions give results that agree to four significant figures for the binding energy, provided that energy and momentum support points of the EST expansion are chosen independently. The momentum distributions computed in both approaches also fully agree with each other.

  6. Identification of Active Galactic Nuclei through HST optical variability in the GOODS South field

    NASA Astrophysics Data System (ADS)

    Pouliasis, Ektoras; Georgantopoulos; Bonanos, A.; HCV Team

    2016-08-01

    This work aims to identify AGN in the GOODS South deep field through optical variability, a method that can readily identify low-luminosity AGN. In particular, we use images in the z-band obtained from the Hubble Space Telescope with the ACS/WFC camera over 5 epochs separated by ~45 days. Aperture photometry was performed using SExtractor to extract the lightcurves. Several variability indices, such as the median absolute deviation, excess variance, and sigma, were applied to automatically identify the variable sources. After removing artifacts, stars and supernovae from the variability-selected sample and keeping only those sources with known photometric or spectroscopic redshifts, the optical variability was compared to variability at other wavelengths (X-rays, mid-IR, radio). This multi-wavelength study provides important constraints on the structure and properties of the AGN and their relation to their hosts. This work is part of the validation of the Hubble Catalog of Variables (HCV) project, launched at the National Observatory of Athens by ESA, which aims to identify all sources (pointlike and extended) showing variability, based on the Hubble Source Catalog (HSC, Whitmore et al. 2015). HSC version 1 was released in February 2015 and includes 80 million sources imaged with the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR cameras.
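
    The indices named above are standard and easy to compute. The sketch below applies them to a hypothetical 5-epoch lightcurve; the thresholds and exact definitions used by the HCV team are not reproduced here, and normalized excess variance is more commonly defined on fluxes than on the magnitudes used for simplicity below.

      import numpy as np

      mag = np.array([21.30, 21.28, 21.55, 21.31, 21.52])  # z-band magnitudes
      err = np.array([0.03, 0.03, 0.04, 0.03, 0.04])       # photometric errors

      mad = np.median(np.abs(mag - np.median(mag)))        # median abs. deviation
      sigma = mag.std(ddof=1)                              # sample std. deviation
      # Normalized excess variance: scatter in excess of measurement noise.
      mean = mag.mean()
      excess_var = ((mag - mean)**2 - err**2).sum() / (len(mag) * mean**2)
      print(mad, sigma, excess_var)

    A source is flagged as variable when one or more indices exceed a threshold calibrated on the non-variable population.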

  7. AGN Variability in the GOODS Fields

    NASA Astrophysics Data System (ADS)

    Sarajedini, Vicki

    2007-07-01

    Variability is a proven method to identify intrinsically faint active nuclei in galaxies found in deep HST surveys. We propose to extend our short-term variability study of the GOODS fields to include the more recent epochs obtained via supernova searches, increasing the overall time baseline from 6 months to 2.5 years. Based on typical AGN lightcurves, we expect to detect 70% more AGN by including these more recent epochs. Variability-detected AGN samples complement current X-ray and mid-IR surveys for AGN by providing unambiguous evidence of nuclear activity. Additionally, a significant number of variable nuclei are not associated with X-ray or mid-IR sources and would thus go undetected. With the increased time baseline, we will be able to construct the structure function (variability amplitude vs. time) for low-luminosity AGN to z ~ 1. The inclusion of the longer time interval will allow for better discrimination among the various models describing the nature of AGN variability. The variability survey will be compared against spectroscopically selected AGN from the Team Keck Redshift Survey of the GOODS-N and the upcoming Flamingos-II NIR survey of the GOODS-S. The high-resolution ACS images will be used to separate the AGN from the host galaxy light and to study the morphology, size and environment of the host galaxy. These studies will address questions concerning the nature of low-luminosity AGN evolution and variability at z ~ 1.

  8. Investigating a hybrid perturbation-Galerkin technique using computer algebra

    NASA Technical Reports Server (NTRS)

    Andersen, Carl M.; Geer, James F.

    1988-01-01

    A two-step hybrid perturbation-Galerkin method is presented for the solution of a variety of differential equation problems that involve a scalar parameter. The resulting (approximate) solution has the form of a sum in which each term consists of the product of two functions: the first a function of the independent field variable(s) x, and the second a function of the parameter lambda. In step one, the functions of x are determined by forming a perturbation expansion in lambda. In step two, the functions of lambda are determined through the use of the classical Bubnov-Galerkin method. The resulting hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Bubnov-Galerkin methods applied separately, while combining some of the good features of each. In particular, the results can be useful well beyond the radius of convergence associated with the perturbation expansion. The hybrid method is applied, with the aid of computer algebra, to a simple two-point boundary value problem where the radius of convergence is finite and to a quantum eigenvalue problem where the radius of convergence is zero. For both problems the hybrid method apparently converges for an infinite range of the parameter lambda. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.

  9. On Gammelgaard's Formula for a Star Product with Separation of Variables

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2013-08-01

    We show that Gammelgaard's formula expressing a star product with separation of variables on a pseudo-Kähler manifold in terms of directed graphs without cycles is equivalent to an inversion formula for an operator on a formal Fock space. We prove this inversion formula directly and thus offer an alternative approach to Gammelgaard's formula which gives more insight into the question why the directed graphs in his formula have no cycles.

  10. Infinitesimal Deformations of a Formal Symplectic Groupoid

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2011-09-01

    Given a formal symplectic groupoid G over a Poisson manifold $(M, \pi_0)$, we define a new object, an infinitesimal deformation of G, which can be thought of as a formal symplectic groupoid over the manifold M equipped with an infinitesimal deformation $\pi_0 + \varepsilon \pi_1$ of the Poisson bivector field $\pi_0$. To any pair of natural star products $(\ast, \tilde\ast)$ having the same formal symplectic groupoid G we relate an infinitesimal deformation of G. We call it the deformation groupoid of the pair $(\ast, \tilde\ast)$. To each star product with separation of variables $\ast$ on a Kähler-Poisson manifold M we relate another star product with separation of variables $\hat\ast$ on M. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables $\ast$. This algorithm is based upon the deformation groupoid of the pair $(\ast, \hat\ast)$.

  11. Methods for assisting recovery of damaged brain and spinal cord using arrays of X-Ray microplanar beams

    DOEpatents

    Dilmanian, F. Avraham; McDonald, III, John W.

    2007-12-04

    A method of assisting recovery of an injury site of brain or spinal cord injury includes providing a therapeutic dose of X-ray radiation to the injury site through an array of parallel microplanar beams. The dose at least temporarily removes regeneration inhibitors from the irradiated regions. Substantially unirradiated cells surviving between the microplanar beams migrate to the in-beam irradiated portion and assist in recovery. The dose may be administered in dose fractions over several sessions, separated in time, using angle-variable intersecting microbeam arrays (AVIMA). Additional doses may be administered by varying the orientation of the microplanar beams. The method may be enhanced by injecting stem cells into the injury site.

  12. Methods for assisting recovery of damaged brain and spinal cord using arrays of X-ray microplanar beams

    DOEpatents

    Dilmanian, F. Avraham; McDonald, III, John W.

    2007-01-02

    A method of assisting recovery of an injury site of brain or spinal cord injury includes providing a therapeutic dose of X-ray radiation to the injury site through an array of parallel microplanar beams. The dose at least temporarily removes regeneration inhibitors from the irradiated regions. Substantially unirradiated cells surviving between the microplanar beams migrate to the in-beam irradiated portion and assist in recovery. The dose may be administered in dose fractions over several sessions, separated in time, using angle-variable intersecting microbeam arrays (AVIMA). Additional doses may be administered by varying the orientation of the microplanar beams. The method may be enhanced by injecting stem cells into the injury site.

  13. Evaluation of chromatin integrity of motile bovine spermatozoa capacitated in vitro.

    PubMed

    Reckova, Z; Machatkova, M; Rybar, R; Horakova, J; Hulinska, P; Machal, L

    2008-08-01

    The efficiency of in vitro embryo production is highly variable amongst individual sires in cattle. To rule out sperm chromatin damage caused by separation or capacitation as the source of this variability, chromatin integrity was evaluated. Seventeen AI bulls with good non-return rates (NRRs) but variable embryo production efficiency were used. For each bull, motile spermatozoa were separated on a Percoll gradient, resuspended in IVF-TALP medium and capacitated with, or incubated without, heparin for 6 h. Samples taken before and after separation and after 3-h and 6-h capacitation or incubation were evaluated by the Sperm Chromatin Structure Assay (SCSA), and the proportion of sperm with intact chromatin structure was calculated. Based on changes in the non-DFI-sperm proportion, the sires were categorized as DNA-unstable (DNA-us), DNA-stable (DNA-s) and DNA-most-stable (DNA-ms) bulls (n=3, n=5 and n=9, respectively). In DNA-us bulls, separation produced a significant increase of the mean non-DFI-sperm proportion (p

  14. Investigation to develop a multistage forest sampling inventory system using ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Langley, P. G.; Vanroessel, J. W. (Principal Investigator); Wert, S. L.

    1975-01-01

    The author has identified the following significant results. The annotation system produced an RMSE of about 200 m ground distance in the MSS data system with the control data used. All the analytical MSS interpretation models tried were highly significant. However, the gains in forest sampling efficiency that can be achieved by using the models vary from zero to over 50 percent, depending on the area to which they are applied and the sampling method used. Among the sampling methods tried, regression sampling yielded substantial and the most consistent gains. The single most significant variable in the interpretation model was the difference between bands 5 and 7. The contrast variable, computed by the Hadamard transform, was significant but did not contribute much to the interpretation model. Forest areas containing very large timber volumes because of large tree sizes were not separable from areas of similar crown cover containing smaller trees using ERTS image interpretation alone. All correlations between space-derived timber volume predictions and estimates obtained from aerial and ground sampling were relatively low but significant and stable. There was a much stronger relationship between variables derived from MSS and U2 data than between U2 and ground data.

  15. A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)

    2002-01-01

    The report describes a new method for optimization of engineering systems such as aerospace vehicles whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables and holding constant a set of the system-level design variables. The subtasks results are stored in form of the Response Surfaces (RS) fitted in the space of the system-level variables to be used as the subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and to reconcile the system internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad workfront in organization of an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of supersonic business jet design.
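
    A toy sketch of the response-surface idea in a BLISS-style decomposition follows: a subtask is optimized at sampled system-level design points, its optima are fit with a quadratic surrogate, and the surrogate alone is searched at the system level. The subtask objective below is a hypothetical placeholder, not a discipline code from the report.

      import numpy as np
      from scipy.optimize import minimize, minimize_scalar

      def subtask_optimum(z):
          """Stand-in for one discipline code: optimize its local variable x
          with the system-level variable z held fixed; return the optimum."""
          return minimize_scalar(lambda x: (x - z)**2 + 0.1*x**2 + z**2).fun

      # Sample system-level design points, run the subtask at each, and fit
      # a quadratic response surface to the subtask optima.
      z_pts = np.linspace(-2.0, 2.0, 9)
      f_pts = np.array([subtask_optimum(z) for z in z_pts])
      coef = np.polyfit(z_pts, f_pts, 2)

      # System-level optimization now queries only the cheap surrogate.
      res = minimize(lambda z: float(np.polyval(coef, z[0])), x0=[0.5])
      print(res.x[0])   # near 0 for this toy subtask

    Because each subtask run is independent, the sampling loop is trivially parallel, which is the concurrency the report exploits.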

  16. Determinants of 24-hour energy expenditure in man. Methods and results using a respiratory chamber.

    PubMed Central

    Ravussin, E; Lillioja, S; Anderson, T E; Christin, L; Bogardus, C

    1986-01-01

    Daily human energy requirements calculated from separate components of energy expenditure are inaccurate and usually in poor agreement with measured energy intakes. Measurement of energy expenditure over periods of 24 h or longer is needed to determine more accurately rates of daily energy expenditure in humans. We provide a detailed description of a human respiratory chamber and methods used to determine rates of energy expenditure over 24-h periods in 177 subjects. The results show that: fat-free mass (FFM) as estimated by densitometry is the best available determinant of 24-h energy expenditures (24EE) and explains 81% of the variance observed between individuals (24EE [kcal/d] = 597 + 26.5 FFM); 24EE in an individual is very reproducible (coefficient of variation = 2.4%); and even when adjusted for differences in FFM, there is still considerable interperson variability of the daily energy expenditure. A large portion of the variability of 24EE among individuals, independent of differences in body size, was due to variability in the degree of spontaneous physical activity, i.e., "fidgeting," which accounted for 100-800 kcal/d in these subjects. PMID:3782471

  17. Oak ridge national laboratory automated clean chemistry for bulk analysis of environmental swipe samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostick, Debra A.; Hexel, Cole R.; Ticknor, Brian W.

    2016-11-01

    To shorten the lengthy and costly manual chemical purification procedures, sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment. This addresses a serious need in the nuclear safeguards community to debottleneck the separation of U and Pu in environmental samples—currently performed by overburdened chemists—with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on current COTS equipment that was modified for U/Pu separations utilizing Eichrom™ TEVA and UTEVA resins. Initial verification of individual columns yielded small elution volumes with consistent elution profiles and good recovery. Combined column calibration demonstrated ample separation without cross-contamination of the eluent. Automated packing and unpacking of the built-in columns initially showed >15% deviation in resin loading by weight, which can lead to inconsistent separations. Optimization of the packing and unpacking methods reduced the daily variability of the packed resin to less than 5%. The reproducibility of the automated system was tested with samples containing 30 ng U and 15 pg Pu, which were separated in series with alternating reagent blanks. These experiments showed very good washout of both the resin and the sample from the columns, as evidenced by low blank values. Analysis of the major and minor isotope ratios for U and Pu provided values well within data quality limits for the International Atomic Energy Agency. Additionally, system process blanks spiked with 233U and 244Pu tracers were separated using the automated system after it was moved outside of a clean room and yielded levels equivalent to clean room blanks, confirming that the system can produce high quality results without the need for expensive clean room infrastructure. Comparison of the personnel time necessary for successful manual versus automated chemical separations showed a significant decrease in hands-on time, from 9.8 hours to 35 minutes for seven samples. These documented time and labor savings translate to significant cost savings per sample. Overall, the system will enable faster sample reporting times at reduced costs by limiting the personnel hours dedicated to chemical separation.

  18. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or normalized to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups, so such a normalization may attenuate the separability of the data in high-dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves classifier performance in many cases.
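
    The core idea is simple to state in code: scale each feature by statistics estimated from controls only, rather than from the whole sample. The sketch below is a minimal version with hypothetical data; it is not the authors' exact pipeline.

      import numpy as np

      def control_based_normalize(X, is_control):
          """Center and scale the columns of X using control-group statistics."""
          ctrl = X[is_control]
          mu = ctrl.mean(axis=0)
          sd = ctrl.std(axis=0, ddof=1)
          sd[sd == 0] = 1.0                  # guard against constant features
          return (X - mu) / sd

      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 5))
      X[50:] += 1.5                          # patients shifted in every feature
      is_control = np.zeros(100, dtype=bool)
      is_control[:50] = True
      Xn = control_based_normalize(X, is_control)
      # An SVM trained on Xn sees group separation preserved, because the
      # scaling does not absorb the between-group variance.

    In contrast, whole-sample standardization would divide each feature by a variance inflated by the group difference itself, shrinking exactly the directions that carry the disease signal.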

  19. Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem

    NASA Astrophysics Data System (ADS)

    Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang

    2018-03-01

    The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of a SO(8) monopole in a nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variable separation method in different coordinate systems. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, all of second order in the momentum components, satisfying the condition of superintegrability. The number seventeen coincides with the prediction of the (2n - 1) law of maximal superintegrability order for n = 9. Until now, this law has been accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; our results can therefore be treated as evidence that this definition of superintegrability may also apply to some vector equations such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.

  20. Considerations of solar wind dynamics in mapping of Jupiter's auroral features to magnetospheric sources

    NASA Astrophysics Data System (ADS)

    Gyalay, S.; Vogt, M.; Withers, P.

    2015-12-01

    Previous studies have mapped locations from the magnetic equator to the ionosphere in order to understand how auroral features relate to magnetospheric sources. Vogt et al. (2011) in particular mapped equatorial regions to the ionosphere by using a method of flux equivalence—requiring that the magnetic flux in a specified region at the equator is equal to the magnetic flux in the region to which it maps in the ionosphere. This is preferred to methods relying on tracing field lines from global Jovian magnetic field models, which are inaccurate beyond 30 Jupiter radii from the planet. That previous study produced a two-dimensional model—accounting for changes with radial distance and local time—of the normal component of the magnetic field in the equatorial region. However, this two-dimensional fit—which aggregated all equatorial data from Pioneer 10, Pioneer 11, Voyager 1, Voyager 2, Ulysses, and Galileo—did not account for temporal variability resulting from changing solar wind conditions. Building off of that project, this study aims to map the Jovian aurora to the magnetosphere for two separate cases: with a nominal magnetosphere, and with a magnetosphere compressed by high solar wind dynamic pressure. Using the Michigan Solar Wind Model (mSWiM) to predict the solar wind conditions upstream of Jupiter, intervals of high solar wind dynamic pressure were separated from intervals of low solar wind dynamic pressure—thus creating two datasets of magnetometer measurements to be used for two separate 2D fits, and two separate mappings.

  1. [Baseflow separation methods in hydrological process research: a review].

    PubMed

    Xu, Lei-Lei; Liu, Jing-Lin; Jin, Chang-Jie; Wang, An-Zhi; Guan, De-Xin; Wu, Jia-Bing; Yuan, Feng-Hui

    2011-11-01

    Baseflow separation research is regarded as one of the most important and difficult issues in hydrology and ecohydrology, but it lacks unified standards for concepts and methods. This paper introduces the theories of baseflow separation based on the definitions of baseflow components and analyzes the development of different baseflow separation methods. Among the methods developed, the graph separation method is simple and applicable but arbitrary; the balance method accords with hydrological mechanisms but is difficult to apply; whereas time series separation methods and isotopic methods can overcome the subjective and arbitrary defects of graph separation and obtain the baseflow process quickly and efficiently. In recent years, hydrological modeling, digital filtering, and isotopic methods have been the main methods used for baseflow separation.

  2. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
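
    For orientation, the classical fixed end-member, two-component mixing model that the TRaMM generalizes can be written in a few lines. The TRaMM's tracer-ratio estimate of a time-variable fastflow end-member is not reproduced here, and the concentrations and flow below are hypothetical.

      def fastflow_fraction(c_stream, c_slow, c_fast):
          """Two-component mixing: solve Q = Qs + Qf and
          C*Q = Cs*Qs + Cf*Qf for the fastflow fraction Qf/Q."""
          return (c_stream - c_slow) / (c_fast - c_slow)

      q_total = 12.0   # total streamflow, m^3/s (hypothetical)
      frac = fastflow_fraction(c_stream=3.1, c_slow=5.0, c_fast=0.8)
      print(frac, frac * q_total)   # fastflow fraction and discharge

    The subjectivity the paper addresses lives in c_slow and c_fast: in the classical model they are fixed by the analyst, whereas the TRaMM uses the ratio of two tracers to let the fastflow end-member vary in time.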

  3. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. Here, we present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  4. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. Here, we present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  5. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis-that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  6. Variability in rainfall at monitoring stations and derivation of a long-term rainfall intensity record in the Grand Canyon Region, Arizona, USA

    USGS Publications Warehouse

    Caster, Joshua J.; Sankey, Joel B.

    2016-04-11

    In this study, we examine rainfall datasets of varying temporal length, resolution, and spatial distribution to characterize rainfall depth, intensity, and seasonality for monitoring stations along the Colorado River within Marble and Grand Canyons. We identify maximum separation distances between stations at which rainfall measurements might be most useful for inferring rainfall characteristics at other locations. We demonstrate a method for applying relations between daily rainfall depth and intensity, from short-term high-resolution data to lower-resolution longer-term data, to synthesize a long-term record of daily rainfall intensity from 1950–2012. We consider the implications of our spatio-temporal characterization of rainfall for understanding local landscape change in sedimentary deposits and archaeological sites, and for better characterizing past and present rainfall and its potential role in overland flow erosion within the canyons. We find that rainfall measured at stations within the river corridor is spatially correlated at separation distances of tens of kilometers, and is not correlated at the large elevation differences that separate stations along the Colorado River from stations above the canyon rim. These results provide guidance for reasonable separation distances at which rainfall measurements at stations within the Grand Canyon region might be used to infer rainfall at other nearby locations along the river. Like other rugged landscapes, spatial variability between rainfall measured at monitoring stations appears to be influenced by canyon and rim physiography and elevation, with preliminary results suggesting the highest elevation landform in the region, the Kaibab Plateau, may function as an important orographic influence. Stations at specific locations within the canyons and along the river, such as in southern (lower) Marble Canyon and eastern (upper) Grand Canyon, appear to have strong potential to receive high-intensity rainfall that can generate runoff which may erode alluvium. The characterization of past and present rainfall variability in this study will be useful for future studies that evaluate more spatially continuous datasets in order to better understand the rainfall dynamics within this, and potentially other, deep canyons.

  7. Analysis Methodology for Balancing Authority Cooperation in High Penetration of Variable Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, Yuri V.; Etingov, Pavel V.; Zhou, Ning

    2010-02-01

    With the rapidly growing penetration of wind and solar generation, the challenges of managing the variability and uncertainty of intermittent renewable generation become more and more significant. The problem of power variability and uncertainty is exacerbated when each balancing authority (BA) works locally and separately to balance its own subsystem. The virtual BA concept encompasses various forms of collaboration between individual BAs to manage power variability and uncertainty. The virtual BA will have a wide-area control capability for managing its operational balancing requirements over different time frames. This coordination improves the efficiency and reliability of power system operation while facilitating the high-level integration of green, intermittent energy resources. Several strategies for virtual BA implementation, such as ACE diversity interchange (ADI), wind-only BA, BA consolidation, dynamic scheduling, regulation and load-following sharing, and extreme event impact studies, are discussed in this report. The objective of such strategies is to allow individual BAs within a large power grid to help each other deal with power variability. Innovative methods have been developed to simulate the balancing operation of BAs. These methods evaluate BA operation through a number of metrics — such as capacity, ramp rate, ramp duration, energy and cycling requirements — to evaluate the performance of different virtual BA strategies. The report builds a systematic framework for evaluating BA consolidation and coordination. Results from case studies show that significant economic and reliability benefits can be gained. The merits and limitations of each virtual BA strategy are investigated. The report provides guidelines for the power industry to evaluate coordination and consolidation methods. The application of the developed strategies in cooperation with several regional BAs is in progress in several spin-off projects.

  8. Addressing the identification problem in age-period-cohort analysis: a tutorial on the use of partial least squares and principal components analysis.

    PubMed

    Tu, Yu-Kang; Krämer, Nicole; Lee, Wen-Chung

    2012-07-01

    In the analysis of trends in health outcomes, an ongoing issue is how to separate and estimate the effects of age, period, and cohort. As these 3 variables are perfectly collinear by definition, regression coefficients in a general linear model are not unique. In this tutorial, we review why identification is a problem, and how this problem may be tackled using partial least squares and principal components regression analyses. Both methods produce regression coefficients that fulfill the same collinearity constraint as the variables age, period, and cohort. We show that, because the constraint imposed by partial least squares and principal components regression is inherent in the mathematical relation among the 3 variables, this leads to more interpretable results. We use one dataset from a Taiwanese health-screening program to illustrate how to use partial least squares regression to analyze the trends in body heights with 3 continuous variables for age, period, and cohort. We then use another dataset of hepatocellular carcinoma mortality rates for Taiwanese men to illustrate how to use partial least squares regression to analyze tables with aggregated data. We use the second dataset to show the relation between the intrinsic estimator, a recently proposed method for the age-period-cohort analysis, and partial least squares regression. We also show that the inclusion of all indicator variables provides a more consistent approach. R code for our analyses is provided in the eAppendix.
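
    The paper provides R code in its eAppendix; the Python analogue below is ours and is only a hedged sketch of the idea. Age, period, and cohort are perfectly collinear (cohort = period - age), so ordinary least squares has no unique solution, while PLS still returns a unique, interpretable coefficient vector. The data are synthetic.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      age = rng.integers(20, 70, 300).astype(float)
      period = rng.integers(1950, 2010, 300).astype(float)
      cohort = period - age                    # exact collinearity by definition
      # Hypothetical outcome (e.g., body height) with age and cohort effects.
      outcome = 160 + 0.05*age + 0.02*(cohort - 1900) + rng.normal(0, 1, 300)

      X = np.column_stack([age, period, cohort])   # rank-deficient design
      pls = PLSRegression(n_components=2).fit(X, outcome)
      # PLS yields a unique solution despite the perfect collinearity of X.
      print(pls.coef_.ravel())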

  9. Reliability of Chinese medicine diagnostic variables in the examination of patients with osteoarthritis of the knee.

    PubMed

    Hua, Bin; Abbas, Estelle; Hayes, Alan; Ryan, Peter; Nelson, Lisa; O'Brien, Kylie

    2012-11-01

    Chinese medicine (CM) has its own diagnostic indicators that are used as evidence of change in a patient's condition. The majority of studies investigating efficacy of Chinese herbal medicine (CHM) have utilized biomedical diagnostic endpoints. For CM clinical diagnostic variables to be incorporated into clinical trial designs, there would need to be evidence that these diagnostic variables are reliable. Previous studies have indicated that the reliability of CM syndrome diagnosis is variable. Little information is known about where the variability stems from--the basic data collection level or the synthesis of diagnostic data, or both. No previous studies have investigated systematically the reliability of all four diagnostic methods used in the CM diagnostic process (Inquiry, Inspection, Auscultation/Olfaction, and Palpation). The objective of this study was to assess the inter-rater reliability of data collected using the four diagnostic methods of CM in Australian patients with knee osteoarthritis (OA), in order to investigate if CM variables could be used with confidence as diagnostic endpoints in a clinical trial investigating the efficacy of a CHM in treating OA. An inter-rater reliability study was conducted as a substudy of a clinical trial investigating the treatment of knee OA with Chinese herbal medicine. Two (2) experienced CM practitioners conducted a CM examination separately, within 2 hours of each other, in 40 participants. A CM assessment form was utilized to record the diagnostic data. Cohen's κ coefficient was used as a measure of the level of agreement between 2 practitioners. There was a relatively good level of agreement for Inquiry and Auscultation variables, and, in general, a low level of agreement for (visual) Inspection and Palpation variables. There was variation in the level of agreement between 2 practitioners on clinical information collected using the Four Diagnostic Methods of a CM examination. Some aspects of CM diagnosis appear to be reliable, while others are not. Based on these results, it was inappropriate to use CM diagnostic variables as diagnostic endpoints in the main study, which was an investigation of efficacy of CHM treatment of knee OA.

  10. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses.

  11. A homogeneous method to measure aminoacyl-tRNA synthetase aminoacylation activity using scintillation proximity assay technology.

    PubMed

    Macarrón, R; Mensah, L; Cid, C; Carranza, C; Benson, N; Pope, A J; Díez, E

    2000-09-10

    A new method to measure the aminoacylation of tRNA based upon the use of the scintillation proximity assay (SPA) technology has been developed. The assay detects incorporation of radiolabeled amino acids into cognate tRNA, catalyzed by a specific aminoacyl-tRNA synthetase (aaRS). Under acidic conditions, uncoated yttrium silicate SPA beads were found to bind tRNA aggregates, while the radiolabeled amino acid substrate remains in solution, resulting in good signal discrimination of these two species in the absence of any separation steps. The usefulness of this approach was demonstrated by measurement of steady-state kinetic constants and inhibitor binding constants for a range of aaRS enzymes in comparison with data from standard, trichloroacetic acid-precipitation-based assays. In all cases, the data were quantitatively comparable. Although the radioisotopic counting efficiency of the SPA method was less than that of standard liquid scintillation counting, the statistical performance (i.e., signal to background, variability, stability) of the SPA assays was at least equivalent to the separation-based methods. The assay was also shown to work well in miniaturized 384-well microtiter plate formats, resulting in considerable reagent savings. In summary, a new method to characterize aaRS activity is described that is faster and more amenable to high-throughput screening than traditional methods. Copyright 2000 Academic Press.

  12. Effect of defuzzification method of fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
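
    A minimal sketch of the two defuzzification schemes being compared, COA versus WACC, under assumed triangular reference sets and invented membership degrees:

```python
# Sketch contrasting centre-of-area (COA) defuzzification with the weighted
# average of cluster centres (WACC); reference sets and values are illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 1001)            # universe of discourse
centers = np.array([0.0, 0.5, 1.0])        # peaks of three triangular reference sets

def tri(x, c, w=0.5):
    # Triangular membership function centred at c with half-width w
    return np.clip(1.0 - np.abs(x - c) / w, 0.0, None)

mu = np.array([0.2, 0.8, 0.0])             # example membership degrees from inference

# COA: centroid of the clipped, aggregated membership function (needs integration)
agg = np.max([np.minimum(tri(x, c), m) for c, m in zip(centers, mu)], axis=0)
coa = np.trapz(agg * x, x) / np.trapz(agg, x)

# WACC: weighted average of the cluster centres, no integration required
wacc = np.dot(mu, centers) / mu.sum()

print(f"COA = {coa:.3f}, WACC = {wacc:.3f}")
```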

  13. Psychological Separation, Attachment Security, Vocational Self-Concept Crystallization, and Career Indecision: A Structural Equation Analysis.

    ERIC Educational Resources Information Center

    Tokar, David M.; Withrow, Jason R.; Hall, Rosalie J.; Moradi, Bonnie

    2003-01-01

    Structural equation modeling was used to test theoretically based models in which psychological separation and attachment security variables were related to career indecision and those relations were mediated through vocational self-concept crystallization. Results indicated that some components of separation and attachment security did relate to…

  14. Factors Moderating Children's Adjustment to Parental Separation: Findings from a Community Study in England

    ERIC Educational Resources Information Center

    Cheng, Helen; Dunn, Judy; O'Connor, Thomas G.; Golding, Jean

    2006-01-01

    Research findings show that there is marked variability in children's response to parental separation, but few studies identify the sources of this variation. This prospective longitudinal study examines the factors modifying children's adjustment to parental separation in a community sample of 5,635 families in England. Children's…

  15. Adjustable shear stress erosion and transport flume

    DOEpatents

    Roberts, Jesse D.; Jepsen, Richard A.

    2002-01-01

    A method and apparatus for measuring the total erosion rate and downstream transport of suspended and bedload sediments using an adjustable shear stress erosion and transport (ASSET) flume with a variable-depth sediment core sample. Water is forced past a variable-depth sediment core sample in a closed channel, eroding sediments, and introducing suspended and bedload sediments into the flow stream. The core sample is continuously pushed into the flow stream, while keeping the surface level with the bottom of the channel. Eroded bedload sediments are transported downstream and then gravitationally separated from the flow stream into one or more quiescent traps. The captured bedload sediments (particles and aggregates) are weighed and compared to the total mass of sediment eroded, and also to the concentration of sediments suspended in the flow stream.

  16. Judgments of eye level in light and in darkness

    NASA Technical Reports Server (NTRS)

    Stoper, Arnold E.; Cohen, Malcolm M.

    1986-01-01

    Subjects judged eye level in the light and in the dark by raising and lowering themselves in a dental chair until a stationary target appeared to be at the level of their eyes. This method reduced the possibility of subjects' using visible landmarks as reference points for setting eye level during lighted trials, which may have contributed to artificially low estimates of the variability of this judgment in previous studies. Chair settings were 2.5 deg higher in the dark than in the light, and variability was approximately 66 percent greater in the dark than in the light. These results are discussed in terms of possible interactions of two separate systems, one sensitive to the orientations of visible surfaces and the other sensitive to bodily and gravitational information.

  17. Presentation and Treatment of Poland Anomaly

    PubMed Central

    Buckwalter V, Joseph A.; Shah, Apurva S.

    2016-01-01

    Background: Poland anomaly is a sporadic, phenotypically variable congenital condition usually characterized by unilateral pectoral muscle agenesis and ipsilateral hand deformity. Methods: A comprehensive review of the medical literature on Poland anomaly was performed using a Medline search. Results: Poland anomaly is a sporadic, phenotypically variable congenital condition usually characterized by unilateral, simple syndactyly with ipsilateral limb hypoplasia and pectoralis muscle agenesis. Operative management of syndactyly in Poland anomaly is determined by the severity of hand involvement and the resulting anatomical dysfunction. Syndactyly reconstruction is recommended in all but the mildest cases because most patients with Poland anomaly have notable brachydactyly, and digital separation can improve functional length. Conclusions: Improved understanding of the etiology and presentation of Poland anomaly can improve clinician recognition and management of this rare congenital condition. PMID:28149203

  18. Study on Sumbawa gold recovery using centrifuge

    NASA Astrophysics Data System (ADS)

    Ferdana, A. D.; Petrus, H. T. B. M.; Bendiyasa, I. M.; Prijambada, I. D.; Hamada, F.; Sachiko, T.

    2018-01-01

    The Artisanal Small Gold Mining in Sumbawa has been processing gold with mercury (Hg), which poses a serious threat to the mining and global environment. One method of gold processing that does not use mercury is the gravity method. Before processing, the ore was first characterized by mineragraphic analysis and by compound analysis with XRD. Mineragraphic results show that gold is associated with chalcopyrite and covellite and occurs as single (native) particles of 58.8 μm and 117 μm up to 294 μm. Characterization with XRD shows that the Sumbawa gold ore is composed of quartz, pyrite, pyroxene, and sericite. Centrifugation is a gravity-based separation technique that upgrades the concentrate by exploiting differences in specific gravity. The optimum concentration result is influenced by several variables, such as water flow rate and particle size. In the present research, flow rates of 5 lpm and 10 lpm and particle sizes of -100+200 mesh and -200+300 mesh were examined. Gold concentration in the concentrate was measured by EDX. The results show that the optimum condition is obtained at a separation with a flow rate of 5 lpm and a particle size of -100+200 mesh.

  19. A technique for generating phase-space-based Monte Carlo beamlets in radiotherapy applications.

    PubMed

    Bush, K; Popescu, I A; Zavgorodni, S

    2008-09-21

    As radiotherapy treatment planning moves toward Monte Carlo (MC) based dose calculation methods, the MC beamlet is becoming an increasingly common optimization entity. At present, methods used to produce MC beamlets have utilized a particle source model (PSM) approach. In this work we outline the implementation of a phase-space-based approach to MC beamlet generation that is expected to provide greater accuracy in beamlet dose distributions. In this approach a standard BEAMnrc phase space is sorted and divided into beamlets with particles labeled using the inheritable particle history variable. This is achieved with the use of an efficient sorting algorithm, capable of sorting a phase space of any size into the required number of beamlets in only two passes. Sorting a phase space of five million particles can be achieved in less than 8 s on a single-core 2.2 GHz CPU. The beamlets can then be transported separately into a patient CT dataset, producing separate dose distributions (doselets). Methods for doselet normalization and conversion of dose to absolute units of Gy for use in intensity modulated radiation therapy (IMRT) plan optimization are also described.
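
    The two-pass sort described above can be sketched schematically as a counting-style partition; this is not the BEAMnrc file format or the authors' code, just the idea with mock beamlet assignments:

```python
# Schematic two-pass grouping of phase-space records by beamlet index.
# Real code would read BEAMnrc phase-space files; records here are mock.
import numpy as np

n_particles, n_beamlets = 5_000_000, 100
# Pass 1: assign each record to a beamlet (mock random assignment; in practice
# particles are binned by position/direction at the phase-space plane)
beamlet_of = np.random.randint(0, n_beamlets, n_particles)

# Pass 2: a stable sort groups records by beamlet index, so each beamlet's
# particles occupy one contiguous slice of the reordered phase space
order = np.argsort(beamlet_of, kind="stable")
counts = np.bincount(beamlet_of, minlength=n_beamlets)
starts = np.concatenate(([0], np.cumsum(counts)[:-1]))

beamlet_7 = order[starts[7] : starts[7] + counts[7]]   # record indices of beamlet 7
print(counts[7], beamlet_7[:5])
```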

  20. Variability of exhaled breath condensate pH in lung transplant recipients.

    PubMed

    Czebe, Krisztina; Kullmann, Tamas; Csiszer, Eszter; Barat, Erzsebet; Horvath, Ildiko; Antus, Balazs

    2008-01-01

    Measurement of pH in exhaled breath condensate (EBC) may represent a novel method for investigating airway pathology. The aim of this longitudinal study was to assess the variability of EBC pH in stable lung transplant recipients (LTR). During routine clinical visits 74 EBC pH measurements were performed in 17 LTR. EBC pH was also measured in 19 healthy volunteers on four separate occasions. EBC pH was determined at standard CO2 partial pressure by a blood gas analyzer. Mean EBC pH in clinically stable LTR and in controls was similar (6.38 +/- 0.09 vs. 6.44 +/- 0.16; p = nonsignificant). Coefficient of variation for pH in LTR and controls was 2.1 and 2.3%, respectively. The limits of agreement for between-visit variability determined by the Bland-Altman test in LTR and healthy volunteers were also comparable (-0.29 and 0.46 vs. -0.53 and 0.44). Our data suggest that the variability of EBC pH in stable LTR is relatively small, and it is similar to that in healthy nontransplant subjects.

  1. Diurnal variation of eye movement and heart rate variability in the human fetus at term.

    PubMed

    Morokuma, S; Horimoto, N; Satoh, S; Nakano, H

    2001-07-01

    To elucidate diurnal variations in eye movement and fetal heart rate (FHR) variability in the term fetus, we observed these two parameters continuously for 24 h, using real-time ultrasound and Doppler cardiotocograph, respectively. Five uncomplicated fetuses at term were studied. The time series data of the presence and absence of eye movement and the mean FHR value for each 1 min were analyzed using the maximum entropy method (MEM) and subsequent nonlinear least squares fitting. According to the power value of eye movement, the five cases were classified into two groups: three cases in the large power group and two cases in the small power group. The acrophases of eye movement and FHR variability in the large power group were close, implying the existence of a diurnal rhythm in both parameters and also that they are synchronized. In the small power group, the acrophases were separated. The synchronization of eye movement and FHR variability in the large power group suggests that these phenomena are governed by a common central mechanism related to diurnal rhythm generation.

  2. Impacts analysis of car following models considering variable vehicular gap policies

    NASA Astrophysics Data System (ADS)

    Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke

    2018-07-01

    Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were applied to the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car-following models were put forward by incorporating the constant time headway (CTH) policy and the variable time headway (VTH) policy into the optimal velocity function, separately. Through steady-state analysis of the new models, an equivalent optimal velocity function was defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions for the safe vehicular gap, and then apply small-amplitude perturbation analysis and long-wave expansion techniques to obtain the new models' linear stability conditions. Additionally, first-order approximate solutions of the new models were derived in the stable region by transforming the models into typical Burgers' partial differential equations with the reductive perturbation method. The FVDM-based numerical simulations indicate that variable vehicular gap policies with proper parameters directly contribute to improving the stability of traffic flow and avoiding unstable traffic phenomena.
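
    A hedged sketch of a car-following simulation with a CTH-style gap policy folded into an assumed optimal velocity function; the parameters and functional form are illustrative, not the authors' exact model:

```python
# Minimal platoon simulation of a full-velocity-difference-type model with a
# constant time headway (CTH) gap policy; all parameters are illustrative.
import numpy as np

n, L, dt, steps = 20, 200.0, 0.1, 5000
alpha, lam = 0.6, 0.5                      # sensitivity to velocity gap / relative speed
v_max, s0, T = 15.0, 2.0, 1.2              # CTH desired gap: s* = s0 + T*v

x = np.linspace(0.0, L, n, endpoint=False) # equally spaced cars on a ring road
v = 5.0 + np.random.normal(0.0, 0.5, n)    # small perturbation around 5 m/s

def v_opt(gap, speed):
    # Assumed optimal velocity shaped by the CTH policy: drive faster when the
    # actual gap exceeds the desired gap s0 + T*v, saturating at v_max.
    return v_max * np.clip((gap - s0 - T * speed) / (T * v_max) + 0.5, 0.0, 1.0)

for _ in range(steps):
    lead = np.roll(x, -1)                  # each car follows the next one ahead
    gap = (lead - x) % L
    dv = np.roll(v, -1) - v
    a = alpha * (v_opt(gap, v) - v) + lam * dv
    v = np.clip(v + a * dt, 0.0, v_max)
    x = (x + v * dt) % L

print(f"mean speed {v.mean():.2f} m/s, speed std {v.std():.3f}")
```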

  3. Latent variable modeling to analyze the effects of process parameters on the dissolution of paracetamol tablet

    PubMed Central

    Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Shi, Xinyuan; Qiao, Yanjiang

    2017-01-01

    The dissolution is one of the critical quality attributes (CQAs) of oral solid dosage forms because it relates to the absorption of the drug. In this paper, the influence of raw materials, granules, and process parameters on the dissolution of paracetamol tablets was analyzed using latent variable modeling methods. The variability in raw materials and in granules was characterized using principal component analysis (PCA). A multi-block partial least squares (MBPLS) model was used to determine the critical factors affecting the dissolution. The results showed that the binder amount, the post-granulation time, the API content in the granules, the fill depth, and the punch tip separation distance were the critical factors, with variable importance in the projection (VIP) values larger than 1. The importance of each unit of the whole process was also ranked using the block importance in the projection (BIP) index. It was concluded that latent variable models (LVMs) are very useful tools to extract information from the available data and to improve the understanding of the dissolution behavior of paracetamol tablets. The obtained LVMs are also helpful for proposing the process design space and designing control strategies in further research. PMID:27689242
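
    A hedged sketch of how VIP values might be computed from a fitted PLS model; the scikit-learn attributes are assumed, and the data and feature layout are placeholders rather than the paper's:

```python
# Hedged sketch: variable importance in the projection (VIP) from a fitted
# scikit-learn PLS model; features and responses are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))               # e.g., process parameters + granule properties
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=60)   # mock dissolution

pls = PLSRegression(n_components=3).fit(X, y)
W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_

# Sum of squares of y explained by each latent component: q_a^2 * (t_a' t_a)
ss = (Q[0] ** 2) * np.einsum("ia,ia->a", T, T)
p = X.shape[1]
vip = np.sqrt(p * ((W / np.linalg.norm(W, axis=0)) ** 2 @ ss) / ss.sum())

print(np.round(vip, 2))                    # VIP > 1 flags a potentially critical factor
```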

  4. Decadal Variation's Offset of Global Warming in Recent Tropical Pacific Climate

    NASA Astrophysics Data System (ADS)

    Yeo, S. R.; Yeh, S. W.; Kim, K. Y.; Kim, W.

    2015-12-01

    Despite the increasing greenhouse gas concentration, there has been no significant warming in sea surface temperature (SST) over the tropical eastern Pacific since about 2000. This counterintuitive observation has generated substantial interest in the role of low-frequency variation over the Pacific Ocean, such as the Pacific Decadal Oscillation (PDO) or the Interdecadal Pacific Oscillation (IPO). Therefore, it is necessary to appropriately separate low-frequency variability and global warming in SST records. Here we present three primary modes of global SST, a secular warming trend, a low-frequency variability, and a biennial oscillation, obtained through the use of a novel statistical method. By analyzing the temporal behavior of the three modes, it is found that the opposing contributions of the secular warming trend and the cold phase of the low-frequency variability since 1999 account for the warming hiatus in the tropical eastern Pacific. This result implies that the low-frequency variability modulates the manifestation of the global warming signal in tropical Pacific SST. Furthermore, if the low-frequency variability turns to a positive phase, warming in the tropical eastern Pacific will be amplified, and strong El Niño events will occur more frequently in the near future.

  5. Stochastic evaluation of annual micropollutant loads and their uncertainties in separate storm sewers.

    PubMed

    Hannouche, Ali; Chebbo, Ghassan; Joannis, Claude; Gasperi, Johnny; Gromaire, Marie-Christine; Moilleron, Régis; Barraud, Sylvie; Ruban, Véronique

    2017-12-01

    This article describes a stochastic method to calculate annual pollutant loads and its application over several years at the outlet of three catchments drained by separate storm sewers. A stochastic methodology using Monte Carlo simulations is proposed for assessing the annual pollutant load, as well as the associated uncertainties, from a few event sampling campaigns and/or continuous turbidity measurements (representative of the total suspended solids (TSS) concentration). Indeed, in the latter case, the proposed method takes into account the correlation between pollutants and TSS. The developed method was applied to data acquired within the French research project "INOGEV" (innovations for a sustainable management of urban water) at the outlet of three urban catchments drained by separate storm sewers. Ten or so event sampling campaigns covering a large range of pollutants (46 pollutants and 2 conventional water quality parameters: TSS and total organic carbon (TOC)) are combined with hundreds of rainfall events for which at least one of three continuously monitored parameters (rainfall intensity, flow rate, and turbidity) is available. Results obtained for the three catchments show that annual pollutant loads can be estimated with uncertainties ranging from 10 to 60%, and the added value of turbidity monitoring for lowering the uncertainty is demonstrated. A low inter-annual and inter-site variability of pollutant loads is observed for many of the studied pollutants with respect to the estimated uncertainties, and can be explained mainly by annual precipitation.
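
    The Monte Carlo idea can be sketched as follows for a single pollutant; the event concentrations, runoff volume, and error model below are invented for illustration and are not the INOGEV data:

```python
# Illustrative Monte Carlo estimate of an annual pollutant load and its
# uncertainty from a handful of sampled events; all numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
emc_samples = np.array([120., 95., 210., 160., 88., 140.])  # event mean conc. (ug/L)
annual_runoff_m3 = 250_000.0               # from continuous flow monitoring
runoff_rel_err = 0.10                      # assumed relative uncertainty on volume

n_draws = 100_000
# Resample event concentrations (bootstrap) and perturb the runoff volume
conc = rng.choice(emc_samples, size=n_draws, replace=True)
vol = annual_runoff_m3 * rng.normal(1.0, runoff_rel_err, n_draws)
load_kg = conc * vol * 1e-6                # (ug/L) * m3 * 1000 L/m3 * 1e-9 kg/ug

lo, med, hi = np.percentile(load_kg, [5, 50, 95])
print(f"annual load ~ {med:.0f} kg (90% interval {lo:.0f}-{hi:.0f} kg)")
```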

  6. Simultaneous LC determination of paracetamol and related compounds in pharmaceutical formulations using a carbon-based column.

    PubMed

    Monser, Lotfi; Darghouth, Frida

    2002-03-01

    A simple, rapid, and convenient high performance liquid chromatographic method that permits the simultaneous determination of paracetamol, 4-aminophenol, and 4-chloroacetanilide in pharmaceutical preparations has been developed. The chromatographic separation was achieved on a porous graphitized carbon (PGC) column using an isocratic mixture of 80/20 (v/v) acetonitrile/0.05 M potassium phosphate buffer (pH 5.5) and ultraviolet detection at 244 nm. Correlation coefficients for calibration curves in the ranges 1-50 microg ml(-1) for paracetamol and 5-40 microg ml(-1) for 4-aminophenol and 4-chloroacetanilide were >0.99. The sensitivity of detection is 0.1 microg ml(-1) for paracetamol and 0.5 microg ml(-1) for 4-aminophenol and 4-chloroacetanilide. The proposed liquid chromatographic method was successfully applied to the analysis of commercially available paracetamol dosage forms, with recoveries of 98-103%. It is suggested that the proposed method be used for routine quality control and dosage form assay of paracetamol in pharmaceutical preparations. The chromatographic behaviour of the three compounds was examined under variable mobile phase compositions and pH; the results revealed that selectivity was dependent on the organic solvent and pH used. The retention selectivity of these compounds on PGC was compared with that of octadecylsilica (ODS) packing materials in reversed phase liquid chromatography. The ODS column gave poor separation of the degradation product (4-aminophenol) from paracetamol, whereas the PGC column provides better separation in a much shorter time.

  7. Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method

    PubMed Central

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.

    2007-01-01

    Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results, each having its strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running each algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281

  8. Performance of blind source separation algorithms for fMRI analysis using a group ICA method.

    PubMed

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D

    2007-06-01

    Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.

  9. Process parameters and morphology in puerarin, phospholipids and their complex microparticles generation by supercritical antisolvent precipitation.

    PubMed

    Li, Ying; Yang, Da-Jian; Chen, Shi-Lin; Chen, Si-Bao; Chan, Albert Sun-Chi

    2008-07-09

    The aim of the study was to develop and evaluate a new method for the production of puerarin phospholipids complex (PPC) microparticles. The advanced particle formation method, solution enhanced dispersion by supercritical fluids (SEDS), was used for the preparation of puerarin (Pur), phospholipids (PC), and their complex particles for the first time. An evaluation of the effects of the processing variables on PPC particle characteristics was also conducted. The processing variables included temperature, pressure, solution concentration, the flow rate of supercritical carbon dioxide (SC-CO2), and the relative flow rate of drug solution to CO2. The morphology, particle size, and size distribution of the particles were determined. Meanwhile, Pur and phospholipids were separately prepared by the gas antisolvent precipitation (GAS) method, and the solid-state characteristics of the particles from the two supercritical methods were also compared. Pur formed by GAS was a more ordered, purer crystal, whereas amorphous Pur particles between 0.5 and 1 microm were formed by SEDS. The complex was successfully obtained by SEDS, exhibiting amorphous, partially agglomerated spheres comprised of particles sized only about 1 microm. The SEDS method may be useful for the processing of other pharmaceutical preparations besides phospholipids complex particles. Furthermore, adopting a GAS process to recrystallize pharmaceuticals will provide a highly versatile methodology to generate new polymorphs of drugs in addition to conventional techniques.

  10. Three-dimensional unsteady Euler equations solutions on dynamic grids

    NASA Technical Reports Server (NTRS)

    Belk, D. M.; Janus, J. M.; Whitfield, D. L.

    1985-01-01

    A method is presented for solving the three-dimensional unsteady Euler equations on dynamic grids based on flux vector splitting. The equations are cast in curvilinear coordinates and a finite volume discretization is used for handling arbitrary geometries. The discretized equations are solved using an explicit upwind second-order predictor corrector scheme that is stable for a CFL of 2. Characteristic variable boundary conditions are developed and used for unsteady impermeable surfaces and for the far-field boundary. Dynamic-grid results are presented for an oscillating airfoil and for a store separating from a reflection plate. For the cases considered of stores separating from a reflection plate, the unsteady aerodynamic forces on the store are significantly different from forces obtained by steady-state aerodynamics with the body inclination angle changed to account for plunge velocity.

  11. Cognitive Profiles of Mathematical Problem Solving Learning Disability for Different Definitions of Disability

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Fletcher, Jack M.; Fuchs, Douglas; Hamlett, Carol L.

    2014-01-01

    Three cohorts of third-grade students (N = 813) were evaluated on achievement, cognitive abilities, and behavioral attention according to contrasting research traditions in defining math learning disability (LD) status: low achievement versus extremely low achievement and IQ-achievement discrepant versus strictly low-achieving LD. We use methods from these two traditions to form math problem solving LD groups. To evaluate group differences, we used MANOVA-based profile and canonical analyses to control for relations among the outcomes and regression to control for group definition variables. Results suggest that basic arithmetic is the key distinguishing characteristic that separates low-achieving problem solvers (including LD, regardless of definition) from typically achieving students. Word problem solving is the key distinguishing characteristic that separates IQ-achievement-discrepant from strictly low-achieving LD students, favoring the IQ-achievement-discrepant students. PMID:24939971

  12. An Adaptive Mesh Algorithm: Mapping the Mesh Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scannapieco, Anthony J.

    2016-07-25

    Both thermodynamic and kinematic variables must be mapped. The kinematic variables are defined on a separate kinematic mesh; it is the dual mesh to the thermodynamic mesh. The map of the kinematic variables is done by calculating the contributions of kinematic variables on the old thermodynamic mesh, mapping the kinematic variable contributions onto the new thermodynamic mesh, and then synthesizing the mapped kinematic variables on the new kinematic mesh. In this document, the map of the thermodynamic variables will be described.

  13. Boosting for detection of gene-environment interactions.

    PubMed

    Pashova, H; LeBlanc, M; Kooperberg, C

    2013-01-30

    In genetic association studies, it is typically thought that genetic variants and environmental variables jointly will explain more of the inheritance of a phenotype than either of these two components separately. Traditional methods to identify gene-environment interactions typically consider only one measured environmental variable at a time. However, in practice, multiple environmental factors may each be imprecise surrogates for the underlying physiological process that actually interacts with the genetic factors. In this paper, we develop a variant of L2 boosting that is specifically designed to identify combinations of environmental variables that jointly modify the effect of a gene on a phenotype. Because the effect modifiers might have a small signal compared with the main effects, working in a space that is orthogonal to the main predictors allows us to focus on the interaction space. In a simulation study that investigates some plausible underlying model assumptions, our method outperforms the least absolute shrinkage and selection operator (lasso) and the Akaike information criterion and Bayesian information criterion model selection procedures, attaining the lowest test error. In an example from the Women's Health Initiative-Population Architecture using Genomics and Epidemiology study, the dedicated boosting method was able to pick out two single-nucleotide polymorphisms for which effect modification appears present. The performance was evaluated on an independent test set, and the results are promising. Copyright © 2012 John Wiley & Sons, Ltd.
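
    A conceptual sketch of the approach, componentwise L2 boosting applied after projecting the response and candidate gene-environment terms orthogonal to the main effects; the data and tuning values are synthetic, not the authors' implementation:

```python
# Conceptual sketch: componentwise L2 boosting on an interaction space made
# orthogonal to the main effects, in the spirit described above; data synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 400
g = rng.integers(0, 3, n).astype(float)    # genotype (0/1/2 minor alleles)
E = rng.normal(size=(n, 5))                # five environmental surrogates
y = 0.5 * g + E[:, 0] + 0.8 * g * E[:, 1] + rng.normal(size=n)

main = np.column_stack([np.ones(n), g, E])
H = main @ np.linalg.pinv(main)            # projection onto the main-effect space
inter = g[:, None] * E                     # candidate G x E interaction terms

r = y - H @ y                              # work orthogonally to the main effects
Z = inter - H @ inter
Z /= np.linalg.norm(Z, axis=0)             # unit-norm base learners

beta, nu = np.zeros(Z.shape[1]), 0.1       # boosting coefficients, step size
for _ in range(200):
    scores = Z.T @ r                       # fit of each term to the current residual
    j = np.argmax(np.abs(scores))          # pick the best-fitting base learner
    beta[j] += nu * scores[j]
    r -= nu * scores[j] * Z[:, j]

print(np.round(beta, 2))                   # large |beta| flags an effect modifier
```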

  14. Statistical modeling methods to analyze the impacts of multiunit process variability on critical quality attributes of Chinese herbal medicine tablets.

    PubMed

    Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Yang, Chan; Cui, Xianglong; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules' properties on the CQAs were fully investigated by different chemometric methods, such as the partial least squares, the orthogonal projection to latent structures, and the multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet.

  15. Performance of the AOAC use-dilution method with targeted modifications: collaborative study.

    PubMed

    Tomasino, Stephen F; Parker, Albert E; Hamilton, Martin A; Hamilton, Gordon C

    2012-01-01

    The U.S. Environmental Protection Agency (EPA), in collaboration with an industry work group, spearheaded a collaborative study designed to further enhance the AOAC use-dilution method (UDM). Based on feedback from laboratories that routinely conduct the UDM, improvements to the test culture preparation steps were prioritized. A set of modifications, largely based on culturing the test microbes on agar as specified in the AOAC hard surface carrier test method, were evaluated in a five-laboratory trial. The modifications targeted the preparation of the Pseudomonas aeruginosa test culture due to the difficulty in separating the pellicle from the broth in the current UDM. The proposed modifications (i.e., the modified UDM) were compared to the current UDM methodology for P. aeruginosa and Staphylococcus aureus. Salmonella choleraesuis was not included in the study. The goal was to determine if the modifications reduced method variability. Three efficacy response variables were statistically analyzed: the number of positive carriers, the log reduction, and the pass/fail outcome. The scope of the collaborative study was limited to testing one liquid disinfectant (an EPA-registered quaternary ammonium product) at two levels of presumed product efficacies, high and low. Test conditions included use of 400 ppm hard water as the product diluent and a 5% organic soil load (horse serum) added to the inoculum. Unfortunately, the study failed to support the adoption of the major modification (use of an agar-based approach to grow the test cultures) based on an analysis of method's variability. The repeatability and reproducibility standard deviations for the modified method were equal to or greater than those for the current method across the various test variables. However, the authors propose retaining the frozen stock preparation step of the modified method, and based on the statistical equivalency of the control log densities, support its adoption as a procedural change to the current UDM. The current UDM displayed acceptable responsiveness to changes in product efficacy; acceptable repeatability across multiple tests in each laboratory for the control counts and log reductions; and acceptable reproducibility across multiple laboratories for the control log density values and log reductions. Although the data do not support the adoption of all modifications, the UDM collaborative study data are valuable for assessing sources of method variability and a reassessment of the performance standard for the UDM.

  16. Assessment of Vulnerability to Coccidioidomycosis in Arizona and California.

    PubMed

    Shriber, Jennifer; Conlon, Kathryn C; Benedict, Kaitlin; McCotter, Orion Z; Bell, Jesse E

    2017-06-23

    Coccidioidomycosis is a fungal infection endemic to the southwestern United States, particularly Arizona and California. Its incidence has increased, potentially due in part to the effects of changing climatic variables on fungal growth and spore dissemination. This study aims to quantify the county-level vulnerability to coccidioidomycosis in Arizona and California and to assess the relationships between population vulnerability and climate variability. The variables representing exposure, sensitivity, and adaptive capacity were combined to calculate county-level vulnerability indices. Three methods were used: (1) principal components analysis; (2) quartile weighting; and (3) percentile weighting. Two sets of indices, "unsupervised" and "supervised", were created. Each index was correlated with coccidioidomycosis incidence data from 2000-2014. The supervised percentile index had the highest correlation; it was then correlated with variability measures for temperature, precipitation, and drought. The supervised percentile index was significantly correlated (p < 0.05) with coccidioidomycosis incidence in both states. Moderate, positive significant associations (p < 0.05) were found between index scores and climate variability when both states were concurrently analyzed and when California was analyzed separately. This research adds to the body of knowledge that could be used to target interventions to vulnerable counties and provides support for the hypothesis that population vulnerability to coccidioidomycosis is associated with climate variability.
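
    The percentile-weighting variant can be sketched as follows; the indicator names and county values are invented for illustration and are not the study's data:

```python
# Hedged sketch of percentile weighting: each county's raw indicator is
# replaced by its percentile rank and the ranks are averaged; values are mock.
import pandas as pd

counties = pd.DataFrame({
    "exposure": [3.1, 0.4, 2.2, 5.0],           # e.g., dust-event frequency
    "sensitivity": [0.20, 0.05, 0.33, 0.15],    # e.g., share of population over 65
    "adaptive_capacity": [0.8, 0.9, 0.3, 0.5],  # e.g., insurance coverage rate
}, index=["A", "B", "C", "D"])

ranks = counties.rank(pct=True)
ranks["adaptive_capacity"] = 1.0 - ranks["adaptive_capacity"]  # high capacity lowers vulnerability
vulnerability = ranks.mean(axis=1)
print(vulnerability.sort_values(ascending=False))
```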

  17. Assessment of Vulnerability to Coccidioidomycosis in Arizona and California

    PubMed Central

    Conlon, Kathryn C.; Benedict, Kaitlin; McCotter, Orion Z.; Bell, Jesse E.

    2017-01-01

    Coccidioidomycosis is a fungal infection endemic to the southwestern United States, particularly Arizona and California. Its incidence has increased, potentially due in part to the effects of changing climatic variables on fungal growth and spore dissemination. This study aims to quantify the county-level vulnerability to coccidioidomycosis in Arizona and California and to assess the relationships between population vulnerability and climate variability. The variables representing exposure, sensitivity, and adaptive capacity were combined to calculate county level vulnerability indices. Three methods were used: (1) principal components analysis; (2) quartile weighting; and (3) percentile weighting. Two sets of indices, “unsupervised” and “supervised”, were created. Each index was correlated with coccidioidomycosis incidence data from 2000–2014. The supervised percentile index had the highest correlation; it was then correlated with variability measures for temperature, precipitation, and drought. The supervised percentile index was significantly correlated (p < 0.05) with coccidioidomycosis incidence in both states. Moderate, positive significant associations (p < 0.05) were found between index scores and climate variability when both states were concurrently analyzed and when California was analyzed separately. This research adds to the body of knowledge that could be used to target interventions to vulnerable counties and provides support for the hypothesis that population vulnerability to coccidioidomycosis is associated with climate variability. PMID:28644403

  18. No difference in variability of unique hue selections and binary hue selections.

    PubMed

    Bosten, J M; Lawrance-Owen, A J

    2014-04-01

    If unique hues have special status in phenomenological experience as perceptually pure, it seems reasonable to assume that they are represented more precisely by the visual system than are other colors. Following the method of Malkoc et al. (J. Opt. Soc. Am. A 22, 2154 [2005]), we gathered unique and binary hue selections from 50 subjects. For these subjects we repeated the measurements in two separate sessions, allowing us to measure test-retest reliabilities (0.52 ≤ ρ ≤ 0.78; p ≪ 0.01). We quantified the within-individual variability for selections of each hue. Adjusting for the differences in variability intrinsic to different regions of chromaticity space, we compared the within-individual variability for unique hues to that for binary hues. Surprisingly, we found that selections of unique hues did not show consistently lower variability than selections of binary hues. We repeated hue measurements in a single session for an independent sample of 58 subjects, using a different relative scaling of the cardinal axes of MacLeod-Boynton chromaticity space. Again, we found no consistent difference in adjusted within-individual variability for selections of unique and binary hues. Our finding does not depend on the particular scaling chosen for the Y axis of MacLeod-Boynton chromaticity space.

  19. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    USGS Publications Warehouse

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progress in stream ecology and can lead to erroneous stream fish management and conservation decisions. Beyond variable detection's confounding of long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species-environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
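
    A minimal sketch of the removal-design likelihood underlying such models for a single site, using the fact that multinomial thinning of a Poisson abundance yields independent Poisson pass counts; covariates and the hierarchical multi-site structure are omitted, and the counts are mock:

```python
# Minimal removal-design N-mixture likelihood for one site:
# abundance N ~ Poisson(lambda), per-pass detection p, so pass k's count is
# marginally Poisson(lambda * p * (1 - p)**(k - 1)).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

y = np.array([14, 6, 3])                   # mock 3-pass removal counts of Smallmouth Bass

def nll(theta):
    lam = np.exp(theta[0])                 # log link keeps abundance positive
    p = 1.0 / (1.0 + np.exp(-theta[1]))    # logit link keeps detection in (0, 1)
    k = np.arange(len(y))
    mu = lam * p * (1.0 - p) ** k
    return -poisson.logpmf(y, mu).sum()

fit = minimize(nll, x0=[np.log(y.sum()), 0.0], method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"abundance ~ {lam_hat:.1f}, per-pass detection ~ {p_hat:.2f}")
```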

  20. Connection of stratospheric QBO with global atmospheric general circulation and tropical SST. Part I: methodology and composite life cycle

    NASA Astrophysics Data System (ADS)

    Huang, Bohua; Hu, Zeng-Zhen; Kinter, James L.; Wu, Zhaohua; Kumar, Arun

    2012-01-01

    The stratospheric quasi-biennial oscillation (QBO) and its association with the interannual variability in the stratosphere and troposphere, as well as in tropical sea surface temperature anomalies (SSTA), are examined in the context of a QBO life cycle. The analysis is based on the ERA40 and NCEP/NCAR reanalyses, radiosonde observations at Singapore, and other observation-based datasets. Both reanalyses reproduce the QBO life cycle and its associated variability in the stratosphere reasonably well, except that some long-term changes are detected only in the NCEP/NCAR reanalysis. In order to separate QBO from variability on other time scales and to eliminate the long-term changes, a scale separation technique [Ensemble Empirical Mode Decomposition (EEMD)] is applied to the raw data. The QBO component of zonal wind anomalies at 30 hPa, extracted using the EEMD method, is defined as a QBO index. Using this index, the QBO life cycle composites of stratosphere and troposphere variables, as well as SSTA, are constructed and examined. The composite features in the stratosphere are generally consistent with previous investigations. The correlations between the QBO and tropical Pacific SSTA depend on the phase in a QBO life cycle. On average, cold (warm) SSTA peaks about half a year after the maximum westerlies (easterlies) at 30 hPa. The connection of the QBO with the troposphere seems to be associated with the differences of temperature anomalies between the stratosphere and troposphere. While the anomalies in the stratosphere propagate downward systematically, some anomalies in the troposphere develop and expand vertically. Therefore, it is possible that the temperature difference between the troposphere and stratosphere may alter the atmospheric stability and tropical deep convection, which modulates the Walker circulation and SSTA in the equatorial Pacific Ocean.
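
    The EEMD scale-separation step might be sketched with the third-party PyEMD package (an assumption on our part, not necessarily the authors' tooling) applied to a synthetic QBO-like series:

```python
# Sketch of EEMD scale separation using the PyEMD package (PyPI: "EMD-signal");
# the input zonal-wind series here is synthetic, not the ERA40/NCEP data.
import numpy as np
from PyEMD import EEMD

t = np.linspace(0, 40, 480)                # ~40 years of monthly values
# QBO-like oscillation (~28-month period in years: 2.3) + slow trend + noise
u30 = 10 * np.sin(2 * np.pi * t / 2.3) + 0.05 * t + np.random.normal(0, 1, t.size)

eemd = EEMD(trials=100)                    # ensemble of noise-assisted EMD runs
imfs = eemd.eemd(u30, t)                   # intrinsic mode functions, fast to slow

# A QBO index would be the IMF whose mean period is near 28 months; the slowest
# component/residual carries the long-term change that is to be excluded.
print(imfs.shape)
```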

  1. Short-term Variability of Extinction by Broadband Stellar Photometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musat, I.C.; Ellingson, R.G.

    2005-03-18

    Aerosol optical depth variation over short-term time intervals is determined from broadband observations of stars with a whole sky imager. The main difficulty in such measurements consists of accurately separating the star flux value from the non-stellar diffuse skylight. Using a correction method to overcome this difficulty, the monochromatic extinction at the ground due to aerosols is extracted from heterochromatic measurements. A form of closure is achieved by comparison with simultaneous or temporally close measurements with other instruments, and the total error of the method, as a combination of random error of measurements and systematic error of calibration and model, is assessed as being between 2.6 and 3% rms.

  2. A users guide for A344: A program using a finite difference method to analyze transonic flow over oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Weatherill, W. H.; Ehlers, F. E.

    1979-01-01

    The design and usage of a pilot program for calculating the pressure distributions over harmonically oscillating airfoils in transonic flow are described. The procedure used is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equations for small disturbances. The steady velocity potential, which must be obtained from some other program, is required as input. The unsteady equation, as solved, is linear with spatially varying coefficients. Since sinusoidal motion is assumed, time is not a variable. The numerical solution is obtained through a finite difference formulation and either a line relaxation or an out-of-core direct solution method.

  3. Characteristics of women who frequently under report their energy intake: a doubly labelled water study.

    PubMed

    Scagliusi, F B; Ferriolli, E; Pfrimer, K; Laureano, C; Cunha, C S F; Gualano, B; Lourenço, B H; Lancha, A H

    2009-10-01

    We applied three dietary assessment methods and aimed at obtaining a set of physical, social and psychological variables that can discriminate those individuals who did not underreport ('never under-reporters'), those who underreported in one dietary assessment method ('occasional under-reporters') and those who underreported in two or three dietary assessment methods ('frequent under-reporters'). Sixty-five women aged 18-57 years were recruited for this study. Total energy expenditure was determined by doubly labelled water, and energy intake was estimated by three 24-h diet recalls, 3-day food records and a food frequency questionnaire. A multiple discriminant analysis was used to identify which of those variables better discriminated the three groups: body mass index (BMI), income, education, social desirability, nutritional knowledge, dietary restraint, physical activity practice, body dissatisfaction and binge-eating symptoms. Twenty-three participants were 'never under-reporters'. Twenty-four participants were 'occasional under-reporters' and 18 were 'frequent under-reporters'. Four variables entered the discriminant model: income, BMI, social desirability and body dissatisfaction. According to potency indices, income contributed the most to the total discriminant power, followed in decreasing order by social desirability score, BMI and body dissatisfaction. Income, social desirability and BMI were the characteristics that mainly separated the 'never under-reporters' from the under-reporters (occasional or frequent). Body dissatisfaction better discriminated the 'occasional under-reporters' from the 'frequent under-reporters'. 'Frequent under-reporters' have a greater BMI, social desirability score, body dissatisfaction score and lower income. These four variables seemed to be able to discriminate individuals who are more prone to systematic under reporting.

  4. Heart Rate Variability Moderates the Association Between Separation-Related Psychological Distress and Blood Pressure Reactivity Over Time.

    PubMed

    Bourassa, Kyle J; Hasselmo, Karen; Sbarra, David A

    2016-08-01

    Divorce is a stressor associated with long-term health risk, though the mechanisms of this effect are poorly understood. Cardiovascular reactivity is one biological pathway implicated as a predictor of poor long-term health after divorce. A sample of recently separated and divorced adults (N = 138) was assessed over an average of 7.5 months to explore whether individual differences in heart rate variability-assessed by respiratory sinus arrhythmia-operate in combination with subjective reports of separation-related distress to predict prospective changes in cardiovascular reactivity, as indexed by blood pressure reactivity. Participants with low resting respiratory sinus arrhythmia at baseline showed no association between divorce-related distress and later blood pressure reactivity, whereas participants with high respiratory sinus arrhythmia showed a positive association. In addition, within-person variation in respiratory sinus arrhythmia and between-persons variation in separation-related distress interacted to predict blood pressure reactivity at each laboratory visit. Individual differences in heart rate variability and subjective distress operate together to predict cardiovascular reactivity and may explain some of the long-term health risk associated with divorce. © The Author(s) 2016.

  5. Star Products with Separation of Variables Admitting a Smooth Extension

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2012-08-01

    Given a complex manifold M with an open dense subset Ω endowed with a pseudo-Kähler form ω which cannot be smoothly extended to a larger open subset, we consider various examples where the corresponding Kähler-Poisson structure and a star product with separation of variables on (Ω, ω) admit smooth extensions to M. We give a simple criterion of the existence of a smooth extension of a star product and apply it to these examples.

  6. Rapid kV-switching single-source dual-energy CT ex vivo renal calculi characterization using a multiparametric approach: refining parameters on an expanded dataset.

    PubMed

    Kriegshauser, J Scott; Paden, Robert G; He, Miao; Humphreys, Mitchell R; Zell, Steven I; Fu, Yinlin; Wu, Teresa; Sugi, Mark D; Silva, Alvin C

    2018-06-01

    We aimed to determine the best algorithms for renal stone composition characterization using rapid kV-switching single-source dual-energy computed tomography (rsDECT) and a multiparametric approach after dataset expansion and refinement of variables. rsDECT scans (80 and 140 kVp) were performed on 38 ex vivo 5- to 10-mm renal stones composed of uric acid (UA; n = 21), struvite (STR; n = 5), cystine (CYS; n = 5), and calcium oxalate monohydrate (COM; n = 7). Measurements were obtained for 17 variables: mean Hounsfield units (HU) at 11 monochromatic keV levels, effective Z, 2 iodine-water material basis pairs, and 3 mean monochromatic keV ratios (40/140, 70/120, 70/140). The analysis used 5 multiparametric algorithms: Support Vector Machine, RandomTree, Artificial Neural Network, Naïve Bayes Tree, and Decision Tree (C4.5). Separating UA from non-UA stones was 100% accurate using multiple methods. For non-UA stones, using a 70-keV mean cutoff value of 694 HU had 100% accuracy for distinguishing COM from non-COM (CYS, STR) stones. The best result for distinguishing all 3 non-UA subtypes was obtained using RandomTree (15/17, 88%). For stones 5 mm or larger, multiple methods can distinguish UA from non-UA and COM from non-COM stones with 100% accuracy, so the choice among them is a matter of user preference. The best model for separating all three non-UA subtypes was 88% accurate, although with considerable individual overlap between CYS and STR stones. Larger, more diverse datasets, including in vivo data and technical improvements in material separation, may offer more guidance in distinguishing non-UA stone subtypes in the clinical setting.
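
    A toy illustration of the two-stage logic reported above, an attenuation cutoff followed by a small tree over remaining features; the feature values are fabricated, and a generic scikit-learn tree stands in for the named classifiers:

```python
# Toy illustration only: a shallow decision tree over [mean HU at 70 keV,
# effective Z] for a few mock non-UA stones; values are fabricated.
from sklearn.tree import DecisionTreeClassifier

X = [[820, 13.2], [760, 13.0], [540, 11.1], [610, 10.4], [590, 11.8], [700, 12.9]]
y = ["COM", "COM", "CYS", "STR", "CYS", "COM"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[710, 12.5], [575, 11.0]]))   # classify two unseen mock stones
```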

  7. Selectivity in reversed-phase separations: general influence of solvent type and mobile phase pH.

    PubMed

    Neue, Uwe D; Méndez, Alberto

    2007-05-01

    The influence of the mobile phase on retention is studied in this paper for a group of over 70 compounds with a broad range of multiple functional groups. We varied the pH of the mobile phase (pH 3, 7, and 10) and the organic modifier (methanol, acetonitrile (ACN), and tetrahydrofuran (THF)), using 15 different stationary phases. In this paper, we describe the overall retention and selectivity changes observed with these variables. We focus on the primary effects of solvent choice and pH. For example, transfer rules for solvent composition resulting in equivalent retention depend on the packing as well as on the type of analyte. Based on the retention patterns, one can calculate selectivity difference values for different variables. The selectivity difference is a measure of the importance of the different variables involved in method development. Selectivity changes specific to the type of analyte are described. The largest selectivity differences are obtained with pH changes.

  8. The theory of the gravitational potential applied to orbit prediction

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, J. C.

    1976-01-01

    A complete derivation of the geopotential function and its gradient is presented. Also included is the transformation of Laplace's equation from Cartesian to spherical coordinates. The analytic solution to Laplace's equation is obtained from the transformed version, in the classical manner of separating the variables. A cursory introduction to the method devised by Pines, using direction cosines to express the orientation of a point in space, is presented together with sample computer program listings for computing the geopotential function and the components of its gradient. The use of the geopotential function is illustrated.
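
    As a reminder of the classical separation step (standard textbook notation, not reproduced from the report), Laplace's equation in spherical coordinates and its exterior harmonic solution take the form:

```latex
% Laplace's equation in spherical coordinates (r: radius, \theta: colatitude,
% \lambda: longitude); generic textbook form, not the report's notation.
\nabla^2 V
  = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\frac{\partial V}{\partial r}\right)
  + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial V}{\partial\theta}\right)
  + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 V}{\partial\lambda^2} = 0

% Writing V = R(r)\,\Theta(\theta)\,\Lambda(\lambda) separates this into three
% ordinary differential equations, whose bounded exterior solution is the
% spherical-harmonic expansion
V(r,\theta,\lambda)
  = \sum_{n=0}^{\infty}\sum_{m=0}^{n} \frac{P_n^m(\cos\theta)}{r^{\,n+1}}
    \left(C_{nm}\cos m\lambda + S_{nm}\sin m\lambda\right)
```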

  9. Horizontal Temperature Variability in the Stratosphere: Global Variations Inferred from CRISTA Data

    NASA Technical Reports Server (NTRS)

    Eidmann, G.; Offermann, D.; Jarisch, M.; Preusse, P.; Eckermann, S. D.; Schmidlin, F. J.

    2001-01-01

    In two separate orbital campaigns (November, 1994 and August, 1997), the Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) instrument acquired global stratospheric data of high accuracy and high spatial resolution. The standard limb-scanned CRISTA measurements resolved atmospheric spatial structures with vertical dimensions greater than or equal to 1.5-2 km and horizontal dimensions greater than or equal to 100-200 km. A fluctuation analysis of horizontal temperature distributions derived from these data is presented. This method is somewhat complementary to conventional power-spectral analysis techniques.

  10. Quantification and characterization of alkaloids from roots of Rauwolfia serpentina using ultra-high performance liquid chromatography-photo diode array-mass spectrometry.

    PubMed

    Sagi, Satyanarayanaraju; Avula, Bharathi; Wang, Yan-Hong; Khan, Ikhlas A

    2016-01-01

    A new UHPLC-UV method has been developed for the simultaneous analysis of seven alkaloids [ajmaline (1), yohimbine (2), corynanthine (3), ajmalicine (4), serpentine (5), serpentinine (6), and reserpine (7)] from the root samples of Rauwolfia serpentina (L.) Benth. ex Kurz. The chromatographic separation was achieved using a reversed phase C18 column with a mobile phase of water and acetonitrile, both containing 0.05% formic acid. The seven compounds were completely separated within 8 min at a flow rate of 0.2 mL/min with a 2-μL injection volume. The method was validated for linearity, accuracy, repeatability, limits of detection (LOD), and limits of quantification (LOQ). Seven plant samples and 21 dietary supplements claiming to contain Rauwolfia roots were analyzed; the total alkaloid content (1-7) ranged from 1.57 to 12.1 mg/g of dry plant material in the plant samples and from 0.0 to 4.5 mg/day in the dietary supplements. The results indicated that commercial products are of variable quality. The developed analytical method is simple, economical, fast, and suitable for quality control analysis of Rauwolfia samples and commercial products. UHPLC-QToF mass spectrometry with an electrospray ionization (ESI) interface is described for the confirmation and characterization of the alkaloids from plant samples. This method involved the detection of [M + H](+) or M(+) ions in the positive mode.

  11. Sea level reconstructions from altimetry and tide gauges using independent component analysis

    NASA Astrophysics Data System (ADS)

    Brunnabend, Sandra-Esther; Kusche, Jürgen; Forootan, Ehsan

    2017-04-01

    Many reconstructions of global and regional sea level rise derived from tide gauges and satellite altimetry have used the method of empirical orthogonal functions (EOF) to reduce noise, improve the spatial resolution of the reconstructed outputs, and investigate the different signals in climate time series. However, the second-order EOF method has some limitations, e.g. in the separation of individual physical signals into different modes of sea level variations and in the capability to physically interpret the different modes, as they are assumed to be orthogonal. Therefore, we investigate the use of the more advanced statistical signal decomposition technique called independent component analysis (ICA) to reconstruct global and regional sea level change from satellite altimetry and tide gauge records. Our results indicate that the choice of method has almost no influence on the reconstruction of global mean sea level change (1.6 mm/yr from 1960-2010 and 2.9 mm/yr from 1993-2013); only different numbers of modes are needed for the reconstruction. Using the ICA method is advantageous for separating independent climate variability signals from regional sea level variations, as the mixing problem of the EOF method is strongly reduced. As an example, the modes most dominated by the El Niño-Southern Oscillation (ENSO) signal are compared. Regional sea level changes near Tianjin, China, Los Angeles, USA, and Majuro, Marshall Islands are reconstructed and the contributions from ENSO are identified.
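
    A minimal sketch of the decomposition idea, assuming a sea level anomaly matrix with time along the rows; the data below are synthetic placeholders and scikit-learn's FastICA stands in for the ICA implementation used by the authors.

    ```python
    # EOF (PCA) reduction of gridded sea level anomalies, followed by an ICA
    # rotation of the leading modes toward statistical independence.
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    sla = rng.standard_normal((600, 2000))   # (months, grid points), placeholder

    n_modes = 10
    pca = PCA(n_components=n_modes)
    pcs = pca.fit_transform(sla)             # EOF step: leading temporal modes

    ica = FastICA(n_components=n_modes, random_state=0)
    ics = ica.fit_transform(pcs)             # independent component time series
    patterns = ica.mixing_.T @ pca.components_   # associated spatial maps
    ```

    Rotating the EOF modes toward statistical independence is what reduces the mixing problem noted above, at the cost of giving up orthogonality of the spatial patterns.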

  12. Personality Constellations of Adolescents with Histories of Traumatic Parental Separations

    PubMed Central

    Malone, Johanna C.; Westen, Drew; Levendosky, Alytia A.

    2014-01-01

    Consistent with attachment theory and a developmental psychopathology framework, a growing body of research suggests that traumatic parental separations may lead to unique pathways of personality adaptation and maladaptation. The present study both examined personality characteristics and identified personality subtypes of adolescents with histories of traumatic separations. Randomly selected psychologists and psychiatrists provided data on 236 adolescents with histories of traumatic separations using a personality pathology instrument designed for use by clinically experienced observers, the Shedler-Westen Assessment Procedure (SWAP-II-A). Using a Q factor analysis, five distinct personality subtypes were identified: internalizing/avoidant, psychopathic, resilient, impulsive dysregulated, and immature dysregulated. Initial support for the validity of the subtypes was established based on Axis I and Axis II pathology, adaptive functioning, developmental history, and family history variables. The personality subtypes demonstrated substantial incremental validity in predicting adaptive functioning, above and beyond demographic variables and histories of other traumatic experiences. PMID:24647212

  13. [Comparative study of different extraction methods and assays of tannins in some pteridophytes].

    PubMed

    Laurent, S

    1975-10-01

    Various processes of extraction and quantitative analysis of a condensed tannin in a plant extract, which also includes some chlorogenic acids, have been examined. 60% methanol, at 50 degrees C, proved the most efficient extraction solvent. Several methods of analysis have been tried. The measure of the colour intensity obtained by the action of sulphuric vanilline on flavanols cannot be used because it depends on the tannin condensation stage. It is impossible to separate tannin from chlorogenic acids using the methods of adsorption by skin or nylon powders, or precipitation by polyvinylpyrrolidone. Only paper chromatography, followed by the distinct elution of the various phenolic compounds, allows the tannin evaluation by subtraction; but owing to the variability of the results, many more experiments are necessary. Some other processes are being studied.

  14. The Tremaine-Weinberg Method for Pattern Speeds Using Hα Emission from Ionized Gas

    NASA Astrophysics Data System (ADS)

    Beckman, J. E.; Fathi, K.; Piñol, N.; Toonen, S.; Hernandez, O.; Carignan, C.

    2008-10-01

    The Fabry-Perot interferometer FaNTOmM was used at the 3.6-m CFHT and the 1.6-m Mont Mégantic Telescope to obtain data cubes in Hα of 9 nearby spiral galaxies, from which maps in integrated intensity, velocity, and velocity dispersion were derived. We then applied the Tremaine-Weinberg method, in which the pattern speed is deduced from the velocity field: along a slit parallel to the major axis, the intensity-weighted mean velocity is divided by the intensity-weighted mean distance of the velocity points from the tangent point measured along the slit. The measured variables can be used either to make separate calculations of the pattern speed for each slit and derive a mean, or in a plot of one against the other for all the points on all slits, from which a best-fit value can be derived. Linear fits were found for all the galaxies in the sample. For two galaxies a clearly separate inner pattern speed with a higher value was also identified and measured.
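
    In its usual form, the Tremaine-Weinberg relation behind this procedure reads (standard notation, not necessarily the paper's symbols)

    \[
    \Omega_{p}\sin i \;=\; \frac{\displaystyle\int \Sigma(x)\,v_{\mathrm{los}}(x)\,dx}{\displaystyle\int \Sigma(x)\,x\,dx},
    \]

    where Σ is the surface brightness of the tracer along a slit parallel to the major axis, v_los the line-of-sight velocity, x the position along the slit, and i the disc inclination; plotting the intensity-weighted mean velocity against the intensity-weighted mean position for all slits yields Ω_p sin i as the slope of the linear fit.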

  15. A comparative evaluation of different ionic liquids for arsenic species separation and determination in wine varietals by liquid chromatography - hydride generation atomic fluorescence spectrometry.

    PubMed

    Castro Grijalba, Alexander; Fiorentini, Emiliano F; Martinez, Luis D; Wuilloud, Rodolfo G

    2016-09-02

    The application of different ionic liquids (ILs) as modifiers for the chromatographic separation and determination of arsenite [As(III)], arsenate [As(V)], dimethylarsinic acid (DMA) and monomethylarsonic acid (MMA) species in wine samples by reversed-phase high performance liquid chromatography coupled to hydride generation atomic fluorescence spectrometry detection (RP-HPLC-HG-AFS) was studied in this work. Several factors influencing the chromatographic separation of the As species, such as pH of the mobile phase, buffer solution concentration, buffer type, IL concentration and length of alkyl groups in the ILs, were evaluated. The complete separation of As species was achieved using a C18 column in isocratic mode with a mobile phase composed of 0.5% (v/v) 1-octyl-3-methylimidazolium chloride ([C8mim]Cl) and 5% (v/v) methanol at pH 8.5. A multivariate methodology was used to optimize the variables involved in AFS detection of the As species after they were separated by HPLC. The ILs showed remarkable performance for the separation of As species, which was obtained within 18 min with a resolution higher than 0.83. The limits of detection for As(III), As(V), MMA and DMA were 0.81, 0.89, 0.62 and 1.00 μg As L(-1), respectively. The proposed method was applied to As speciation analysis in white and red wine samples originating from different grape varieties. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Comparison of methods for estimating ground-water recharge and base flow at a small watershed underlain by fractured bedrock in the Eastern United States

    USGS Publications Warehouse

    Risser, Dennis W.; Gburek, William J.; Folmar, Gordon J.

    2005-01-01

    This study by the U.S. Geological Survey (USGS), in cooperation with the Agricultural Research Service (ARS), U.S. Department of Agriculture, compared multiple methods for estimating ground-water recharge and base flow (as a proxy for recharge) at sites in east-central Pennsylvania underlain by fractured bedrock and representative of a humid-continental climate. This study was one of several within the USGS Ground-Water Resources Program designed to provide an improved understanding of methods for estimating recharge in the eastern United States. Recharge was estimated on a monthly and annual basis using four methods: (1) unsaturated-zone drainage collected in gravity lysimeters, (2) daily water balance, (3) water-table fluctuations in wells, and (4) equations of Rorabaugh. Base flow was estimated by streamflow-hydrograph separation using the computer programs PART and HYSEP. Estimates of recharge and base flow were compared for an 8-year period (1994-2001) coinciding with operation of the gravity lysimeters at an experimental recharge site (Masser Recharge Site) and a longer 34-year period (1968-2001), for which climate and streamflow data were available on a 2.8-square-mile watershed (WE-38 watershed). Estimates of mean-annual recharge at the Masser Recharge Site and WE-38 watershed for 1994-2001 ranged from 9.9 to 14.0 inches (24 to 33 percent of precipitation). Recharge, in inches, from the various methods was: unsaturated-zone drainage, 12.2; daily water balance, 12.3; Rorabaugh equations with PULSE, 10.2, or RORA, 14.0; and water-table fluctuations, 9.9. Mean-annual base flow from streamflow-hydrograph separation ranged from 9.0 to 11.6 inches (21-28 percent of precipitation). Base flow, in inches, from the various methods was: PART, 10.7; HYSEP Local Minimum, 9.0; HYSEP Sliding Interval, 11.5; and HYSEP Fixed Interval, 11.6. Estimating recharge from multiple methods is useful, but the inherent differences of the methods must be considered when comparing results. For example, although unsaturated-zone drainage from the gravity lysimeters provided the most direct measure of potential recharge, it does not incorporate spatial variability that is contained in watershed-wide estimates of net recharge from the Rorabaugh equations or base flow from streamflow-hydrograph separation. This study showed that water-level fluctuations, in particular, should be used with caution to estimate recharge in low-storage fractured-rock aquifers because of the variability of water-level response among wells and sensitivity of recharge to small errors in estimating specific yield. To bracket the largest range of plausible recharge, results from this study indicate that recharge derived from RORA should be compared with base flow from the Local-Minimum version of HYSEP.

  17. SU-F-J-22: Lung Volume Variability Assessed by bh-CBCT in 3D Surface Image Guided Deep Inspiration Breath Hold (DIBH) Radiotherapy for Left-Sided Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, A; Stanley, D; Papanikolaou, N

    Purpose: With the increasing use of DIBH techniques for left-sided breast cancer, 3D surface-image guided DIBH techniques have improved patient setup and facilitated DIBH radiation delivery. However, quantification of the daily separation between the heart and left breast still presents a challenge. One method of assuring separation is to ensure consistent left lung filling. With this in mind, the aim of this study is to retrospectively quantify left lung volume from weekly breath hold-CBCTs (bh-CBCT) of left-sided breast patients treated using a 3D surface imaging system. Methods: Ten patients (n=10) previously treated to the left breast using the C-RAD CatalystHD system (C-RAD AG, Uppsala, Sweden) were evaluated. Patients were positioned with CatalystHD and imaged with bh-CBCT. bh-CBCTs were acquired at the validation date, the first day of treatment, and at subsequent weekly intervals. Total treatment courses spanned from 3 to 5 weeks. bh-CBCT images were exported to VelocityAI and the left lung volume was segmented. Volumes were recorded and analyzed. Results: A total of 41 bh-CBCTs were contoured in VelocityAI for the 10 patients. The mean left lung volume for all patients was 1657 ± 295 cc based on the validation bh-CBCT. With the subsequent lung volumes normalized to the validation lung volume, the mean relative ratios for all patients were 1.02±0.11, 0.97±0.14, 0.98±0.11, 1.02±0.01, and 0.96±0.02 for weeks 1, 2, 3, 4, and 5, respectively. Overall, the mean left lung volume change was ≤4.0% over a 5-week course; however, left lung volume variations of up to 28% were noted in a select patient. Conclusion: With the use of the C-RAD CatalystHD system, the mean lung volume variability over a 5-week course of DIBH treatments was ≤4.0%. By minimizing left lung volume variability, heart to left breast separation may be more consistently maintained. AN Gutierrez has a research grant from C-RAD AG.

  18. A New Cluster Analysis-Marker-Controlled Watershed Method for Separating Particles of Granular Soils.

    PubMed

    Alam, Md Ferdous; Haque, Asadul

    2017-10-18

    An accurate determination of the particle-level fabric of granular soils from tomography data requires the maximum correct separation of particles. The marker-controlled watershed method is widely used to separate particles. However, the watershed method alone cannot produce the maximum separation of particles when the soil has been subjected to boundary stresses that crush particles. In this paper, a new separation method, named the Monash Particle Separation Method (MPSM), is introduced. The new method automatically determines the optimal contrast coefficient, based on a cluster evaluation framework, to produce the most accurate separation outcomes. Finally, the particles which could not be separated with the optimal contrast coefficient were separated by integrating cuboid markers, generated from clustering by Gaussian mixture models, into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for fabric analysis.
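
    For orientation, a minimal sketch of the routine marker-controlled watershed step that the MPSM builds on follows, using a synthetic pair of touching spheres as a stand-in for contacting particles; the cluster-based choice of contrast coefficient and the cuboid markers from Gaussian mixture clustering are not reproduced here.

    ```python
    # Distance-transform watershed with one marker per particle (scikit-image).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Two touching spheres as a stand-in for contacting sand particles.
    z, y, x = np.ogrid[:64, :64, :64]
    binary = (((z - 24)**2 + (y - 32)**2 + (x - 32)**2) < 14**2) | \
             (((z - 42)**2 + (y - 32)**2 + (x - 32)**2) < 14**2)

    distance = ndi.distance_transform_edt(binary)        # depth inside particles
    peaks = peak_local_max(distance, labels=binary.astype(int),
                           footprint=np.ones((7, 7, 7)))
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=binary)  # one label per particle
    print(labels.max())                                  # expect 2 separated particles
    ```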

  19. ENKI - An Open Source environmental modelling platform

    NASA Astrophysics Data System (ADS)

    Kolberg, S.; Bruland, O.

    2012-04-01

    The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user-specified subroutines into a complete simulation model, and provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration, and uncertainty estimation. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines built as dynamic-link libraries (DLLs). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, time series exist for input variables, states are initialised, GIS data sets exist for static map data, and that parameter values are calibrated manually or automatically. By using function calls and memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25000 grid cells, 3-4 simulated time steps per second should be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of API and user interface, making it possible to run ENKI from GIS programs and other software environments. ENKI currently compiles under Windows and Visual Studio only, but ambitions exist to remove the platform and compiler dependencies.

  20. Separating decadal global water cycle variability from sea level rise.

    PubMed

    Hamlington, B D; Reager, J T; Lo, M-H; Karnauskas, K B; Leben, R R

    2017-04-20

    Under a warming climate, amplification of the water cycle and changes in precipitation patterns over land are expected to occur, subsequently impacting the terrestrial water balance. On global scales, such changes in terrestrial water storage (TWS) will be reflected in the water contained in the ocean and can manifest as global sea level variations. Naturally occurring climate-driven TWS variability can temporarily obscure the long-term trend in sea level rise, in addition to modulating the impacts of sea level rise through natural periodic undulation in regional and global sea level. The internal variability of the global water cycle, therefore, confounds both the detection and attribution of sea level rise. Here, we use a suite of observations to quantify and map the contribution of TWS variability to sea level variability on decadal timescales. In particular, we find that decadal sea level variability centered in the Pacific Ocean is closely tied to low frequency variability of TWS in key areas across the globe. The unambiguous identification and clean separation of this component of variability is the missing step in uncovering the anthropogenic trend in sea level and understanding the potential for low-frequency modulation of future TWS impacts including flooding and drought.

  1. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    NASA Astrophysics Data System (ADS)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. Because a diesel engine has many complex noise sources, during the noise and vibration test a lead covering was applied to the engine to isolate interference noise from the No. 1-5 cylinders; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method was utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine, which are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise.
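
    A minimal sketch of the blind-source-separation step on the two microphone channels; the signals below are synthetic stand-ins at the two reported frequencies, and scikit-learn's FastICA stands in for whatever BSS algorithm the authors used.

    ```python
    # Unmix two microphone channels into two source estimates with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    fs = 48_000
    t = np.arange(fs) / fs
    s1 = np.sign(np.sin(2 * np.pi * 4350 * t))   # stand-in for combustion noise
    s2 = np.sin(2 * np.pi * 1988 * t)            # stand-in for piston slap
    S = np.c_[s1, s2]
    A = np.array([[1.0, 0.6],
                  [0.7, 1.0]])                   # unknown mixing at the two mics
    X = S @ A.T                                  # simulated microphone signals

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)                 # recovered source estimates
    ```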

  2. Development of a 3D seed morphological tool for grapevine variety identification, and its comparison with SSR analysis.

    PubMed

    Karasik, Avshalom; Rahimi, Oshrit; David, Michal; Weiss, Ehud; Drori, Elyashiv

    2018-04-25

    Grapevine (Vitis vinifera L.) is one of the classical fruits of the Old World. Among the thousands of domesticated grapevine varieties and the variable wild sylvestris populations, the range of variation in pip morphology is very wide. In this study we scanned representative samples of grape pip populations to probe the possibility of using a 3D tool for grape variety identification. The scanning was followed by mathematical and statistical analysis using innovative algorithms from the field of computer science. Using selected Fourier coefficients, a very clear separation was obtained between most of the varieties, with only very few overlaps. These results show that this method enables the separation of different Vitis vinifera varieties. Interestingly, when using the 3D approach to analyze pairs of varieties considered synonyms by the standard 22-SSR analysis approach, we found that the varieties in two of the supposed synonym pairs were clearly separated by the morphological analysis. This work, therefore, suggests a new systematic tool for high-resolution variety discrimination.

  3. On-chip Magnetic Separation and Cell Encapsulation in Droplets

    NASA Astrophysics Data System (ADS)

    Chen, A.; Byvank, T.; Bharde, A.; Miller, B. L.; Chalmers, J. J.; Sooryakumar, R.; Chang, W.-J.; Bashir, R.

    2012-02-01

    The demand for high-throughput single cell assays is gaining importance because of the heterogeneity of many cell suspensions, even after significant initial sorting. These suspensions may display cell-to-cell variability at the gene expression level that could impact single cell functional genomics, cancer, stem-cell research and drug screening. The on-chip monitoring of individual cells in an isolated environment could prevent cross-contamination, provide a high recovery yield, and enable the study of biological traits at a single cell level. These advantages of on-chip biological experiments contrast with conventional methods, which require bulk samples that provide only averaged information on cell metabolism. We report on a device that integrates microfluidic technology with a magnetic tweezers array to combine the functionality of separation and encapsulation of objects, such as immunomagnetically labeled cells or magnetic beads, into pico-liter droplets on the same chip. The ability to control the separation throughput independently of the hydrodynamic droplet generation rate allows the encapsulation efficiency to be optimized. The device can potentially be integrated with on-chip labeling and/or bio-detection to become a powerful single-cell analysis device.

  4. Source separation of municipal solid waste: The effects of different separation methods and citizens' inclination-case study of Changsha, China.

    PubMed

    Chen, Haibin; Yang, Yan; Jiang, Wei; Song, Mengjie; Wang, Ying; Xiang, Tiantian

    2017-02-01

    A case study on the source separation of municipal solid waste (MSW) was performed in Changsha, the capital city of Hunan Province, China. The objective of this study is to analyze the effects of different separation methods and compare their effects with citizens' attitudes and inclination. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of different separation methods. A large-scale questionnaire survey was conducted to determine citizens' attitudes and inclination toward source separation. The survey result shows that the vast majority of respondents hold consciously positive attitudes toward participation in source separation. Moreover, the respondents ignore the operability of separation methods and would rather choose the complex separation method involving four or more subclassed categories. For the effects of separation methods, the site experiment result demonstrates that the relatively simple separation method involving two categories (food waste and other waste) achieves the best effect with the highest accuracy rate (83.1%) and the lowest miscellany rate (16.9%) among the proposed experimental alternatives. The outcome reflects the inconsistency between people's environmental awareness and behavior. Such inconsistency and conflict may be attributed to the lack of environmental knowledge. Environmental education is assumed to be a fundamental solution to improve the effect of source separation of MSW in Changsha. Important management tips on source separation, including the reformation of the current pay-as-you-throw (PAYT) system, are presented in this work. The proposed evaluation method can be expanded to other cities to determine the most effective separation method during planning stages or to evaluate the performance of running source separation systems.

  5. Water outlet control mechanism for fuel cell system operation in variable gravity environments

    NASA Technical Reports Server (NTRS)

    Vasquez, Arturo (Inventor); McCurdy, Kerri L. (Inventor); Bradley, Karla F. (Inventor)

    2007-01-01

    A self-regulated water separator provides centrifugal separation of fuel cell product water from oxidant gas. The system uses the flow energy of the fuel cell's two-phase water and oxidant flow stream and a regulated ejector or other reactant circulation pump providing the two-phase fluid flow. The system further includes a mechanism for controlling the water outlet flow rate away from the water separator, which uses both the ejector's or reactant pump's supply pressure and a compressibility sensor to provide overall control of the separated water flow either back to the separator or away from it.

  6. Theoretical analysis of exponential transversal method of lines for the diffusion equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, A.; Raydan, M.; Campo, A.

    1996-12-31

    Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
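
    As a reminder of the separation-of-variables insight behind ETMOL's exponential ansatz (a textbook illustration, not the paper's derivation): for the one-dimensional diffusion equation u_t = α u_xx, writing u(x,t) = X(x)T(t) gives

    \[
    \frac{T'(t)}{\alpha\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda^{2}
    \quad\Longrightarrow\quad
    T(t) = T(0)\,e^{-\alpha\lambda^{2} t},
    \]

    so along each transversal line the exact time dependence is exponential, which is precisely the behaviour ETMOL builds into its evaluation of the time derivative.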

  7. Usage of machine learning for the separation of electroweak and strong Zγ production at the LHC experiments

    NASA Astrophysics Data System (ADS)

    Petukhov, A. M.; Soldatov, E. Yu

    2017-12-01

    Separation of the electroweak component from the strong component of associated Zγ production at hadron colliders is a very challenging task due to the identical final states of the two processes; the only difference is the origin of the two leading jets. Rectangular cuts on jet kinematic variables from the ATLAS/CMS 8 TeV Zγ experimental analyses were improved using machine learning techniques, and new selection variables were also tested. The expected significance of the separation under LHC conditions for the second data-taking period (Run 2) with 120 fb-1 of data exceeds 5σ. A future experimental observation of electroweak Zγ production could also lead to the observation of physics beyond the Standard Model.
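
    A minimal sketch of this kind of multivariate separation, assuming simplified jet-kinematics features; the feature set, data, and classifier below are illustrative placeholders, not the analysis's actual configuration.

    ```python
    # Gradient-boosted classifier separating "electroweak" from "strong" events
    # on hypothetical jet variables (e.g. m_jj, |delta eta_jj|, jet pT, centrality).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10_000, 4))                  # placeholder features
    y = (X[:, 0] + 0.5 * X[:, 1]
         + rng.standard_normal(10_000) > 0).astype(int)   # 1 = electroweak-like

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```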

  8. 17 CFR 270.6e-3(T) - Temporary exemptions for flexible premium variable life insurance separate accounts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...

  9. 17 CFR 270.6e-3(T) - Temporary exemptions for flexible premium variable life insurance separate accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...

  10. 17 CFR 270.6e-3(T) - Temporary exemptions for flexible premium variable life insurance separate accounts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...

  11. 17 CFR 270.6e-3(T) - Temporary exemptions for flexible premium variable life insurance separate accounts.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...

  12. 17 CFR 270.6e-3(T) - Temporary exemptions for flexible premium variable life insurance separate accounts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...

  13. Calibrating the pixel-level Kepler imaging data with a causal data-driven model

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Foreman-Mackey, Daniel; Hogg, David W.; Schölkopf, Bernhard

    2015-01-01

    In general, astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. In particular, the precision of NASA Kepler photometry for exoplanet science, the most precise photometric measurements of stars ever made, appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level (not the photometric measurement level); it can capture more fine-grained information about the variation of the spacecraft than is available in the pixel-summed aperture photometry. The basic idea is that the CPM predicts each target pixel value from a large number of pixels of other stars that share the instrument variabilities while not containing any information on possible transits at the target star. In addition, we use the target star's future and past (auto-regression). By appropriately separating the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the fitting of the model. The method has four hyper-parameters (the number of predictor stars, the auto-regressive window size, and two L2-regularization amplitudes for model components), which we set by cross-validation. We determine a generic set of hyper-parameters that works well on most of the stars with 11≤V≤12 mag and apply the method to a corresponding set of target stars with known planet transits. We find that we can consistently outperform (for the purposes of exoplanet detection) the Kepler Pre-search Data Conditioning (PDC) method for exoplanet discovery, often improving the SNR by a factor of two. While we have not yet exhaustively tested the method at other magnitudes, we expect that it should be generally applicable, with positive consequences for subsequent exoplanet detection or stellar variability studies (in which case we must exclude the auto-regressive part to preserve intrinsic variability).
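
    A minimal sketch of the CPM idea, predicting one target pixel's light curve from many other stars' pixels with L2 regularization while holding out the window being predicted; the data here are synthetic placeholders, not Kepler pixels, and the auto-regressive part is omitted.

    ```python
    # Ridge-regression prediction of a target pixel from other stars' pixels,
    # with a train/test split so a transit in the test window cannot leak in.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_cad, n_pred = 4000, 500
    predictors = rng.standard_normal((n_cad, n_pred))  # pixels of other stars
    target = predictors[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n_cad)

    test = slice(1500, 2500)                       # window to be predicted
    train = np.r_[0:1500, 2500:n_cad]

    model = Ridge(alpha=1e3).fit(predictors[train], target[train])
    systematics = model.predict(predictors[test])  # predicted instrumental signal
    residual = target[test] - systematics          # transit search happens here
    ```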

  14. Simultaneous separation and determination of six arsenic species in rice by anion-exchange chromatography with inductively coupled plasma mass spectrometry.

    PubMed

    Ma, Li; Yang, Zhaoguang; Tang, Jie; Wang, Lin

    2016-06-01

    The simultaneous separation and determination of arsenite As(III), arsenate As(V), monomethylarsonic acid (MMA), dimethylarsinic acid (DMA), arsenobetaine (AsB), and arsenocholine (AsC) in rice samples have been carried out in a single anion-exchange column run by high-performance liquid chromatography with inductively coupled plasma mass spectrometry. To estimate the effect of variables on arsenic (As) speciation, the chromatographic conditions, including the type of competing anion, ionic strength, pH of the elution buffer, and flow rate of the mobile phase, have been investigated by a univariate approach. Under the optimum chromatographic conditions, baseline separation of the six As species has been achieved within 10 min by a gradient elution program using 4 mM NH4HCO3 at pH 8.6 as mobile phase A and 4 mM NH4HCO3 with 40 mM NH4NO3 at pH 8.6 as mobile phase B. The method detection limits for As(III), As(V), MMA, DMA, AsB, and AsC were 0.4, 0.9, 0.2, 0.4, 0.5, and 0.3 μg/kg, respectively. The proposed method has been applied to the separation and quantification of As species in real rice samples collected from Hunan Province, China. The main As species detected in all samples were As(III), As(V) and DMA, with inorganic As accounting for over 80% of total As in these samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Bayesian LASSO, scale space and decision making in association genetics.

    PubMed

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more, explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into the data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
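
    A minimal sketch of the scale-space idea, examining estimates over a whole range of fixed tuning parameters rather than a single chosen one; synthetic data are used, and scikit-learn's ordinary lasso path stands in for the Bayesian machinery.

    ```python
    # Lasso coefficient paths over a grid of tuning parameters (scale space).
    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 1000))       # many markers, few observations
    beta = np.zeros(1000)
    beta[[10, 250, 700]] = [2.0, -1.5, 1.0]    # three true effects
    y = X @ beta + rng.standard_normal(200)

    alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
    # coefs has shape (n_features, n_alphas): one column per tuning parameter,
    # ready to visualize as a color map of effect size versus shrinkage level.
    print(coefs.shape)
    ```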

  16. Entropy based quantification of Ki-67 positive cell images and its evaluation by a reader study

    NASA Astrophysics Data System (ADS)

    Niazi, M. Khalid Khan; Pennell, Michael; Elkins, Camille; Hemminger, Jessica; Jin, Ming; Kirby, Sean; Kurt, Habibe; Miller, Barrie; Plocharczyk, Elizabeth; Roth, Rachel; Ziegler, Rebecca; Shana'ah, Arwa; Racke, Fred; Lozanski, Gerard; Gurcan, Metin N.

    2013-03-01

    Presence of Ki-67, a nuclear protein, is typically used to measure cell proliferation. The quantification of the Ki-67 proliferation index is performed visually by the pathologist; however, this is subject to inter- and intra-reader variability. Automated techniques utilizing digital image analysis by computers have emerged. The large variations in specimen preparation, staining, and imaging, as well as the true biological heterogeneity of tumor tissue, often result in variable intensities in Ki-67 stained images. These variations affect the performance of currently developed methods. To optimize the segmentation of Ki-67 stained cells, one should define a data-dependent transformation that accounts for these color variations instead of a fixed linear transformation to separate different hues. To address these issues in images of tissue stained with Ki-67, we propose a methodology that exploits the intrinsic properties of the CIE L∗a∗b∗ color space to translate this complex problem into an automatic entropy-based thresholding problem. The developed method was evaluated through two reader studies with pathology residents and expert hematopathologists. Agreement between the proposed method and the expert pathologists was good (CCC = 0.80).
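
    A minimal sketch of the color-space idea, moving the stained image into CIE L∗a∗b∗ and binarizing one chromatic channel with an entropy-style threshold; the image is a synthetic placeholder, and scikit-image's Yen threshold stands in for the paper's entropy criterion.

    ```python
    # L*a*b* conversion followed by automatic thresholding of the a* channel.
    import numpy as np
    from skimage.color import rgb2lab
    from skimage.filters import threshold_yen

    rgb = np.random.default_rng(0).random((256, 256, 3))  # placeholder image
    lab = rgb2lab(rgb)
    a_channel = lab[..., 1]            # red-green axis carries the stain contrast
    mask = a_channel > threshold_yen(a_channel)   # automatic binarization
    print(mask.mean())                 # fraction of pixels called positive
    ```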

  17. The effects of higher-order questioning strategies on nonscience majors' achievement in an introductory environmental science course and their attitudes toward the environment

    NASA Astrophysics Data System (ADS)

    Eason, Grace Teresa

    The purpose of this quasi-experimental study was to determine the effect a higher-order questioning strategy (Bloom, 1956) had on undergraduate non-science majors' attitudes toward the environment and their achievement in an introductory environmental science course, EDS 1032, "Survey of Science 2: Life Science," which was offered during the Spring 2000 term. Students from both treatment and control groups (N = 63), which were determined using intact classes, participated in eight cooperative group activities based on the Biological Sciences Curriculum Studies (BSCS) 5E model (Bybee, 1993). The treatment group received a higher-order questioning method combined with the BSCS 5E model. The control group received a lower-order questioning method, combined with the BSCS 5E model. Two instruments were used to measure students' attitude and achievement changes. The Ecology Issue Attitude (EIA) survey (Schindler, 1995) and a comprehensive environmental science final exam. Kolb's Learning Style Inventory (KLSI, 1985) was used to measure students' learning style type. After a 15-week treatment period, results were analyzed using MANCOVA. The overall MANCOVA model used to test the statistical difference between the collective influences of the independent variables on the three dependent variables simultaneously was found to be not significant at alpha = .05. This differs from findings of previous studies in which higher-order questioning techniques had a significant effect on student achievement (King 1989 & 1992; Blosser, 1991; Redfield and Rousseau, 1981; Gall 1970). At the risk of inflated Type I and Type II error rates, separate univariate analyses were performed. However, none of the research factors, when examined collectively or separately, made any significant contribution to explaining the variability in EIA attitude, EIA achievement, and comprehensive environmental science final examination scores. Nevertheless, anecdotal evidence from student's self-reported behavior changes indicated favorable responses to an increased awareness of and positive action toward the environment.

  18. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    NASA Astrophysics Data System (ADS)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  19. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    NASA Astrophysics Data System (ADS)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models have challenges with displacement. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for the real-time updating and accuracy improvement of flood forecasts that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then passed through the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. The additional ensemble information is then fed into a hydrologic model for flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean of the ensemble flood forecast. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' in 2011, over two catchments, Futatsuno (356.1 km2) and Nanairo (182.1 km2), dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.

  20. Resonance Raman Spectroscopy of human brain metastasis of lung cancer analyzed by blind source separation

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Liu, Cheng-Hui; Pu, Yang; Cheng, Gangge; Yu, Xinguang; Zhou, Lixin; Lin, Dongmei; Zhu, Ke; Alfano, Robert R.

    2017-02-01

    Resonance Raman (RR) spectroscopy offers a novel optical biopsy method for cancer discrimination by means of enhancement in Raman scattering. It is widely acknowledged that the RR spectrum of tissue is a superposition of the spectra of various key building-block molecules. In this study, the Resonance Raman (RR) spectra of human brain metastases of lung cancer and of normal brain tissue, excited at a selected visible wavelength of 532 nm, are used to explore spectral changes caused by tumor evolution. The potential application of RR spectra to human brain metastasis of lung cancer was investigated by blind source separation, in this case Principal Component Analysis (PCA). PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (PCs). The results show significant RR spectral differences between human metastasis of lung cancer and normal brain tissue when analyzed by PCA. To evaluate the efficacy of this approach for cancer detection, a linear discriminant analysis (LDA) classifier is utilized to calculate the sensitivity and specificity, and receiver operating characteristic (ROC) curves are used to evaluate the performance of this criterion. A sensitivity of 0.97, a specificity close to 1.00, and an area under the ROC curve (AUC) of 0.99 were achieved under optimal conditions. This research demonstrates that RR spectroscopy is effective for detecting changes in tissue due to the development of brain metastases of lung cancer. RR spectroscopy analyzed by blind source separation may have the potential to become a new armamentarium.
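
    A minimal sketch of the analysis chain described above (PCA on the spectra, an LDA classifier on the leading components, ROC/AUC evaluation); the spectra and labels are synthetic placeholders, not the study's data.

    ```python
    # PCA + LDA classification of spectra with ROC-AUC evaluation.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    spectra = rng.standard_normal((120, 1024))   # 120 spectra, 1024 wavenumbers
    labels = (spectra[:, 100] + 0.5 * rng.standard_normal(120) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, random_state=0)
    pca = PCA(n_components=10).fit(X_tr)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)
    scores = lda.predict_proba(pca.transform(X_te))[:, 1]
    print("AUC:", roc_auc_score(y_te, scores))
    ```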

  1. Advanced Neuropsychological Diagnostics Infrastructure (ANDI): A Normative Database Created from Control Datasets

    PubMed Central

    de Vent, Nathalie R.; Agelink van Rentergem, Joost A.; Schmand, Ben A.; Murre, Jaap M. J.; Huizenga, Hilde M.

    2016-01-01

    In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests, the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given. PMID:27812340

  2. Neural control of fast nonlinear systems--application to a turbocharged SI engine with VCT.

    PubMed

    Colin, Guillaume; Chamaillard, Yann; Bloch, Gérard; Corde, Gilles

    2007-07-01

    Today, (engine) downsizing using turbocharging appears to be a major way of reducing the fuel consumption and pollutant emissions of spark ignition (SI) engines. In this context, an efficient control of the air actuators [throttle, turbo wastegate, and variable camshaft timing (VCT)] is needed for engine torque control. This paper proposes a nonlinear model-based control scheme which combines separate, but coordinated, control modules. These modules are based on different control strategies: internal model control (IMC), model predictive control (MPC), and optimal control. It is shown how neural models can be used at different levels and included in the control modules to replace physical models, which are too complex to be embedded online, or to estimate non-measured variables. The results obtained from two different test benches show the real-time applicability and good control performance of the proposed methods.

  3. Advanced Neuropsychological Diagnostics Infrastructure (ANDI): A Normative Database Created from Control Datasets.

    PubMed

    de Vent, Nathalie R; Agelink van Rentergem, Joost A; Schmand, Ben A; Murre, Jaap M J; Huizenga, Hilde M

    2016-01-01

    In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests, the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given.

  4. Skin-stiffener interface stresses in composite stiffened panels

    NASA Technical Reports Server (NTRS)

    Wang, J. T. S.; Biggers, S. B.

    1984-01-01

    A model and solution method for determining the normal and shear stresses in the interface between the skin and the stiffener attached flange were developed. An efficient, analytical solution procedure was developed and incorporated in a sizing code for stiffened panels. The analysis procedure described provides a means to study the effects of material and geometric design parameters on the interface stresses. These stresses include the normal stress, and the shear stresses in both the longitudinal and the transverse directions. The tendency toward skin/stiffener separation may therefore be minimized by choosing appropriate values for the design variables. The most important design variables include the relative bending stiffnesses of the skin and stiffener attached flange, the bending stiffness of the stiffener web, and the flange width. The longitudinal compressive loads in the flange and skin have significant effects on the interface stresses.

  5. Non-invasive diagnosis of liver fibrosis in chronic hepatitis C

    PubMed Central

    Schiavon, Leonardo de Lucca; Narciso-Schiavon, Janaína Luz; de Carvalho-Filho, Roberto José

    2014-01-01

    Assessment of liver fibrosis in chronic hepatitis C virus (HCV) infection is considered a relevant part of patient care and key for decision making. Although liver biopsy has been considered the gold standard for staging liver fibrosis, it is an invasive technique and subject to sampling errors and significant intra- and inter-observer variability. Over the last decade, several noninvasive markers were proposed for liver fibrosis diagnosis in chronic HCV infection, with variable performance. Besides the clear advantage of being noninvasive, a more objective interpretation of test results may overcome the mentioned intra- and inter-observer variability of liver biopsy. In addition, these tests can theoretically offer a more accurate view of fibrogenic events occurring in the entire liver with the advantage of providing frequent fibrosis evaluation without additional risk. However, in general, these tests show low accuracy in discriminating between intermediate stages of fibrosis and may be influenced by several hepatic and extra-hepatic conditions. These methods are either serum markers (usually combined in a mathematical model) or imaging modalities that can be used separately or combined in algorithms to improve accuracy. In this review we will discuss the different noninvasive methods that are currently available for the evaluation of liver fibrosis in chronic hepatitis C, their advantages, limitations and application in clinical practice. PMID:24659877

  6. Improvements to an earth observing statistical performance model with applications to LWIR spectral variability

    NASA Astrophysics Data System (ADS)

    Zhao, Runchen; Ientilucci, Emmett J.

    2017-05-01

    Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model and a processing model. Instead of using mean surface reflectance only as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, i.e., temperature/emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.

  7. Automated reverse engineering of nonlinear dynamical systems

    PubMed Central

    Bongard, Josh; Lipson, Hod

    2007-01-01

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated “reverse engineering” approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future. PMID:17553966
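
    The key idea of modeling each coupled variable separately can be illustrated with a much simpler stand-in: estimate each variable's time derivative numerically and regress it onto a library of candidate terms. The sketch below is plain least squares over polynomial terms on toy data; it is not the authors' stochastic symbolic search, only a minimal per-variable analogue of that modeling step.

      # Minimal per-variable analogue of symbolic ODE recovery (not the authors'
      # algorithm): finite-difference derivatives regressed on candidate terms.
      import numpy as np

      t = np.linspace(0, 10, 501)
      x = np.empty((t.size, 2))
      x[:, 0] = np.exp(-0.5 * t)             # toy data standing in for observations
      x[:, 1] = np.sin(t) * np.exp(-0.1 * t)

      dxdt = np.gradient(x, t, axis=0)       # numerical derivative of each variable

      # Candidate term library: 1, x1, x2, x1^2, x1*x2, x2^2
      lib = np.column_stack([np.ones_like(t), x[:, 0], x[:, 1],
                             x[:, 0]**2, x[:, 0] * x[:, 1], x[:, 1]**2])
      names = ["1", "x1", "x2", "x1^2", "x1*x2", "x2^2"]

      for i in range(2):                     # fit each variable separately
          coef, *_ = np.linalg.lstsq(lib, dxdt[:, i], rcond=None)
          terms = " + ".join(f"{c:.3f}*{n}" for c, n in zip(coef, names)
                             if abs(c) > 1e-2)
          print(f"d(x{i+1})/dt ~ {terms}")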

  8. Solution of Dirac equation for Eckart potential and trigonometric Manning Rosen potential using asymptotic iteration method

    NASA Astrophysics Data System (ADS)

    Resita Arum, Sari; A, Suparmi; C, Cari

    2016-01-01

    The Dirac equation for the Eckart potential and the trigonometric Manning-Rosen potential with exact spin symmetry is solved using the asymptotic iteration method. The combination of the two potentials is substituted into the Dirac equation, and the variables are then separated into radial and angular parts. The asymptotic iteration method reduces each second-order differential equation, via a change of variables, to an equation of hypergeometric type. The relativistic energy is calculated using Matlab 2011. This study is limited to the case of spin symmetry. With the asymptotic iteration method, the relativistic energy spectra and the equations for the orbital quantum number l are obtained; the two are interrelated through the quantum numbers. The energy spectrum is also solved numerically in Matlab, where an increase in the radial quantum number nr causes the energy to decrease. The radial and angular parts of the wave function are expressed as hypergeometric functions and visualized with Matlab 2011. The results show that perturbing the combination of the Eckart and trigonometric Manning-Rosen potentials changes both the radial and angular parts of the wave function. Project supported by the Higher Education Project (Grant No. 698/UN27.11/PN/2015).

  9. Automated reverse engineering of nonlinear dynamical systems.

    PubMed

    Bongard, Josh; Lipson, Hod

    2007-06-12

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated "reverse engineering" approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future.

  10. Variability of the QuantiFERON®-TB gold in-tube test using automated and manual methods.

    PubMed

    Whitworth, William C; Goodwin, Donald J; Racster, Laura; West, Kevin B; Chuke, Stella O; Daniels, Laura J; Campbell, Brandon H; Bohanon, Jamaria; Jaffar, Atheer T; Drane, Wanzer; Sjoberg, Paul A; Mazurek, Gerald H

    2014-01-01

    The QuantiFERON®-TB Gold In-Tube test (QFT-GIT) detects Mycobacterium tuberculosis (Mtb) infection by measuring release of interferon gamma (IFN-γ) when T-cells (in heparinized whole blood) are stimulated with specific Mtb antigens. The amount of IFN-γ is determined by enzyme-linked immunosorbent assay (ELISA). Automation of the ELISA method may reduce variability. To assess the impact of ELISA automation, we compared QFT-GIT results and variability when ELISAs were performed manually and with automation. Blood was collected into two sets of QFT-GIT tubes and processed at the same time. For each set, IFN-γ was measured in automated and manual ELISAs. Variability in interpretations and IFN-γ measurements was assessed between automated (A1 vs. A2) and manual (M1 vs. M2) ELISAs. Variability in IFN-γ measurements was also assessed on separate groups stratified by the mean of the four ELISAs. Subjects (N = 146) had two automated and two manual ELISAs completed. Overall, interpretations were discordant for 16 (11%) subjects. Excluding one subject with indeterminate results, 7 (4.8%) subjects had discordant automated interpretations and 10 (6.9%) subjects had discordant manual interpretations (p = 0.17). Quantitative variability was not uniform; within-subject variability was greater with higher IFN-γ measurements and with manual ELISAs. For subjects with mean TB Responses ±0.25 IU/mL of the 0.35 IU/mL cutoff, the within-subject standard deviation for two manual tests was 0.27 (CI95 = 0.22-0.37) IU/mL vs. 0.09 (CI95 = 0.07-0.12) IU/mL for two automated tests. QFT-GIT ELISA automation may reduce variability near the test cutoff. Methodological differences should be considered when interpreting and using IFN-γ release assays (IGRAs).
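
    For duplicate measurements, the within-subject standard deviation reported above can be estimated from paired differences. A minimal sketch, assuming two replicate IFN-γ measurements per subject and hypothetical values:

      # Within-subject SD from paired replicates: s_w = sqrt(sum(d_i^2) / (2n)),
      # the standard estimator for duplicate measurements. Hypothetical data.
      import numpy as np

      rep1 = np.array([0.30, 0.45, 0.20, 0.60, 0.35])  # IU/mL, ELISA run 1
      rep2 = np.array([0.25, 0.50, 0.28, 0.52, 0.41])  # IU/mL, ELISA run 2

      d = rep1 - rep2
      s_w = np.sqrt(np.sum(d**2) / (2 * d.size))
      print(f"within-subject SD: {s_w:.3f} IU/mL")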

  11. A New Cluster Analysis-Marker-Controlled Watershed Method for Separating Particles of Granular Soils

    PubMed Central

    Alam, Md Ferdous

    2017-01-01

    An accurate determination of particle-level fabric of granular soils from tomography data requires a maximum correct separation of particles. The popular marker-controlled watershed separation method is widely used to separate particles. However, the watershed method alone is not capable of producing the maximum separation of particles when subjected to boundary stresses leading to crushing of particles. In this paper, a new separation method, named as Monash Particle Separation Method (MPSM), has been introduced. The new method automatically determines the optimal contrast coefficient based on cluster evaluation framework to produce the maximum accurate separation outcomes. Finally, the particles which could not be separated by the optimal contrast coefficient were separated by integrating cuboid markers generated from the clustering by Gaussian mixture models into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for the fabric analysis. PMID:29057823
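
    The routine marker-controlled watershed that the MPSM builds on is available in standard tooling. A minimal scikit-image sketch of that baseline step on synthetic touching particles (not the MPSM contrast-coefficient optimization itself):

      # Baseline marker-controlled watershed on a binary particle image
      # (the routine step the MPSM augments; not the MPSM optimization itself).
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      binary = np.zeros((80, 80), dtype=bool)   # two hypothetical touching particles
      yy, xx = np.mgrid[:80, :80]
      binary |= (yy - 30)**2 + (xx - 30)**2 < 15**2
      binary |= (yy - 45)**2 + (xx - 50)**2 < 15**2

      distance = ndi.distance_transform_edt(binary)
      coords = peak_local_max(distance, labels=binary, min_distance=10)
      markers = np.zeros_like(distance, dtype=int)
      markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

      labels = watershed(-distance, markers, mask=binary)
      print(labels.max(), "particles separated")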

  12. 12 CFR 250.411 - Interlocking relationships between member bank and variable annuity insurance company.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and variable annuity insurance company. 250.411 Section 250.411 Banks and Banking FEDERAL RESERVE... and variable annuity insurance company. (a) The Board has recently been asked to consider whether... insurance company, of which the accumulation fund is a “separate account,” but as to which the insurance...

  13. Predictors of Coping in Divorced Single Mothers.

    ERIC Educational Resources Information Center

    Propst, L. Rebecca; And Others

    1986-01-01

    Examined the effects of demographic variables, variables specific to marriage and divorce, and coping resources (internal and external) on the adjustment of single mothers. Results indicate that four classes of variables have an effect on the mother's adjustment: phase of divorce and/or separation; numbers and ages of children; style of coping;…

  14. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements on analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed through clearly defined standard operating procedures. During evaluation of the methods, the major interest was the determination of oligosaccharide losses within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
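
    Validation quantities like LOD and LOQ are often derived from the calibration line in early-phase work. A hedged sketch of one common (ICH-style) convention, LOD = 3.3·sigma/S and LOQ = 10·sigma/S with sigma the residual standard deviation and S the slope; this is a general convention, not a procedure quoted from the paper:

      # Common ICH-style LOD/LOQ estimate from calibration linearity:
      # LOD = 3.3*sigma/S, LOQ = 10*sigma/S. Hypothetical calibration data.
      import numpy as np

      conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])         # standard concentrations
      signal = np.array([10.2, 19.8, 41.1, 79.5, 160.3])  # detector response

      slope, intercept = np.polyfit(conc, signal, 1)
      resid = signal - (slope * conc + intercept)
      sigma = resid.std(ddof=2)                           # 2 fitted parameters

      print(f"LOD ~ {3.3 * sigma / slope:.3f}, LOQ ~ {10 * sigma / slope:.3f}")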

  15. Overall Memory Impairment Identification with Mathematical Modeling of the CVLT-II Learning Curve in Multiple Sclerosis

    PubMed Central

    Stepanov, Igor I.; Abramson, Charles I.; Hoogs, Marietta; Benedict, Ralph H. B.

    2012-01-01

    The CVLT-II provides standardized scores for each of the List A five learning trials, so that the clinician can compare the patient's raw trials 1–5 scores with standardized ones. However, frequently, a patient's raw scores fluctuate, making a proper interpretation difficult. The CVLT-II does not offer any other methods for classifying a patient's learning and memory status on the background of the learning curve. The main objective of this research is to illustrate that discriminant analysis provides an accurate assessment of the learning curve, if suitable predictor variables are selected. Normal controls were ninety-eight healthy volunteers (78 females and 20 males). A group of MS patients included 365 patients (266 females and 99 males) with clinically defined multiple sclerosis. We show that the best predictor variables are coefficients B3 and B4 of our mathematical model B3*exp(-B2*(X-1)) + B4*(1-exp(-B2*(X-1))) because discriminant functions, calculated separately for B3 and B4, allow nearly 100% correct classification. These predictors allow identification of separate impairment of readiness to learn or ability to learn, or both. PMID:22745911
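
    The learning-curve model above can be fit per patient with standard nonlinear least squares. A minimal sketch using the model exactly as printed, with hypothetical trial scores; at X = 1 the model equals B3 (readiness to learn) and its asymptote is B4 (ability to learn):

      # Fit the CVLT-II learning-curve model from the abstract,
      # y = B3*exp(-B2*(X-1)) + B4*(1-exp(-B2*(X-1))), to one patient's 5 trials.
      import numpy as np
      from scipy.optimize import curve_fit

      def learning_curve(x, b2, b3, b4):
          return b3 * np.exp(-b2 * (x - 1)) + b4 * (1 - np.exp(-b2 * (x - 1)))

      trials = np.arange(1, 6)                  # X = 1..5
      scores = np.array([5, 8, 10, 12, 12.5])   # hypothetical raw trial scores

      (b2, b3, b4), _ = curve_fit(learning_curve, trials, scores, p0=[0.5, 5, 12])
      print(f"B2={b2:.2f} (rate), B3={b3:.2f} (readiness), B4={b4:.2f} (ability)")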

  16. Overall Memory Impairment Identification with Mathematical Modeling of the CVLT-II Learning Curve in Multiple Sclerosis.

    PubMed

    Stepanov, Igor I; Abramson, Charles I; Hoogs, Marietta; Benedict, Ralph H B

    2012-01-01

    The CVLT-II provides standardized scores for each of the List A five learning trials, so that the clinician can compare the patient's raw trials 1-5 scores with standardized ones. However, frequently, a patient's raw scores fluctuate, making a proper interpretation difficult. The CVLT-II does not offer any other methods for classifying a patient's learning and memory status on the background of the learning curve. The main objective of this research is to illustrate that discriminant analysis provides an accurate assessment of the learning curve, if suitable predictor variables are selected. Normal controls were ninety-eight healthy volunteers (78 females and 20 males). A group of MS patients included 365 patients (266 females and 99 males) with clinically defined multiple sclerosis. We show that the best predictor variables are coefficients B3 and B4 of our mathematical model B3*exp(-B2*(X-1)) + B4*(1-exp(-B2*(X-1))) because discriminant functions, calculated separately for B3 and B4, allow nearly 100% correct classification. These predictors allow identification of separate impairment of readiness to learn or ability to learn, or both.

  17. A Hybrid Vortex Sheet / Point Vortex Model for Unsteady Separated Flows

    NASA Astrophysics Data System (ADS)

    Darakananda, Darwin; Eldredge, Jeff D.; Colonius, Tim; Williams, David R.

    2015-11-01

    The control of separated flow over an airfoil is essential for obtaining lift enhancement, drag reduction, and the overall ability to perform high agility maneuvers. In order to develop reliable flight control systems capable of realizing agile maneuvers, we need a low-order aerodynamics model that can accurately predict the force response of an airfoil to arbitrary disturbances and/or actuation. In the present work, we integrate vortex sheets and variable strength point vortices into a method that is able to capture the formation of coherent vortex structures while remaining computationally tractable for control purposes. The role of the vortex sheet is limited to tracking the dynamics of the shear layer immediately behind the airfoil. When parts of the sheet develop into large scale structures, those sections are replaced by variable strength point vortices. We prevent the vortex sheets from growing indefinitely by truncating the tips of the sheets and transferring their circulation into nearby point vortices whenever the length of sheet exceeds a threshold. We demonstrate the model on a variety of canonical problems, including pitch-up and impulse translation of an airfoil at various angles of attack. Support by the U.S. Air Force Office of Scientific Research (FA9550-14-1-0328) with program manager Dr. Douglas Smith is gratefully acknowledged.

  18. Quantitative assessment of multiple sclerosis lesion load using CAD and expert input

    NASA Astrophysics Data System (ADS)

    Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.

    2008-03-01

    Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present it seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL quantification strategies. We have developed a CAD that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to original images according to the current DICOM standard. The CAD is also capable of displaying and tracking changes and making comparisons between a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from LL quantities in the collected exams show a good correlation of CAD-derived results vs. those incorporated as the expert's reading. Combining the CAD approach with expert interaction may benefit the diagnostic work-up of MS patients because of improved reproducibility in LL assessment and reduced reading time for single or comparative MR exams.
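
    The record does not name which similarity coefficient was used to compare CAD and expert outlines; the Dice coefficient is a common choice for such overlap comparisons, so the sketch below assumes it.

      # Dice similarity between two binary lesion masks (assumed metric; the
      # record does not name which similarity coefficient was used).
      import numpy as np

      def dice(a: np.ndarray, b: np.ndarray) -> float:
          a, b = a.astype(bool), b.astype(bool)
          inter = np.logical_and(a, b).sum()
          return 2.0 * inter / (a.sum() + b.sum())

      cad_mask = np.zeros((64, 64), bool); cad_mask[20:40, 20:40] = True
      expert_mask = np.zeros((64, 64), bool); expert_mask[22:42, 22:42] = True
      print(f"Dice = {dice(cad_mask, expert_mask):.3f}")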

  19. Separation of natural product using columns packed with Fused-Core particles.

    PubMed

    Yang, Peilin; Litwinski, George R; Pursch, Matthias; McCabe, Terry; Kuppannan, Krishna

    2009-06-01

    Three HPLC columns packed with 3 microm, sub-2 microm, and 2.7 microm Fused-Core (superficially porous) particles were compared in separation performance using two natural product mixtures containing 15 structurally related components. The Ascentis Express C18 column packed with Fused-Core particles showed an 18% increase in column efficiency (theoretical plates), a 76% increase in plate number per meter, a 65% enhancement in separation speed and a 19% increase in back pressure compared to the Atlantis T3 C18 column packed with 3 microm particles. Column lot-to-lot variability for critical pairs in the natural product mixture was observed with both columns, with the Atlantis T3 column exhibiting a higher degree of variability. The Ascentis Express column was also compared with the Acquity BEH column packed with sub-2 microm particles. Although the peak efficiencies obtained by the Ascentis Express column were only about 74% of those obtained by the Acquity BEH column, the 50% lower back pressure and comparable separation speed allowed high-efficiency and high-speed separation to be performed using conventional HPLC instrumentation.
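
    Efficiency figures like those above come from standard peak-width relations. A sketch using the usual half-height formula N = 5.54·(tR/w1/2)^2 and plates per meter N/L; the formula is a general chromatography convention, and the data are hypothetical:

      # Theoretical plates from peak width at half height: N = 5.54*(tR/w_half)^2;
      # plates per meter = N / column length. Standard convention, hypothetical data.
      def plates(t_r_min: float, w_half_min: float) -> float:
          return 5.54 * (t_r_min / w_half_min) ** 2

      N = plates(t_r_min=4.2, w_half_min=0.05)
      column_length_m = 0.10                     # 10 cm column
      print(f"N = {N:.0f}, plates/m = {N / column_length_m:.0f}")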

  20. Separation of variables solution for non-linear radiative cooling

    NASA Technical Reports Server (NTRS)

    Siegel, Robert

    1987-01-01

    A separation of variables solution has been obtained for transient radiative cooling of an absorbing-scattering plane layer. The solution applies after an initial transient period required for adjustment of the temperature and scattering source function distributions. The layer emittance, equal to the instantaneous heat loss divided by the fourth power of the instantaneous mean temperature, becomes constant. This emittance is a function of only the optical thickness of the layer and the scattering albedo; its behavior as a function of these quantities is considerably different than for a layer at constant temperature.
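
    In symbols, the constant-emittance result described above can be sketched as follows (the record states the definition in words; the Stefan-Boltzmann constant sigma is supplied here for dimensional consistency):

      % Layer emittance after the initial transient: instantaneous heat loss
      % normalized by the fourth power of the instantaneous mean temperature;
      % constant in time, depending only on optical thickness and albedo.
      \epsilon \;=\; \frac{q(t)}{\sigma\, T_m^{4}(t)} \;=\; \epsilon(\tau_0,\,\omega)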

  1. A Comparison of Hybrid Reynolds Averaged Navier Stokes/Large Eddy Simulation (RANS/LES) and Unsteady RANS Predictions of Separated Flow for a Variable Speed Power Turbine Blade Operating with Low Inlet Turbulence Levels

    DTIC Science & Technology

    2017-10-01

    The facility is a large-scale cascade that allows detailed flow-field surveys and blade surface measurements. The facility has a continuous run ... structured grids at 2 flow conditions, cruise and takeoff, of the VSPT blade. Computations were run in parallel on a Department of Defense ... RANS/LES and unsteady RANS predictions of separated flow for a variable-speed power-turbine blade operating with low inlet turbulence levels.

  2. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    USGS Publications Warehouse

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
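
    The two-method structure can be made concrete with the basic N-mixture likelihood: site abundance N ~ Poisson(lambda(x)) and, conditional on N, method-specific counts y_m ~ Binomial(N, p_m). A minimal single-site sketch (not the authors' full hierarchical MCMC model with covariates and CAR terms):

      # Single-site N-mixture likelihood with two detection sub-models
      # (a minimal sketch, not the paper's full hierarchical model).
      import numpy as np
      from scipy import stats

      def site_likelihood(y_hair, y_rub, lam, p_hair, p_rub, n_max=200):
          n = np.arange(max(y_hair, y_rub), n_max + 1)
          prior = stats.poisson.pmf(n, lam)               # N ~ Poisson(lambda(x))
          lik = (stats.binom.pmf(y_hair, n, p_hair) *     # hair-trap detections
                 stats.binom.pmf(y_rub, n, p_rub))        # rub-tree detections
          return np.sum(prior * lik)                      # marginalize over N

      print(site_likelihood(y_hair=3, y_rub=5, lam=8.0, p_hair=0.4, p_rub=0.6))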

  3. Design Considerations for a New Terminal Area Arrival Scheduler

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Mulfinger, Daniel

    2010-01-01

    Design of a terminal area arrival scheduler depends on the interrelationship between throughput, delay and controller intervention. The main contribution of this paper is an analysis of the above interdependence for several stochastic behaviors of expected system performance distributions in the aircraft's time of arrival at the meter fix and runway. Results of this analysis serve to guide the scheduler design choices for key control variables. Two types of variables are analyzed: separation buffers and terminal delay margins. The choice for these decision variables was tested using sensitivity analysis. Analysis suggests that it is best to set the separation buffer at the meter fix to its minimum and adjust the runway buffer to attain the desired system performance. Delay margin was found to have the least effect. These results help characterize the variables most influential in the scheduling operations of terminal area arrivals.

  4. Local oceanographic variability influences the performance of juvenile abalone under climate change.

    PubMed

    Boch, C A; Micheli, F; AlNajjar, M; Monismith, S G; Beers, J M; Bonilla, J C; Espinoza, A M; Vazquez-Vera, L; Woodson, C B

    2018-04-03

    Climate change is causing warming, deoxygenation, and acidification of the global ocean. However, manifestation of climate change may vary at local scales due to oceanographic conditions. Variation in stressors, such as high temperature and low oxygen, at local scales may lead to variable biological responses and spatial refuges from climate impacts. We conducted outplant experiments at two locations separated by ~2.5 km and two sites at each location separated by ~200 m in the nearshore of Isla Natividad, Mexico to assess how local ocean conditions (warming and hypoxia) may affect juvenile abalone performance. Here, we show that abalone growth and mortality mapped to variability in stress exposure across sites and locations. These insights indicate that management decisions aimed at maintaining and recovering valuable marine species in the face of climate change need to be informed by local variability in environmental conditions.

  5. A Direct Method for Fuel Optimal Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Cooley, D. S.; Guzman, Jose J.

    2005-01-01

    We present a method to solve the impulsive minimum fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a non-linear dynamics model and parameterize the problem to allow the method to be applicable to multiple flight regimes including low-Earth orbits, highly-elliptic orbits (HEO), Lagrange point orbits, and interplanetary trajectories. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations as well as large constellations. Semianalytical derivatives are derived for the changes in the total ΔV with respect to changes in the independent variables. We also apply a set of constraints to ensure that the fuel expenditure is equalized over the spacecraft in formation. We conclude with several examples and present optimal maneuver sequences for both a HEO and a libration point formation.

  6. Separation as a suicide risk factor.

    PubMed

    Wyder, Marianne; Ward, Patrick; De Leo, Diego

    2009-08-01

    Marital separation (as distinct from divorce) is rarely researched in the suicidological literature. Studies usually report on the statuses of 'separated' and 'divorced' as a combined category, possibly because demographic registries are not able to identify separation reliably. However, in most countries divorce only happens once the process of separation has settled which, in most cases, occurs a long time after the initial break-up. It has been hypothesised that separation might carry a far greater risk of suicide than divorce. The present study investigates the impact of separation on suicide risk by taking into account the effects of age and gender. The incidence of suicide associated with marital status, age and gender was determined by comparing the Queensland Suicide Register (a large dataset of all suicides in Queensland from 1994 to 2004) with the QLD population through two different census datasets: the Registered Marital Status and the Social Marital Status. These two registries permit the isolation of the variable 'separated' with great reliability. During the examined period, 6062 persons died by suicide in QLD (an average of 551 cases per year), with males outnumbering females by four to one. For both males and females separation created a risk of suicide at least 4 times higher than any other marital status. The risk was particularly high for males aged 15 to 24 (RR 91.62). This study highlights a great variation in the incidence of suicide by marital status, age and gender, which suggests that these variables should not be studied in isolation. Furthermore, particularly in younger males, separation appears to be strongly associated with the risk of suicide.

  7. Two inviscid computational simulations of separated flow about airfoils

    NASA Technical Reports Server (NTRS)

    Barnwell, R. W.

    1976-01-01

    Two inviscid computational simulations of separated flow about airfoils are described. The basic computational method is the line relaxation finite-difference method. Viscous separation is approximated with inviscid free-streamline separation. The point of separation is specified, and the pressure in the separation region is calculated. In the first simulation, the empiricism of constant pressure in the separation region is employed. This empiricism is easier to implement with the present method than with singularity methods. In the second simulation, acoustic theory is used to determine the pressure in the separation region. The results of both simulations are compared with experiment.

  8. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    PubMed

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

    Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant, which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies whose input spaces comprise both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development.

  9. DTFP-Growth: Dynamic Threshold-Based FP-Growth Rule Mining Algorithm Through Integrating Gene Expression, Methylation, and Protein-Protein Interaction Profiles.

    PubMed

Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan

    2018-04-01

    Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work for a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper, we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation and protein-protein interaction profiles based on weighted shortest distance to find the novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL) for each rule by integrating co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and it is subsequently verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions hold for a rule, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
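
    The three rule measures being thresholded are the standard ones: for a rule A -> B over a transaction set, support = P(A and B), confidence = support(A -> B)/support(A), and lift = confidence/support(B). A minimal sketch of these baseline quantities (the paper's distance-based per-rule thresholds are built on top of them):

      # Standard support/confidence/lift for a rule A -> B over transactions
      # (baseline quantities; the paper's DVS/DVC/DVL thresholds vary per rule).
      def rule_measures(transactions, a, b):
          n = len(transactions)
          s_a = sum(a <= t for t in transactions) / n        # support of A
          s_b = sum(b <= t for t in transactions) / n        # support of B
          s_ab = sum((a | b) <= t for t in transactions) / n # support of A and B
          confidence = s_ab / s_a
          lift = confidence / s_b
          return s_ab, confidence, lift

      tx = [{"g1", "g2", "g3"}, {"g1", "g2"}, {"g2", "g3"}, {"g1", "g3"}]
      print(rule_measures(tx, a={"g1"}, b={"g2"}))  # support, confidence, lift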

  10. Subtyping of a Large Collection of Historical Listeria monocytogenes Strains from Ontario, Canada, by an Improved Multilocus Variable-Number Tandem-Repeat Analysis (MLVA)

    PubMed Central

    Saleh-Lakha, S.; Allen, V. G.; Li, J.; Pagotto, F.; Odumeru, J.; Taboada, E.; Lombos, M.; Tabing, K. C.; Blais, B.; Ogunremi, D.; Downing, G.; Lee, S.; Gao, A.; Nadon, C.

    2013-01-01

    Listeria monocytogenes is responsible for severe and often fatal food-borne infections in humans. A collection of 2,421 L. monocytogenes isolates originating from Ontario's food chain between 1993 and 2010, along with Ontario clinical isolates collected from 2004 to 2010, was characterized using an improved multilocus variable-number tandem-repeat analysis (MLVA). The MLVA method was established based on eight primer pairs targeting seven variable-number tandem-repeat (VNTR) loci in two 4-plex fluorescent PCRs. Diversity indices and amplification rates of the individual VNTR loci ranged from 0.38 to 0.92 and from 0.64 to 0.99, respectively. MLVA types and pulsed-field gel electrophoresis (PFGE) patterns were compared using Comparative Partitions analysis involving 336 clinical and 99 food and environmental isolates. The analysis yielded Simpson's diversity index values of 0.998 and 0.992 for MLVA and PFGE, respectively, and adjusted Wallace coefficients of 0.318 when MLVA was used as a primary subtyping method and 0.088 when PFGE was a primary typing method. Statistical data analysis using BioNumerics allowed for identification of at least 8 predominant and persistent L. monocytogenes MLVA types in Ontario's food chain. The MLVA method correctly clustered epidemiologically related outbreak strains and separated unrelated strains in a subset analysis. An MLVA database was established for the 2,421 L. monocytogenes isolates, which allows for comparison of data among historical and new isolates of different sources. The subtyping method coupled with the MLVA database will help in effective monitoring/prevention approaches to identify environmental contamination by pathogenic strains of L. monocytogenes and investigation of outbreaks. PMID:23956391
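
    The Simpson's diversity index used above to compare MLVA and PFGE discrimination is conventionally computed as D = 1 - sum(n_i(n_i - 1)) / (N(N - 1)) over subtype counts n_i. A minimal sketch with hypothetical counts:

      # Simpson's diversity index over subtype counts:
      # D = 1 - sum(n_i*(n_i-1)) / (N*(N-1)).
      def simpsons_d(counts):
          n_total = sum(counts)
          return 1.0 - sum(n * (n - 1) for n in counts) / (n_total * (n_total - 1))

      # Hypothetical counts of isolates per MLVA type:
      print(round(simpsons_d([12, 7, 5, 3, 1, 1, 1]), 3))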

  11. The effects of display variables and secondary loading on the dual axis critical task performance

    NASA Technical Reports Server (NTRS)

    Swisher, G. M.; Nataraj, S.

    1973-01-01

    The effects of scanning displays for separated instruments, separated versus combined displays, and the effects of secondary loading are investigated. An operator rating scale for handling qualities, analogous to the Cooper-Harper scale, is established.

  12. Selection Index in the Study of Adaptability and Stability in Maize

    PubMed Central

    Lunezzo de Oliveira, Rogério; Garcia Von Pinho, Renzo; Furtado Ferreira, Daniel; Costa Melo, Wagner Mateus

    2014-01-01

    This paper proposes an alternative method for evaluating the stability and adaptability of maize hybrids using a genotype-ideotype distance index (GIDI) for selection. Data from seven variables were used, obtained through evaluation of 25 maize hybrids at six sites in southern Brazil. The GIDI was estimated by means of the generalized Mahalanobis distance for each plot of the test. We then proceeded to GGE biplot analysis in order to compare the predictive accuracy of the GGE models and the grouping of environments and to select the best five hybrids. The G × E interaction was significant for both variables assessed. The GGE model with two principal components obtained a predictive accuracy (PRECORR) of 0.8913 for the GIDI and 0.8709 for yield (t ha−1). Two groups of environments were obtained upon analyzing the GIDI, whereas all the environments remained in the same group upon analyzing yield. Only two hybrids were selected in common across the two evaluations. The GIDI assessment provided for selection of hybrids that combine adaptability and stability in most of the variables assessed, making its use more highly recommended than analyzing each variable separately. Not all the higher-yielding hybrids were the best in the other variables assessed. PMID:24696641
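
    The index rests on the generalized Mahalanobis distance, d^2 = (x - mu)^T Sigma^{-1} (x - mu), with x a hybrid's trait vector, mu the ideotype, and Sigma the trait covariance matrix. A minimal sketch with hypothetical traits and covariance:

      # Generalized Mahalanobis distance from a hybrid's trait vector to an
      # ideotype: d^2 = (x - mu)^T Sigma^{-1} (x - mu). Hypothetical values.
      import numpy as np

      x = np.array([9.2, 1.4, 55.0])     # hybrid: yield, lodging, plant height
      mu = np.array([10.0, 1.0, 50.0])   # ideotype target values
      sigma = np.array([[1.0, 0.2, 0.5],  # trait covariance (positive definite)
                        [0.2, 0.3, 0.1],
                        [0.5, 0.1, 4.0]])

      d2 = (x - mu) @ np.linalg.inv(sigma) @ (x - mu)
      print(f"GIDI (squared distance) = {d2:.2f}")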

  13. Large-Scale Circulation and Climate Variability. Chapter 5

    NASA Technical Reports Server (NTRS)

    Perlwitz, J.; Knutson, T.; Kossin, J. P.; LeGrande, A. N.

    2017-01-01

    The causes of regional climate trends cannot be understood without considering the impact of variations in large-scale atmospheric circulation and an assessment of the role of internally generated climate variability. There are contributions to regional climate trends from changes in large-scale latitudinal circulation, which is generally organized into three cells in each hemisphere (the Hadley, Ferrel, and polar cells) and which determines the location of subtropical dry zones and midlatitude jet streams. These circulation cells are expected to shift poleward during warmer periods, which could result in poleward shifts in precipitation patterns, affecting natural ecosystems, agriculture, and water resources. In addition, regional climate can be strongly affected by non-local responses to recurring patterns (or modes) of variability of the atmospheric circulation or the coupled atmosphere-ocean system. These modes of variability represent preferred spatial patterns and their temporal variation. They account for gross features in variance and for teleconnections which describe climate links between geographically separated regions. Modes of variability are often described as a product of a spatial climate pattern and an associated climate index time series that are identified based on statistical methods like Principal Component Analysis (PC analysis), which is also called Empirical Orthogonal Function Analysis (EOF analysis), and cluster analysis.
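
    A mode of variability in the EOF sense is exactly this product of a fixed spatial pattern and an index time series; the leading patterns are the left/right singular vectors of the anomaly matrix. A minimal sketch on random stand-in data:

      # Leading EOF (mode of variability) via SVD of the anomaly matrix:
      # data (time x gridpoints) -> spatial pattern + index time series.
      import numpy as np

      rng = np.random.default_rng(1)
      data = rng.normal(size=(240, 500))       # hypothetical monthly fields
      anom = data - data.mean(axis=0)          # remove time mean per gridpoint

      u, s, vt = np.linalg.svd(anom, full_matrices=False)
      pattern = vt[0]                          # EOF1: spatial pattern
      index = u[:, 0] * s[0]                   # PC1: associated index series
      var_frac = s[0]**2 / np.sum(s**2)
      print(f"EOF1 explains {100 * var_frac:.1f}% of variance")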

  14. Relationships between blood pressure and health and fitness-related variables in obese women.

    PubMed

    Shin, Jeong Yeop; Ha, Chang Ho

    2016-10-01

    [Purpose] The present study aimed to separately compare systolic blood pressure and diastolic blood pressure with health and fitness-related variables among Asian obese and normal weight middle-aged women. [Subjects and Methods] The study included 1,201 women aged 30-59 years. The participants were classified into obese and normal weight groups. The blood pressure and health and fitness-related variables of all participants were assessed. [Results] Significant interaction effects were observed for most blood pressure and health and fitness-related variables between the groups. However, significant interaction effects were not observed for standard weight, basal metabolic rate, and heart rate. Blood pressure showed significant positive correlations with weight, body fat, fat weight, core fat, body mass index, and basal metabolic rate in both groups. Systolic blood pressure was significantly correlated with muscular endurance, power, and agility in the obese group and with VO2max and flexibility in the normal weight group. Diastolic blood pressure was significantly correlated with muscular endurance and power in the obese group and with VO2max in the normal weight group. [Conclusion] The relationships between systolic blood pressure and heart rate, muscle endurance, power, and agility are stronger than the relationships between diastolic blood pressure and these variables.

  15. Predictors and Moderators of Treatment Response in Childhood Anxiety Disorders: Results from the CAMS Trial

    PubMed Central

    Compton, Scott N.; Peris, Tara S.; Almirall, Daniel; Birmaher, Boris; Sherrill, Joel; Kendall, Phillip C.; March, John S.; Gosch, Elizabeth A.; Ginsburg, Golda S.; Rynn, Moira A.; Piacentini, John C.; McCracken, James T.; Keeton, Courtney P.; Suveg, Cynthia M.; Aschenbrand, Sasha G.; Sakolsky, Dara; Iyengar, Satish; Walkup, John T.; Albano, Anne Marie

    2014-01-01

    Objective To examine predictors and moderators of treatment outcomes among 488 youth ages 7-17 years (50% female; 74% ≤ 12 years) with DSM-IV diagnoses of separation anxiety disorder, social phobia, or generalized anxiety disorder who were randomly assigned to receive either cognitive behavior therapy (CBT), sertraline (SRT), their combination (COMB), or medication management with pill placebo (PBO) in the Child/Adolescent Anxiety Multimodal Study (CAMS). Method Six classes of predictor and moderator variables (22 variables) were identified from the literature and examined using continuous (Pediatric Anxiety Ratings Scale; PARS) and categorical (Clinical Global Impression Scale-Improvement; CGI-I) outcome measures. Results Three baseline variables predicted better outcomes (independent of treatment condition) on the PARS, including low anxiety severity (as measured by parents and independent evaluators) and caregiver strain. No baseline variables were found to predict week 12 responder status (CGI-I). Participant's principal diagnosis moderated treatment outcomes, but only on the PARS. No baseline variables were found to moderate treatment outcomes on week 12 responder status (CGI-I). Discussion Overall, anxious children responded favorably to CAMS treatments. However, having more severe and impairing anxiety, greater caregiver strain, and a principal diagnosis of social phobia were associated with less favorable outcomes. Clinical implications of these findings are discussed. PMID:24417601

  16. Phase Transition between Black and Blue Phosphorenes: A Quantum Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Li, Lesheng; Yao, Yi; Reeves, Kyle; Kanai, Yosuke

    Phase transition of the more common black phosphorene to blue phosphorene is of great interest because the two phases are predicted to exhibit unique electronic and optical properties. However, they are also predicted to be separated by a rather large energy barrier. In this work, we study the transition pathway between black and blue phosphorenes by using the variable-cell nudged elastic band method combined with density functional theory calculations. We show how the diffusion quantum Monte Carlo method can be used for determining the energetics of the phase transition and demonstrate the use of two approaches for removing finite-size errors. Finally, we predict how applied stress can be used to control the energetic balance between these two different phases of phosphorene.

  17. Exact solution of conductive heat transfer in cylindrical composite laminate

    NASA Astrophysics Data System (ADS)

    Kayhani, M. H.; Shariati, M.; Nourozi, M.; Karimi Demneh, M.

    2009-11-01

    This paper presents an exact solution for steady-state conduction heat transfer in cylindrical composite laminates. The laminate is cylindrical in shape and, in each lamina, the fibers are wound around the cylinder. Heat transfer in the composite laminate is investigated using the separation of variables method, and an analytical relation for the temperature distribution is obtained under specific boundary conditions. The Fourier coefficients in each layer are obtained by solving the set of equations arising from the thermal boundary conditions at the inside and outside of the cylinder, together with the temperature and heat-flux continuity conditions between layers. The LU factorization method is used to solve this set of equations.
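
    For reference, the separation ansatz in each lamina leads to the familiar general solution of steady two-dimensional conduction in cylindrical coordinates; the sketch below shows the standard isotropic form, and the paper's orthotropic fiber-wound laminae would modify the radial exponents (an assumption left out here):

      % General separated solution of steady 2-D conduction (Laplace's equation)
      % in cylindrical coordinates for layer k; the constants are fixed by the
      % boundary and interlayer continuity conditions described above.
      T_k(r,\varphi) = A_{0,k} + B_{0,k}\ln r
        + \sum_{n=1}^{\infty}\bigl(a_{n,k}\,r^{n} + b_{n,k}\,r^{-n}\bigr)
          \bigl(c_{n,k}\cos n\varphi + d_{n,k}\sin n\varphi\bigr)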

  18. Tree STEM and Canopy Biomass Estimates from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Olofsson, K.; Holmgren, J.

    2017-10-01

    In this study an automatic method for estimating both the tree stem and the tree canopy biomass is presented. The point cloud tree extraction techniques operate on TLS data and model the biomass using the estimated stem and canopy volume as independent variables. The regression model fit error is less than about 5 kg, which gives a relative model error of about 5 % for the stem estimate and 10-15 % for the spruce and pine canopy biomass estimates. The canopy biomass estimate was improved by separating the models by tree species, which indicates that the method is allometry-dependent and that the regression models need to be recomputed for areas with different climate and vegetation.

  19. Microcontroller - Based System for Electrogastrography Monitoring Through Wireless Transmission

    NASA Astrophysics Data System (ADS)

    Haddab, S.; Laghrouche, M.

    2009-01-01

    Electrogastrography (EGG) is a non-invasive method for recording the electrical activity of the stomach. This paper presents a system designed for monitoring the EGG physiological variables of a patient outside the hospital environment. The signal acquisition is achieved by means of an ambulatory system carried by the patient and connected to the patient through skin electrodes. The acquired signal is transmitted via Bluetooth to a mobile phone, where the data are stored in memory and then transferred via the GSM network to the processing and diagnostic unit in the hospital. EGG is usually contaminated by artefacts and other signals, which are sometimes difficult to remove. We have used a neural network method for motion artefact removal and biological signal separation.

  20. Micrometeorological measurements at Ash Meadows and Corn Creek Springs, Nye and Clark counties, Nevada, 1986-87

    USGS Publications Warehouse

    Johnson, M.J.; Pupacko, Alex

    1992-01-01

    Micrometeorological data were collected at Ash Meadows and Corn Creek Springs, Nye and Clark Counties, Nevada, from October 1, 1986 through September 30, 1987. The data include accumulated measurements recorded hourly or every 30 minutes, at each site, for the following climatic variables: air temperature, wind speed, relative humidity, precipitation, solar radiation, net radiation, and soil-heat flux. Periodic sampling of sensible-heat flux and latent-heat flux were also recorded using 5-minute intervals of accumulated data. Evapotranspiration was calculated by both the eddy-correlation method and the Penman combination method. The data collected and the computer programs used to process the data are available separately on three magnetic diskettes in card-image format. (USGS)

  1. Simulation of Charged Systems in Heterogeneous Dielectric Media via a True Energy Functional

    NASA Astrophysics Data System (ADS)

    Jadhao, Vikram; Solis, Francisco J.; de la Cruz, Monica Olvera

    2012-11-01

    For charged systems in heterogeneous dielectric media, a key obstacle for molecular dynamics (MD) simulations is the need to solve the Poisson equation in the media. This obstacle can be bypassed using MD methods that treat the local polarization charge density as a dynamic variable, but such approaches require access to a true free energy functional, one that evaluates to the equilibrium electrostatic energy at its minimum. In this Letter, we derive the needed functional. As an application, we develop a Car-Parrinello MD method for the simulation of free charges present near a spherical emulsion droplet separating two immiscible liquids with different dielectric constants. Our results show the presence of nonmonotonic ionic profiles in the dielectric with a lower dielectric constant.

  2. On the use of Lagrangian variables in descriptions of unsteady boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Cowley, Stephen J.; Vandommelen, Leon L.; Lam, Shui T.

    1990-01-01

    The Lagrangian description of unsteady boundary layer separation is reviewed from both analytical and numerical perspectives. It is explained in simple terms how particle distortion gives rise to unsteady separation, and why a theory centered on Lagrangian coordinates provides the clearest description of this phenomenon. Some of the more recent results for unsteady three dimensional compressible separation are included. The different forms of separation that can arise from symmetries are emphasized. A possible description of separation is also included when the detaching vorticity layer exits the classical boundary layer region, but still remains much closer to the surface than a typical body-lengthscale.

  3. Parametric Studies of Flow Separation using Air Injection

    NASA Technical Reports Server (NTRS)

    Zhang, Wei

    2004-01-01

    Boundary-layer separation causes the airfoil to stall and therefore imposes dramatic performance degradation on the airfoil. In recent years, flow separation control has been one of the active research areas in the field of aerodynamics due to its promising performance improvements on lifting devices. Active flow separation control techniques include steady and unsteady air injection as well as suction on the airfoil surface. This paper focuses on steady and unsteady air injection on the airfoil. Although wind tunnel experiments revealed performance improvements on the airfoil using injection techniques, the details of how key variables such as the air injection slot geometry and injection angle impact the effectiveness of flow separation control via air injection have not been studied. A parametric study of both steady and unsteady air injection active flow control is the main objective for this summer. For steady injection, the key variables include the slot geometry, orientation, spacing, injection velocity, and injection angle. For unsteady injection, the injection frequency will also be investigated. Key metrics such as lift coefficient, drag coefficient, total pressure loss, and total injection mass will be used to measure the effectiveness of the control technique. A design of experiments using the Box-Behnken design is set up to determine how each of the variables affects each of the key metrics (a generator for this design is sketched below). Design of experiments is used so that the number of experimental runs is kept to a minimum while still identifying which variables are the key contributors to the responses. The experiments will then be conducted in the 1 ft by 1 ft wind tunnel according to the design-of-experiments settings. The data obtained from the experiments will be imported into JMP, a statistical software package, to generate sets of response surface equations that represent the statistical empirical model for each of the metrics as a function of the key variables. Next, variables such as the slot geometry can be optimized using the built-in optimizer within JMP. Finally, wind tunnel testing will be conducted using the optimized slot geometry and other key variables to verify the empirical statistical model. The long-term goal for this effort is to assess the impacts of active flow control using air injection at the system level, as part of the task plan included in NASA's URETI program with the Georgia Institute of Technology.
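
    The Box-Behnken layout places design points at the midpoints of the factor-space edges plus center replicates. A minimal generator in coded units (a generic construction, not tied to this experiment's specific factor list):

      # Basic Box-Behnken design in coded units: all (+/-1, +/-1) pairs for each
      # pair of factors with the remaining factors at 0, plus center points.
      from itertools import combinations, product

      def box_behnken(k: int, n_center: int = 3):
          runs = []
          for i, j in combinations(range(k), 2):
              for si, sj in product((-1, 1), repeat=2):
                  row = [0] * k
                  row[i], row[j] = si, sj
                  runs.append(row)
          runs.extend([[0] * k for _ in range(n_center)])
          return runs

      for row in box_behnken(3):
          print(row)   # 12 edge-midpoint runs + 3 center runs for 3 factors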

  4. Removing Batch Effects from Longitudinal Gene Expression - Quantile Normalization Plus ComBat as Best Approach for Microarray Transcriptome Data

    PubMed Central

    Müller, Christian; Schillert, Arne; Röthemeier, Caroline; Trégouët, David-Alexandre; Proust, Carole; Binder, Harald; Pfeiffer, Norbert; Beutel, Manfred; Lackner, Karl J.; Schnabel, Renate B.; Tiret, Laurence; Wild, Philipp S.; Blankenberg, Stefan

    2016-01-01

    Technical variation plays an important role in microarray-based gene expression studies, and batch effects explain a large proportion of this noise. It is therefore mandatory to eliminate technical variation while maintaining biological variability. Several strategies have been proposed for the removal of batch effects, although they have not been evaluated in large-scale longitudinal gene expression data. In this study, we aimed at identifying a suitable method for batch effect removal in a large study of microarray-based longitudinal gene expression. Monocytic gene expression was measured in 1092 participants of the Gutenberg Health Study at baseline and 5-year follow up. Replicates of selected samples were measured at both time points to identify technical variability. Deming regression, Passing-Bablok regression, linear mixed models, non-linear models as well as ReplicateRUV and ComBat were applied to eliminate batch effects between replicates. In a second step, quantile normalization prior to batch effect correction was performed for each method. Technical variation between batches was evaluated by principal component analysis. Associations between body mass index and transcriptomes were calculated before and after batch removal. Results from association analyses were compared to evaluate maintenance of biological variability. Quantile normalization, separately performed in each batch, combined with ComBat successfully reduced batch effects and maintained biological variability. ReplicateRUV performed perfectly in the replicate data subset of the study, but failed when applied to all samples. All other methods did not substantially reduce batch effects in the replicate data subset. Quantile normalization plus ComBat appears to be a valuable approach for batch correction in longitudinal gene expression data. PMID:27272489
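
    Quantile normalization, the per-batch first step of the winning combination, forces every sample to share the same empirical distribution. A minimal sketch of that step alone (ComBat itself involves an empirical-Bayes location/scale model and is omitted here):

      # Quantile normalization of an expression matrix (genes x samples):
      # every sample is mapped onto the mean of the column-sorted values.
      import numpy as np

      def quantile_normalize(x: np.ndarray) -> np.ndarray:
          order = np.argsort(x, axis=0)                  # per-sample sort order
          ranks = np.argsort(order, axis=0)              # rank of each entry
          mean_sorted = np.sort(x, axis=0).mean(axis=1)  # reference distribution
          return mean_sorted[ranks]                      # ties broken by order

      expr = np.array([[5.0, 4.0, 3.0],
                       [2.0, 1.0, 4.0],
                       [3.0, 4.0, 6.0],
                       [4.0, 2.0, 8.0]])
      print(quantile_normalize(expr))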

  5. Comparison of laboratory and field remote sensing methods to measure forage quality.

    PubMed

    Guo, Xulin; Wilmshurst, John F; Li, Zhaoqin

    2010-09-01

    Recent research in range ecology has emphasized the importance of forage quality as a key indicator of rangeland condition. However, we lack tools to evaluate forage quality at scales appropriate for management. Using canopy reflectance data to measure forage quality has been conducted at both laboratory and field levels separately, but little work has been conducted to evaluate these methods simultaneously. The objective of this study is to find a reliable way of assessing grassland quality through measuring forage chemistry with reflectance. We studied a mixed grass ecosystem in Grasslands National Park of Canada and surrounding pastures, located in southern Saskatchewan. Spectral reflectance was collected both in situ at the field level and in the laboratory. Vegetation samples were collected at each site, sorted into the green grass portion, and then sent to a chemical company for measurement of forage quality variables, including protein, lignin, ash, moisture at 135 °C, Neutral Detergent Fiber (NDF), Acid Detergent Fiber (ADF), Total Digestible Nutrients, Digestible Energy, Net Energy for Lactation, Net Energy for Maintenance, and Net Energy for Gain. Reflectance data were processed with the first derivative transformation and the continuum removal method; both pre-treatments are sketched below. Correlation analysis was conducted on spectral and forage quality variables. A regression model was further built to investigate the possibility of using canopy spectral measurements to predict grassland quality. Results indicated that field level prediction of protein of mixed grass species was possible (r² = 0.63). However, the relationship between canopy reflectance and the other forage quality variables was not strong.
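
    A minimal version of the two pre-treatments: a first derivative with respect to wavelength, and continuum removal, in which each reflectance value is divided by the spectrum's upper convex hull. The toy spectrum below is hypothetical.

      # First-derivative and continuum-removal pre-treatments for a reflectance
      # spectrum (minimal versions of the transformations named in the abstract).
      import numpy as np

      def first_derivative(wl, refl):
          return np.gradient(refl, wl)

      def continuum_removed(wl, refl):
          # Upper convex hull of (wl, refl), then divide the spectrum by it.
          hull = [0]
          for i in range(1, len(wl)):
              while len(hull) >= 2:
                  o, a = hull[-2], hull[-1]
                  cross = ((wl[a] - wl[o]) * (refl[i] - refl[o])
                           - (refl[a] - refl[o]) * (wl[i] - wl[o]))
                  if cross >= 0:        # non-right turn: pop for the upper hull
                      hull.pop()
                  else:
                      break
              hull.append(i)
          continuum = np.interp(wl, wl[hull], refl[hull])
          return refl / continuum

      wl = np.linspace(400, 2400, 201)         # nm, hypothetical band centers
      refl = 0.4 + 0.1 * np.sin(wl / 300.0)    # toy reflectance spectrum
      print(continuum_removed(wl, refl).min())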

  6. Statistical modeling methods to analyze the impacts of multiunit process variability on critical quality attributes of Chinese herbal medicine tablets

    PubMed Central

    Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Yang, Chan; Cui, Xianglong; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules’ properties on the CQAs were fully investigated by different chemometric methods, such as the partial least squares, the orthogonal projection to latent structures, and the multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet. PMID:27932865
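
    The variable importance in the projection (VIP) index used above can be computed from a fitted PLS model's weights, scores, and y-loadings. A hedged sketch using scikit-learn's PLSRegression on random stand-in data (the generic VIP formula, not the paper's software):

      # VIP scores from a fitted PLS model: variables with VIP > 1 are commonly
      # flagged as influential. Generic formula; hypothetical random data.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 6))        # stand-ins for process/granule variables
      y = X[:, 0] * 2 + X[:, 3] + rng.normal(scale=0.1, size=40)  # a CQA

      pls = PLSRegression(n_components=3).fit(X, y)
      W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_

      ssy = np.sum(T**2, axis=0) * Q.ravel()**2   # y-variance per component
      wnorm = W / np.linalg.norm(W, axis=0)       # normalized weight vectors
      vip = np.sqrt(X.shape[1] * (wnorm**2 @ ssy) / ssy.sum())
      print(np.round(vip, 2))                     # expect high VIP for x1 and x4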

  7. Single-case synthesis tools II: Comparing quantitative outcome measures.

    PubMed

    Zimmerman, Kathleen N; Pustejovsky, James E; Ledford, Jennifer R; Barton, Erin E; Severini, Katherine E; Lloyd, Blair P

    2018-03-07

    Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes-overlap measures (percentage non-overlapping data, improvement rate difference, and Tau) and parametric within-case effect sizes (standardized mean difference and log response ratio [increasing and decreasing])-were compared to determine if choice of synthesis method within and across classes impacts conclusions regarding effectiveness. The effectiveness of sensory-based interventions (SBI), a commonly used class of treatments for young children, was evaluated. Separately from evaluations of rigor and quality, authors evaluated behavior change between baseline and SBI conditions. SBI were unlikely to result in positive behavior change across all measures except IRD. However, subgroup analyses resulted in variable conclusions, indicating that the choice of measures for SCD meta-analyses can impact conclusions. Suggestions for using the log response ratio in SCD meta-analyses and considerations for understanding variability in SCD meta-analysis conclusions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
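
    For reference, the simplest of the measures named above translates directly into code. This is the basic within-case log response ratio, without the small-sample bias correction that published versions apply, and the session counts are hypothetical:

```python
import numpy as np

def log_response_ratio(baseline, treatment):
    """Within-case log response ratio: LRR = ln(mean_T / mean_B).

    Positive values indicate an increase from baseline; for behaviors
    targeted for reduction the sign is flipped (LRR-decreasing).
    """
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    return np.log(treatment.mean() / baseline.mean())

# Hypothetical session-by-session counts from one participant.
print(log_response_ratio(baseline=[4, 5, 3, 6], treatment=[7, 9, 8, 10]))
```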

  8. Parallelized traveling cluster approximation to study numerically spin-fermion models on large lattices

    NASA Astrophysics Data System (ADS)

    Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio

    2015-06-01

    Lattice spin-fermion models are important to study correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real-space variant of the ED + MC method that makes it possible to solve spin-fermion problems on lattices with up to 10^3 sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites and, with some effort, on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.
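
    To make the baseline concrete, here is a toy ED + MC loop (not the TCA itself) for a hypothetical one-dimensional model in which Ising-like classical fields couple to free fermions. Every Metropolis move requires a full diagonalization, which is exactly the O(N³) cost per update that the TCA's small travelling-cluster diagonalizations avoid; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, t, J, beta = 24, 1.0, 2.0, 10.0   # sites, hopping, coupling, inverse T

def free_energy(spins):
    """Fermionic free energy for a fixed classical spin configuration."""
    H = np.zeros((N, N))
    idx = np.arange(N)
    H[idx, (idx + 1) % N] = H[(idx + 1) % N, idx] = -t  # 1D hopping ring
    H[idx, idx] = J * spins                             # spin-fermion coupling
    eps = np.linalg.eigvalsh(H)                         # the expensive ED step
    return -np.sum(np.log1p(np.exp(-beta * eps))) / beta

spins = rng.choice([-1.0, 1.0], size=N)
F = free_energy(spins)
for _ in range(200):                     # MC sweeps
    for i in range(N):                   # Metropolis update of one classical spin
        spins[i] *= -1.0
        F_new = free_energy(spins)       # full diagonalization per move; the TCA
                                         # would diagonalize only a small cluster
                                         # travelling with site i
        if rng.random() < np.exp(-beta * (F_new - F)):
            F = F_new                    # accept
        else:
            spins[i] *= -1.0             # reject
print("final free energy:", F)
```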

  9. Crack detection in oak flooring lamellae using ultrasound-excited thermography

    NASA Astrophysics Data System (ADS)

    Pahlberg, Tobias; Thurley, Matthew; Popovic, Djordje; Hagman, Olle

    2018-01-01

    Today, a large number of people are manually grading and detecting defects in wooden lamellae in the parquet flooring industry. This paper investigates the possibility of using the ensemble methods random forests and boosting to automatically detect cracks using ultrasound-excited thermography and a variety of predictor variables. When friction occurs in thin cracks, they become warm and thus visible to a thermographic camera. Several image processing techniques have been used to suppress the noise and enhance probable cracks in the images. The most successful predictor variables captured the upper part of the heat distribution, such as the maximum temperature, kurtosis and percentile values 92-100 of the edge pixels. The texture in the images was captured by Completed Local Binary Pattern histograms and cracks were also segmented by background suppression and thresholding. The classification accuracy was significantly improved from previous research through added image processing, introduction of more predictors, and by using automated machine learning. The best ensemble methods reach an average classification accuracy of 0.8, which is very close to the authors' own manual attempt at separating the images (0.83).
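
    As a hedged illustration of the classification step, the sketch below builds the kind of upper-tail heat-distribution predictors the abstract names (maximum temperature, kurtosis, percentiles 92-100 of the edge pixels) and feeds them to a random forest. The data are simulated stand-ins, not the study's thermograms:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def heat_features(edge_pixels):
    """Summaries of the upper part of the heat distribution for one lamella."""
    percentiles = np.percentile(edge_pixels, [92, 94, 96, 98, 100])
    return np.concatenate(([edge_pixels.max(), kurtosis(edge_pixels)],
                           percentiles))

# Hypothetical edge-pixel temperature samples; cracked lamellae run hotter
# and more heavy-tailed after ultrasound excitation.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=120)            # 1 = cracked, 0 = sound
X = np.array([heat_features(rng.normal(20 + 5 * y, 2 + y, size=500))
              for y in labels])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())  # mean accuracy
```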

  10. A green method using a micellar system for determination of andrographolide and dehydroandrographolide in human plasma.

    PubMed

    Zhao, Qi; Ding, Jie; Jin, Haiyan; Ding, Lan; Ren, Nanqi

    2013-04-01

    A method based on cloud point extraction (CPE) coupled with high-performance liquid chromatography separation and ultraviolet (UV) detection was developed to determine andrographolide and dehydroandrographolide in human plasma. The nonionic surfactant Triton X-114 was chosen as the extraction medium. Variable parameters affecting the CPE efficiency, such as the concentrations of Triton X-114 and NaCl, pH, equilibration temperature, and equilibration time, were evaluated and optimized. A Zorbax SB C18 column (250 × 4.6 mm i.d., 5 µm) was used for separation of the two analytes at 30°C. The UV detection was performed at 254 nm. Under the optimum conditions, the limits of detection of andrographolide and dehydroandrographolide are 0.032 and 0.019 µg/mL, respectively. The intra-day and inter-day precisions, expressed as relative standard deviation, ranged from 3.2 to 7.3% and from 2.9 to 8.6%, respectively. The recoveries of andrographolide and dehydroandrographolide were in the range of 76.8-98.6% at three fortified concentrations of 0.1, 0.5 and 1.0 µg/mL. This method was efficient, environmentally friendly, rapid and inexpensive for the extraction and determination of andrographolide and dehydroandrographolide in human plasma.

  11. The application of low-rank and sparse decomposition method in the field of climatology

    NASA Astrophysics Data System (ADS)

    Gupta, Nitika; Bhaskaran, Prasad K.

    2018-04-01

    The present study reports a low-rank and sparse decomposition method that separates the mean and the variability of a climate data field. Until now, the application of this technique was limited to areas such as image processing, web data ranking, and bioinformatics data analysis. In climate science, this method exactly separates the original data into a set of low-rank and sparse components, wherein the low-rank component depicts the linearly correlated dataset (expected or mean behavior), and the sparse component represents the variation or perturbation of the dataset from its mean behavior. The study attempts to verify the efficacy of this technique in the field of climatology with two real-world examples. The first example applies the technique to the maximum wind-speed (MWS) data for the Indian Ocean (IO) region. The study brings to light a decadal reversal pattern in the MWS for the North Indian Ocean (NIO) during the months of June, July, and August (JJA). The second example deals with the sea surface temperature (SST) data for the Bay of Bengal region, which exhibits a distinct pattern in the sparse component. The study highlights the usefulness of the proposed technique for the interpretation and visualization of climate data.
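
    The decomposition described above is usually computed as principal component pursuit. Below is a hedged numpy sketch using the standard inexact augmented-Lagrangian iteration, with the conventional weight λ = 1/√max(m, n) and a common step-size heuristic; these choices are textbook defaults, not parameters quoted from the paper:

```python
import numpy as np

def rpca(M, n_iter=300):
    """Low-rank + sparse decomposition M ~ L + S via principal component
    pursuit, solved with an inexact augmented-Lagrangian iteration."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard sparsity weight
    mu = m * n / (4.0 * np.abs(M).sum())    # common step-size heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular-value thresholding gives the low-rank update.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Elementwise soft-thresholding gives the sparse update.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)               # dual (multiplier) update
    return L, S

# E.g., M could be a (time x grid-point) matrix of monthly maximum wind speed;
# L then captures the mean seasonal behaviour and S the anomalies.
```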

  12. Health-Related Quality of Life Among Central Appalachian Residents in Mountaintop Mining Counties

    PubMed Central

    Hendryx, Michael

    2011-01-01

    Objectives. We examined the health-related quality of life of residents in mountaintop mining counties of Appalachia using the 2006 national Behavioral Risk Factor Surveillance System. Methods. Dependent variables included self-rated health; the number of poor physical health, poor mental health, and activity limitation days (in the past 30 days); and the Healthy Days Index. Independent variables included metropolitan status, primary care physician supply, and Behavioral Risk Factor Surveillance System behavioral and demographic variables. We compared dependent variables across 3 categories: mountaintop mining (yes or no), other coal mining (yes or no), and a referent nonmining group. We used SUDAAN MULTILOG and multiple linear regression models with post hoc least squares means to test mountaintop mining effects after adjusting for covariates. Results. Residents of mountaintop mining counties reported significantly more days of poor physical health, poor mental health, and activity limitation, and poorer self-rated health (P < .01), compared with the other county groupings. Results were generally consistent in separate analyses by gender and age. Conclusions. Mountaintop mining areas are associated with the greatest reductions in health-related quality of life, even when compared with counties with other forms of coal mining. PMID:21421943

  13. Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.

    PubMed

    Galvelis, Raimondas; Sugita, Yuji

    2017-06-13

    The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations, accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with bias potentials of up to eight dimensions.
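
    The NNDE half of the method can be sketched in a few lines: a k-nearest-neighbor density estimate over sampled collective variables, converted into a bias that penalizes already-visited regions. This is a sketch under stated assumptions; the kT value, choice of k, and the log-density bias form are illustrative, and the ANN fit that NN2B uses to smooth the estimate is omitted:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def knn_density(samples, query, k=10):
    """k-nearest-neighbor density estimate: rho(x) ~ k / (n * V_d(r_k))."""
    n, d = samples.shape
    r_k = cKDTree(samples).query(query, k=k)[0][:, -1]  # distance to k-th neighbour
    # log of the d-ball volume V_d(r_k) = pi^(d/2) / Gamma(d/2 + 1) * r_k^d
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1) + d * np.log(r_k)
    return np.exp(np.log(k) - np.log(n) - log_vd)

# Bias potential from sampled collective variables:
# V_bias(x) = kT * ln rho(x) pushes the walker away from well-sampled regions.
rng = np.random.default_rng(0)
cv_samples = rng.normal(size=(5000, 4))   # hypothetical 4-dimensional CVs
grid = rng.normal(size=(10, 4))           # points at which to evaluate the bias
kT = 2.5                                  # ~kJ/mol at 300 K (illustrative)
print(kT * np.log(knn_density(cv_samples, grid)))
```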

  14. Analysis of the influences on plumage condition in laying hens: How suitable is a whole body plumage score as an outcome?

    PubMed

    Campe, A; Hoes, C; Koesters, S; Froemke, C; Bougeard, S; Staack, M; Bessei, W; Manton, A; Scholz, B; Schrader, L; Thobe, P; Knierim, U

    2018-02-01

    An important indicator of the health and behavior of laying hens is their plumage condition. Various scoring systems are used, and various risk factors for feather damage have been described. Often, a summarized score of different body parts is used to describe the overall condition of the plumage of a bird. However, it has not yet been assessed whether such a whole body plumage score is a suitable outcome variable when analyzing the risk factors for plumage deterioration. Data collected within a German project on farms keeping laying hens in aviaries were analyzed to investigate whether and the extent to which information is lost when summarizing the scores of the separate body parts. Two models were fitted using multiblock redundancy analysis, in which the first model included the whole body score as one outcome variable, while the second model included the scores of the individual body parts as multiple outcome variables. Although basically similar influences could be discovered with both models, the investigation of the individual body parts allowed for consideration of the influences on each body part separately and for the identification of additional influences. Furthermore, ambivalent influences (a factor differently associated with 2 different outcomes) could be detected with this approach, and possible dilutive effects were avoided. We conclude that influences might be underestimated or even missed when modeling their explanatory power for an overall score only. Therefore, multivariate methods that allow for the consideration of individual body parts are an interesting option when investigating influences on plumage condition. © 2017 Poultry Science Association Inc.

  15. Morphological versus molecular markers to describe variability in Juniperus excelsa subsp. excelsa (Cupressaceae).

    PubMed

    Douaihy, Bouchra; Sobierajska, Karolina; Jasińska, Anna Katarzyna; Boratyńska, Krystyna; Ok, Tolga; Romo, Angel; Machon, Nathalie; Didukh, Yakiv; Bou Dagher-Kharrat, Magda; Boratyński, Adam

    2012-01-01

    Juniperus excelsa M.-Bieb. is a major forest element in the mountains of the eastern part of the Mediterranean and sub-Mediterranean regions. This study comprises the first morphological investigation covering a large part of the geographical range of J. excelsa and aims to verify the congruency between the morphological results and the molecular results of a previous study. We studied 14 populations sampled from Greece, Cyprus, Ukraine, Turkey and Lebanon, 11 of which had previously been investigated using molecular markers. Three hundred and ninety-four individuals of J. excelsa were examined using nine biometric features characterizing cones, seeds and shoots, and eight derived ratios. Statistical analyses were conducted in order to evaluate the intra- and inter-population morphological variability. The level of intra-population variability observed did not show any geographical trends. The total variation mostly depended on the ratios of cone diameter/seed width and seed width/seed length. The discriminant analysis, the Ward agglomeration method and barrier analysis results showed a separation of the sampled populations into three main clusters. These results confirmed, in part, the geographical differentiation revealed by molecular markers, albeit with a lower level of differentiation and a less clear geographical pattern. The most differentiated populations under both marker types corresponded to old, isolated populations at high altitudes in Lebanon (>2000 m). Moreover, a separation of the northern Turkish population from the southern Turkish populations was observed with both marker types. Morphological variation, together with genetic and biogeographic studies, makes an effective tool for detecting relict plant populations as well as populations subjected to more intensive selection.

  16. Field and laboratory arsenic speciation methods and their application to natural-water analysis

    USGS Publications Warehouse

    Bednar, A.J.; Garbarino, J.R.; Burkhardt, M.R.; Ranville, J.F.; Wildeman, T.R.

    2004-01-01

    The toxic and carcinogenic properties of inorganic and organic arsenic species make their determination in natural water vitally important. Determination of individual inorganic and organic arsenic species is critical because the toxicology, mobility, and adsorptivity vary substantially. Several methods for the speciation of arsenic in groundwater, surface-water, and acid mine drainage sample matrices using field and laboratory techniques are presented. The methods provide quantitative determination of arsenite [As(III)], arsenate [As(V)], monomethylarsonate (MMA), dimethylarsinate (DMA), and roxarsone in 2-8 min at detection limits of less than 1 µg arsenic per liter (µg As L-1). All the methods use anion exchange chromatography to separate the arsenic species and inductively coupled plasma-mass spectrometry as an arsenic-specific detector. Different methods were needed because some sample matrices did not have all arsenic species present or were incompatible with particular high-performance liquid chromatography (HPLC) mobile phases. The bias and variability of the methods were evaluated using total arsenic, As(III), As(V), DMA, and MMA results from more than 100 surface-water, groundwater, and acid mine drainage samples, and reference materials. Concentrations in test samples were as much as 13,000 µg As L-1 for As(III) and 3700 µg As L-1 for As(V). Methylated arsenic species were less than 100 µg As L-1 and were found only in certain surface-water samples, and roxarsone was not detected in any of the water samples tested. The distribution of inorganic arsenic species in the test samples ranged from 0% to 90% As(III). Laboratory-speciation method variability for As(III), As(V), MMA, and DMA in reagent water at 0.5 µg As L-1 was 8-13% (n=7). Field-speciation method variability for As(III) and As(V) at 1 µg As L-1 in reagent water was 3-4% (n=3). © 2003 Elsevier Ltd. All rights reserved.

  17. Determination of somatropin charged variants by capillary zone electrophoresis - optimisation, verification and implementation of the European pharmacopoeia method.

    PubMed

    Storms, S M; Feltus, A; Barker, A R; Joly, M-A; Girard, M

    2009-03-01

    Measurement of somatropin charged variants by isoelectric focusing was replaced with capillary zone electrophoresis in the January 2006 European Pharmacopoeia Supplement 5.3, based on results from an interlaboratory collaborative study. Due to incompatibilities and method-robustness issues encountered prior to verification, a number of method parameters required optimisation. As the use of a diode array detector at 195 nm or 200 nm led to a loss of resolution, a variable wavelength detector using a 200 nm filter was employed. Improved injection repeatability was obtained by increasing the injection time and pressure, and changing the sample diluent from water to running buffer. Finally, definition of capillary pre-treatment and rinse procedures resulted in more consistent separations over time. Method verification data are presented demonstrating linearity, specificity, repeatability, intermediate precision, limit of quantitation, sample stability, solution stability, and robustness. Based on these experiments, several modifications to the current method have been recommended and incorporated into the European Pharmacopoeia to help improve method performance across laboratories globally.

  18. Study on Separation of Structural Isomer with Magneto-Archimedes method

    NASA Astrophysics Data System (ADS)

    Kobayashi, T.; Mori, T.; Akiyama, Y.; Mishima, F.; Nishijima, S.

    2017-09-01

    Organic compounds are refined by separating their structural isomers; however, each existing separation method has drawbacks. For example, distillation consumes a large amount of energy. To address these problems, a new separation method is needed. Considering that organic compounds are diamagnetic, we focused on the magneto-Archimedes method. With this method, a particle mixture dispersed in a paramagnetic medium can be separated in a magnetic field owing to differences in the density and magnetic susceptibility of the particles. In this study, we succeeded in separating the isomers of phthalic acid, as an example of structural isomers, using MnCl2 solution as the paramagnetic medium. In order to use the magneto-Archimedes method for separating materials for food or medicine, we proposed a harmless medium using oxygen and a fluorocarbon instead of MnCl2 aqueous solution. As a result, the possibility of separating every structural isomer was shown.
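
    For context, magneto-Archimedes separation rests on a force balance the abstract leaves implicit. Under the usual small-susceptibility (SI) assumptions, a particle of density ρ_p and volume susceptibility χ_p levitates in a medium (ρ_m, χ_m) at the height z where

```latex
% Textbook levitation condition for magneto-Archimedes separation:
% magnetic buoyancy balances the net gravitational force on the particle.
(\rho_p - \rho_m)\, g \;=\; \frac{\chi_p - \chi_m}{\mu_0}\, B\,\frac{\mathrm{d}B}{\mathrm{d}z}
```

    so isomers whose densities or susceptibilities differ even slightly settle at different equilibrium heights in the field gradient. This is the standard balance condition, not a formula quoted from the paper.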

  19. 12 CFR 250.411 - Interlocking relationships between member bank and variable annuity insurance company.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and variable annuity insurance company. 250.411 Section 250.411 Banks and Banking FEDERAL RESERVE... between member bank and variable annuity insurance company. (a) The Board has recently been asked to... of the insurance company, of which the accumulation fund is a “separate account,” but as to which the...

  20. 12 CFR 250.411 - Interlocking relationships between member bank and variable annuity insurance company.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and variable annuity insurance company. 250.411 Section 250.411 Banks and Banking FEDERAL RESERVE... between member bank and variable annuity insurance company. (a) The Board has recently been asked to... of the insurance company, of which the accumulation fund is a “separate account,” but as to which the...

  1. 12 CFR 250.411 - Interlocking relationships between member bank and variable annuity insurance company.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and variable annuity insurance company. 250.411 Section 250.411 Banks and Banking FEDERAL RESERVE... between member bank and variable annuity insurance company. (a) The Board has recently been asked to... of the insurance company, of which the accumulation fund is a “separate account,” but as to which the...

  2. Summer performance results obtained from simultaneously testing ten solar collectors outdoors

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1977-01-01

    Ten solar collectors were simultaneously tested outdoors. Efficiency data were correlated using a method that separates the solar variables (flux, incident angle) from the desired performance parameters (heat loss, absorptance, transmittance), which are unique to a given collector design. Tests were conducted on both clear and moderately cloudy days. Correlating the data in this manner, a 2-glass, black paint collector exhibited a decrease in efficiency of 5 percentage points relative to the baseline data for an exposure time of 2 years, 4 months. Condensation on the collector glazing was thought to be a contributing factor in this efficiency change.

  3. Generation of human Fab antibody libraries: PCR amplification and assembly of light- and heavy-chain coding sequences.

    PubMed

    Andris-Widhopf, Jennifer; Steinberger, Peter; Fuller, Roberta; Rader, Christoph; Barbas, Carlos F

    2011-09-01

    The development of therapeutic antibodies for use in the treatment of human diseases has long been a goal for many researchers in the antibody field. One way to obtain these antibodies is through phage-display libraries constructed from human lymphocytes. This protocol describes the construction of human Fab (fragment antigen binding) antibody libraries. In this method, the individual rearranged heavy- and light-chain variable regions are amplified separately and are linked through a series of overlap polymerase chain reaction (PCR) steps to give the final Fab products that are used for cloning.

  4. International Conference on Computing Methods in Applied Sciences and Engineering (9th) Held in Paris, France on 29 January-2 February 1990

    DTIC Science & Technology

    1990-02-02

    Acronym list excerpt: National Aero-Space Plane; NTC no time counter; TSS-2 Tethered Satellite System-2; VHS variable hard sphere; VSL viscous shock-layer. Text excerpts: ... required at each time step to evaluate the mass fractions Y_i, it is shown in [21] that the matrix of this linear system is an M-matrix (see e.g. [42]), and ... first rewrite system (4.7)-(4.8) in the following form, separating the time-dependent, convective, diffusive and reactive terms: W_t + F(W)_x + G(W)_y ...

  5. Improved methods for the measurement and modeling of PV module and system performance for all operating conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, D.L.

    1995-11-01

    The objective of this work was to develop an improved performance model for modules and systems, valid for all operating conditions, for use in module specifications, system and BOS component design, and system rating or monitoring. The approach taken was to identify and quantify the influence of the dominant factors of solar irradiance, cell temperature, angle of incidence, and solar spectrum; use outdoor test procedures to separate the effects of electrical, thermal, and optical performance; use fundamental cell characteristics to improve the analysis; and combine the factors in a simple model using common variables.

  6. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  7. Interannual drivers of the seasonal cycle of CO2 in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Gregor, Luke; Kok, Schalk; Monteiro, Pedro M. S.

    2018-04-01

    Resolving and understanding the drivers of variability of CO2 in the Southern Ocean and its potential climate feedback is one of the major scientific challenges of the ocean-climate community. Here we use a regional approach on empirical estimates of pCO2 to understand the role that seasonal variability plays in long-term CO2 changes in the Southern Ocean. Machine learning has become the preferred empirical modelling tool to interpolate time- and location-restricted ship measurements of pCO2. In this study we use an ensemble of three machine-learning products: support vector regression (SVR) and random forest regression (RFR) from Gregor et al. (2017), and the self-organising-map feed-forward neural network (SOM-FFN) method from Landschützer et al. (2016). The interpolated estimates of ΔpCO2 are separated into nine regions in the Southern Ocean defined by basin (Indian, Pacific, and Atlantic) and biomes (as defined by Fay and McKinley, 2014a). The regional approach shows that, while there is good agreement in the overall trend of the products, there are periods and regions where the confidence in estimated ΔpCO2 is low due to disagreement between the products. The regional breakdown of the data highlighted the seasonal decoupling of the modes for summer and winter interannual variability. Winter interannual variability had a longer mode of variability than summer, varying on a 4-6-year timescale. We separate the analysis of ΔpCO2 and its drivers into summer and winter. We find that understanding the variability of ΔpCO2 and its drivers on shorter timescales is critical to resolving the long-term variability of ΔpCO2. Results show that ΔpCO2 is rarely driven by thermodynamics during winter, but rather by mixing and stratification, as indicated by the stronger correlation of ΔpCO2 variability with mixed layer depth. Summer pCO2 variability is consistent with chlorophyll a variability, where higher concentrations of chlorophyll a correspond with lower pCO2 concentrations. In regions of low chlorophyll a concentrations, wind stress and sea surface temperature emerged as stronger drivers of ΔpCO2. In summary, we propose that sub-decadal variability is explained by summer drivers, while winter variability contributes to the long-term changes associated with the Southern Annular Mode (SAM). This approach is a useful framework to assess the drivers of ΔpCO2 but would greatly benefit from improved estimates of ΔpCO2 and a longer time series.

  8. Efficient approach for the extraction of proanthocyanidins from Cinnamomum longepaniculatum leaves using ultrasonic irradiation and an evaluation of their inhibition activity on digestive enzymes and antioxidant activity in vitro.

    PubMed

    Liu, Zaizhi; Mo, Kailin; Fei, Shimin; Zu, Yuangang; Yang, Lei

    2017-08-01

    Proanthocyanidins were separated for the first time from Cinnamomum longepaniculatum leaves. An experiment-based extraction strategy was used to investigate the efficiency of an ultrasound-assisted method for proanthocyanidin extraction. The Plackett-Burman design results revealed that the ultrasonication time, ultrasonic power and liquid/solid ratio were the most significant parameters among the six variables in the extraction process. After further optimization using a Box-Behnken design, the optimal conditions were obtained as follows: extraction temperature, 100°C; ethanol concentration, 70%; pH 5; ultrasonication power, 660 W; ultrasonication time, 44 min; liquid/solid ratio, 20 mL/g. Under these conditions, the extraction yield of the proanthocyanidins using the ultrasound-assisted method was 7.88 ± 0.21 mg/g, which is higher than that obtained using traditional methods. The phloroglucinolysis products of the proanthocyanidins, including the terminal units and derivatives from the extension units, were tentatively identified using liquid chromatography with tandem mass spectrometry. Cinnamomum longepaniculatum proanthocyanidins have promising antioxidant and anti-nutritional properties. In summary, an ultrasound-assisted method in combination with a response surface experimental design is an efficient methodology for the isolation of proanthocyanidins from Cinnamomum longepaniculatum leaves, and this method could be used for the separation of other bioactive compounds. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex-x-order prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions Conventional variables overestimated reporting accuracy and masked order prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  10. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
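
    A hedged simulation of the dual-source idea: trial-to-trial noise is added to the stimulus before a sigmoidal recruitment curve, and multiplicative noise is applied after it. The sigmoid shape, noise magnitudes, and log-normal output form are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mep_amplitudes(x, n_trials, sigma_in=0.05, sigma_out=0.5):
    """MEP amplitudes with variability on both sides of the sigmoidal
    recruitment nonlinearity (dual variability source model)."""
    def sigmoid(s):                  # neural recruitment curve (mV, arbitrary)
        return 5.0 / (1.0 + np.exp(-25.0 * (s - 0.5)))
    x_eff = x + sigma_in * rng.normal(size=n_trials)          # noise before
    return sigmoid(x_eff) * np.exp(sigma_out * rng.normal(size=n_trials))
                                                              # noise after

# Around the steep part of the IO curve the input-side noise dominates and
# skews the amplitude distribution; at saturation the output side dominates.
for x in (0.4, 0.5, 0.9):
    amps = mep_amplitudes(x, 10_000)
    print(f"x = {x}: mean = {amps.mean():.2f} mV, sd = {amps.std():.2f} mV")
```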

  11. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between the fast spiking dynamics and the slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter ϵ, the ratio of the fast and slow time scales, which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit ϵ → 0). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a loop expansion of the path integral in ϵ, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.

  12. Benthic algae of benchmark streams in agricultural areas of eastern Wisconsin

    USGS Publications Warehouse

    Scudder, Barbara C.; Stewart, Jana S.

    2001-01-01

    Multivariate analyses indicated that environmental factors at multiple scales affect algae. Although two-way indicator species analysis (TWINSPAN), detrended correspondence analysis (DCA), and canonical correspondence analysis (CCA) generally separated sites according to RHU, only the DCA ordination indicated a separation of sites according to ecoregion. Environmental variables correlated with DCA axes 1 and 2, and therefore indicated as important explanatory factors for algal distribution and abundance, were factors related to stream size, basin land use/cover, geomorphology, hydrogeology, and riparian disturbance. CCA analyses with a more limited set of environmental variables indicated that pH, average width of natural riparian vegetation (segment scale), basin land use/cover, and Q/Q2 were the most important variables affecting the distribution and relative abundance of benthic algae at the 20 benchmark streams.

  13. Cepheid temperature and the Blazhko effect

    NASA Technical Reports Server (NTRS)

    Teays, Terry

    1995-01-01

    Two separate research projects were covered under this contract. The first project was to study the temperatures of Cepheid variable stars, while the second was a study of the Blazhko effect in RR Lyrae, both of them using IUE data. They are reported on separately in what follows.

  14. The method of attachment influences accelerometer-based activity data in dogs.

    PubMed

    Martin, Kyle W; Olsen, Anastasia M; Duncan, Colleen G; Duerr, Felix M

    2017-02-10

    Accelerometer-based activity monitoring is a promising new tool in veterinary medicine used to objectively assess activity levels in dogs. To date, it is unknown how device orientation, attachment method, and attachment of a leash to the collar holding an accelerometer affect canine activity data. It was our goal to evaluate whether attachment methods of accelerometers affect activity counts. Eight healthy, client-owned dogs were fitted with two identical neck collars to which two identical activity monitors were attached using six different methods of attachment. These methods of attachment evaluated the use of a protective case, positioning of the activity monitor and the tightness of attachment of the accelerometer. Lastly, the effect of leash attachment to the collar was evaluated. For trials where the effect of leash attachment to the collar was not being studied, the leash was attached to a harness. Activity data obtained from separate monitors within a given experiment were compared using Pearson correlation coefficients and across all experiments using the Kruskal-Wallis Test. There was excellent correlation and low variability between activity monitors on separate collars when the leash was attached to a harness, regardless of their relative positions. There was good correlation when activity monitors were placed on the same collar regardless of orientation. There were poor correlations between activity monitors in three experiments: when the leash was fastened to the collar that held an activity monitor, when one activity monitor was housed in the protective casing, and when one activity monitor was loosely zip-tied to the collar rather than threaded on using the provided metal loop. Follow-up, pair-wise comparisons identified the correlation associated with these three methods of attachment to be statistically different from the level of correlation when monitors were placed on separate collars. While accelerometer-based activity monitors are useful tools to objectively assess physical activity in dogs, care must be taken when choosing a method to attach the device. The attachment of the activity monitor to the collar should utilize a second, dedicated collar that is not used for leash attachment and the attachment method should remain consistent throughout a study period.

  15. Off-axis impact of unidirectional composites with cracks: Dynamic stress intensification

    NASA Technical Reports Server (NTRS)

    Sih, G. C.; Chen, E. P.

    1979-01-01

    The dynamic response of unidirectional composites under off axis (angle loading) impact is analyzed by assuming that the composite contains an initial flaw in the matrix material. The analytical method utilizes Fourier transform for the space variable and Laplace transform for the time variable. The off axis impact is separated into two parts, one being symmetric and the other skew-symmetric with reference to the crack plane. Transient boundary conditions of normal and shear tractions are applied to a crack embedded in the matrix of the unidirectional composite. The two boundary conditions are solved independently and the results superimposed. Mathematically, these conditions reduce the problem to a system of dual integral equations which are solved in the Laplace transform plane for the transformation of the dynamic stress intensity factor. The time inversion is carried out numerically for various combinations of the material properties of the composite and the results are displayed graphically.

  16. Natural extension of fast-slow decomposition for dynamical systems

    NASA Astrophysics Data System (ADS)

    Rubin, J. E.; Krauskopf, B.; Osinga, H. M.

    2018-01-01

    Modeling and parameter estimation to capture the dynamics of physical systems are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present an approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. It relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.

  17. Improving Efficiency in Multi-Strange Baryon Reconstruction in d-Au at STAR

    NASA Astrophysics Data System (ADS)

    Leight, William

    2003-10-01

    We report preliminary multi-strange baryon measurements for d-Au collisions recorded at RHIC by the STAR experiment. After using classical topological analysis, in which cuts for each discriminating variable are adjusted by hand, we investigate improvements in signal-to-noise optimization using Linear Discriminant Analysis (LDA). LDA is an algorithm for finding, in the n-dimensional space of the n discriminating variables, the axis on which the signal and noise distributions are most separated. LDA is the first step in moving towards more sophisticated techniques for signal-to-noise optimization, such as Artificial Neural Nets. Due to the relatively low background and sufficiently high yields of d-Au collisions, they form an ideal system to study these possibilities for improving reconstruction methods. Such improvements will be extremely important for forthcoming Au-Au runs in which the size of the combinatoric background is a major problem in reconstruction efforts.
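
    The abstract's one-sentence description of LDA maps directly onto a few lines of scikit-learn: the sketch below projects hypothetical topological variables onto the single most-separating axis, so that one cut replaces n hand-tuned cuts. The variable distributions and dimensions are invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical topological discriminating variables for signal candidates
# (e.g. decay length, pointing angle, daughter DCA) versus combinatoric noise.
signal = rng.normal(loc=[1.0, 0.5, 2.0], scale=0.5, size=(1000, 3))
noise = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(5000, 3))
X = np.vstack([signal, noise])
y = np.r_[np.ones(1000), np.zeros(5000)]

lda = LinearDiscriminantAnalysis().fit(X, y)
w = lda.coef_[0] / np.linalg.norm(lda.coef_[0])  # the most-separating axis
projected = X @ w   # cut on this one variable instead of n separate cuts
```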

  18. Clinical and Serological Predictors of Suicide in Schizophrenia and Major Mood Disorders.

    PubMed

    Dickerson, Faith; Origoni, Andrea; Schweinfurth, Lucy A B; Stallings, Cassie; Savage, Christina L G; Sweeney, Kevin; Katsafanas, Emily; Wilcox, Holly C; Khushalani, Sunil; Yolken, Robert

    2018-03-01

    Persons with serious mental illness are at high risk for suicide, but this outcome is difficult to predict. Serological markers may help to identify suicide risk. We prospectively assessed 733 persons with a schizophrenia spectrum disorder, 483 with bipolar disorder, and 76 with major depressive disorder for an average of 8.15 years. The initial evaluation consisted of clinical and demographic data as well as blood samples from which immunoglobulin G antibodies to herpes viruses and Toxoplasma gondii were measured. Suicide was determined using data from the National Death Index. Cox proportional hazard regression models examined the role of baseline variables on suicide outcomes. Suicide was associated with male sex, divorced/separated status, Caucasian race, and elevated levels of antibodies to Cytomegalovirus (CMV). Increasing levels of CMV antibodies were associated with increasing hazard ratios for suicide. The identification of serological variables associated with suicide might provide more personalized methods for suicide prevention.

  19. Four New Binary Stars in the Field of CL Aurigae. II

    NASA Astrophysics Data System (ADS)

    Kim, Chun-Hwey; Lee, Jae Woo; Duck, Hyun Kim; Andronov, Ivan L.

    2010-12-01

    We report on a discovery of four new variable stars (USNO-B1.0 1234-0103195, 1235-0097170, 1236-0100293 and 1236-0100092) in the field of CL Aur. The stars are classified as eclipsing binary stars with orbital periods of 0.5137413(23) days (EW type), 0.8698365(26) days (EA) and 4.0055842(40) days (EA with a significant orbital eccentricity), respectively. The fourth star (USNO-B1.0 1236-0100092) showed only one partial ascending branch of the light curves, although 22 nights were covered at the 61-cm telescope of the Sobaeksan Optical Astronomy Observatory (SOAO) in Korea. Fourteen minima timings for these stars are published separately. In addition to the original discovery paper (Kim et al. 2010), we discuss methodological problems and present results of mathematical modeling of the light curves using other methods, i.e. trigonometric polynomial fits and the newly developed fit "NAV" ("New Algol Variable").

  20. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from the Moderate Resolution Imaging Spectroradiometer (MODIS).
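
    A minimal sketch of the informed-initialization idea using scikit-learn's plain NMF with init="custom": library spectra seed the source matrix, and non-negativity is enforced throughout. The full INMF adds the spectral and spatial variability constraints described above, which plain NMF lacks; the dimensions and data below are stand-ins, not HICO values:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_bands, n_sources = 2000, 87, 4      # illustrative dimensions
X = rng.random((n_pixels, n_bands))             # radiances, stand-in data

# "Informed" initial guess: rows of H0 would be library spectra in practice
# (e.g. water-leaving, aerosol, and molecular scattering shapes).
H0 = np.abs(rng.random((n_sources, n_bands)))
W0 = np.abs(rng.random((n_pixels, n_sources)))

model = NMF(n_components=n_sources, init="custom", max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)          # per-pixel source abundances
H = model.components_                           # recovered source spectra
```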

  1. Estimating the annual number of strokes and the issue of imperfect data: an example from Australia.

    PubMed

    Cadilhac, Dominique A; Vos, Theo; Thrift, Amanda G

    2014-01-01

    Estimates of strokes in Australia are typically obtained using 1996-1997 age-specific attack rates from the pilot North East Melbourne Stroke Incidence (NEMESIS) Study (eight postcode regions). Declining hospitalizations for stroke indicate the potential to overestimate cases. Our aim was to illustrate how current methods may overestimate the number of strokes in Australia. Hospital separations data (primary discharge ICD10 codes I60 to I64) and three stroke projection models were compared. Each model applied age- and gender-specific attack rates from the NEMESIS study to the 2003 population. One model used the 2003 Burden of Disease approach, in which the ratio of the 1996-1997 NEMESIS study incidence to the hospital separation rate in the same year was adjusted by the 2002/2003 hospital separation rate within the same geographic region using relevant ICD primary diagnosis codes. Hospital separations data were inflated by 12.1% to account for nonhospitalized stroke, while the Burden of Disease model was inflated by 27.6% to account for recurrent stroke events in that year. The third model used 1997-1999 attack rates from the larger 22-postcode NEMESIS study region. In 2003, Australian hospitalizations for stroke (I60 to I64) numbered 33,022, and extrapolation to all stroke (hospitalized and nonhospitalized) gave 37,568. Applying NEMESIS study attack rates to the 2003 Australian population, 50,731 strokes were projected. Fewer cases for 2003 were estimated with the Burden of Disease model (28,364) and with the 22-postcode NEMESIS study rates (41,332). Estimates of the number of strokes in a country can vary widely depending on the recency of the data, the type of data available, and the methods used. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.

  2. Simplified and reproducible radiochemical separations for the production of high specific activity 61Cu, 64Cu, 86Y and 55Co

    NASA Astrophysics Data System (ADS)

    Valdovinos, Hector F.; Graves, Stephen; Barnhart, Todd; Nickles, Robert J.

    2017-05-01

    Four positron-emitting radiometals, 61Cu, 64Cu, 86Y and 55Co, are increasingly being employed as labels for positron emission tomography (PET) imaging due to their favorable half-lives, which match the pharmacokinetics of targeting moieties such as peptides, antibodies and antibody fragments, and due to their use in internal dosimetry and treatment planning of targeted radionuclide therapy when they are substituted for their therapeutic analogues 67Cu, 90Y and 58mCo. The main disadvantage of the production methods reported in the literature for these radionuclides is that the final separated radioactive product is diluted in a large volume (> 5 mL), which necessitates a lengthy evaporation step in a large vessel that is difficult to automate in-line after the chromatographic steps and that results in a highly variable amount of radioactivity lost on the vessel's surface. In this work we present simplified radiochemical separation methods for the production of 61Cu, 64Cu, 86Y and 55Co that result in: 1) a final eluate volume ≤ 600 µL; 2) reproducible separation yields of 84±4%, 82±6%, 94±5% and 93±6%, respectively; and 3) effective specific activities of 64.0±45.0 GBq/μmol NOTA, 114.9±40.1 GBq/μmol NOTA, 1.4±0.5 GBq/μmol DTPA and 10.1±5.7 GBq/μmol NOTA, respectively; without compromising the recycling efficiencies of the respective isotopically-enriched target materials 60Ni, 64Ni, 86SrCO3 and 58Ni, which accounted for 98±1%, 96±3%, 90±3% and 94±1%, respectively.

  3. Using global sensitivity analysis of demographic models for ecological impact assessment.

    PubMed

    Aiello-Lammens, Matthew E; Akçakaya, H Resit

    2017-02-01

    Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
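
    A toy version of the idea, as a hedged sketch: vary uncertain demographic parameters and an impact-scenario flag together, run a simple population projection, and read variable importance from standardized regression coefficients. Including the scenario flag as its own regressor is what separates impact effects from parameter uncertainty; the model, parameter ranges, and impact penalty below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Sample uncertain demographic inputs and an impact-scenario indicator together.
survival = rng.uniform(0.75, 0.95, n)
fecundity = rng.uniform(0.8, 1.6, n)
sea_level_rise = rng.integers(0, 2, n)     # 0 = baseline, 1 = impact scenario

# Toy population model: 20-year growth with an impact penalty on fecundity.
lam = survival + 0.3 * fecundity * (1 - 0.25 * sea_level_rise)
final_N = 100 * lam ** 20

# Standardized regression coefficients as the importance measure.
Z = np.column_stack([survival, fecundity, sea_level_rise])
Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0)
ys = (final_N - final_N.mean()) / final_N.std()
coef, *_ = np.linalg.lstsq(Zs, ys, rcond=None)
print(dict(zip(["survival", "fecundity", "sea_level_rise"], coef)))
```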

  4. Empirical Investigation of Critical Transitions in Paleoclimate

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A.

    2016-12-01

    In this work we apply a new empirical method for the analysis of complex spatially distributed systems to paleoclimate data. The method consists of two general parts: (i) revealing the optimal phase-space variables and (ii) constructing an empirical prognostic model from observed time series. The method of phase-space variable construction is based on the decomposition of data into nonlinear dynamical modes; it was successfully applied to the global SST field and allowed us to clearly separate time scales and reveal a climate shift in the observed data interval [1]. The second part, the Bayesian approach to optimal evolution operator reconstruction from time series, is based on representation of the evolution operator in the form of a nonlinear stochastic function represented by artificial neural networks [2,3]. In this work we focus on the investigation of critical transitions - abrupt changes in climate dynamics - in much longer time scale processes. It is well known that there were a number of critical transitions on different time scales in the past. Here, we demonstrate the first results of applying our empirical methods to the analysis of paleoclimate variability. In particular, we discuss the possibility of detecting, identifying and predicting such critical transitions by means of nonlinear empirical modeling using paleoclimate record time series. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510 2. Molkov, Ya. I., Mukhin, D. N., Loskutov, E. M., & Feigin, A. M. (2012). Random dynamical models from time series. Phys. Rev. E, Vol. 85, n. 3. 3. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models. Journal of Climate, 28(5), 1962-1976. http://doi.org/10.1175/JCLI-D-14-00240.1

  5. Time Variation of the Distance Separating Bomb and Dive Bomber Subsequent to Bomb Release

    NASA Technical Reports Server (NTRS)

    Mathews, Charles W.

    1952-01-01

    A study has been made of the variation of the distance separating bomb and aircraft with time after release, as applied to dive-bombing operations. Separation distances determined from this study are presented in terms of two variables only, dive angle and maximum airplane accelerometer reading; the values of separation distance include the effects of delay in initiation of the pull-out and lag in attainment of the maximum normal acceleration. The report contains the analysis and calculations of the separation distances between bomb and dive bomber following bomb release; separation distances, as determined by the dive angle and the maximum airplane accelerometer reading, are presented in a single chart.

  6. Item Response Theory with Covariates (IRT-C): Assessing Item Recovery and Differential Item Functioning for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.

    2016-01-01

    In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…

  7. Sleep variability and cardiac autonomic modulation in adolescents – Penn State Child Cohort (PSCC) study

    PubMed Central

    Rodríguez-Colón, Sol M.; He, Fan; Bixler, Edward O.; Fernandez-Mendoza, Julio; Vgontzas, Alexandros N.; Calhoun, Susan; Zheng, Zhi-Jie; Liao, Duanping

    2015-01-01

    Objective: To investigate the effects of objectively measured habitual sleep patterns on cardiac autonomic modulation (CAM) in a population-based sample of adolescents. Methods: We used data from 421 adolescents who completed the follow-up examination in the Penn State Child Cohort study. CAM was assessed by heart rate variability (HRV) analysis of beat-to-beat normal R-R intervals from a 39-h electrocardiogram, on a 30-min basis. The HRV indices included frequency-domain (HF, LF, and LF/HF ratio) and time-domain (SDNN, RMSSD, and heart rate or HR) variables. Actigraphy was used for seven consecutive nights to estimate nightly sleep duration and time in bed. The seven-night means (SDs) of sleep duration and sleep efficiency were used to represent sleep duration, duration variability, sleep efficiency, and efficiency variability, respectively. HF and LF were log-transformed for statistical analysis. Linear mixed-effect models were used to analyze the association between sleep patterns and CAM. Results: After adjusting for major confounders, increased sleep duration variability and efficiency variability were significantly associated with lower HRV and higher HR during the 39-h period, as well as separately during daytime and nighttime. For instance, a 1-h increase in sleep duration variability was associated with −0.14 (0.04), −0.12 (0.06), and −0.16 (0.05) ms² decreases in total, daytime, and nighttime HF, respectively. No associations were found between sleep duration or sleep efficiency and HRV. Conclusion: Higher habitual sleep duration variability and efficiency variability are associated with lower HRV and higher HR, suggesting that an irregular sleep pattern has an adverse impact on CAM, even in healthy adolescents. PMID:25555635
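
    The core analysis is a linear mixed-effects regression of repeated 30-min HRV measurements on subject-level sleep variability. A minimal sketch with statsmodels follows; the file and column names (log_hf, dur_var, eff_var, subject_id) are hypothetical stand-ins, not the study's actual variables.

      import pandas as pd
      import statsmodels.formula.api as smf

      # One row per 30-min HRV epoch, with subject-level sleep measures repeated.
      df = pd.read_csv("hrv_epochs.csv")  # hypothetical file

      # A random intercept per subject accounts for the repeated measures.
      model = smf.mixedlm("log_hf ~ dur_var + eff_var + age + bmi",
                          data=df, groups=df["subject_id"])
      print(model.fit().summary())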

  8. Detection of a variable number of ribosomal DNA loci by fluorescent in situ hybridization in Populus species.

    PubMed

    Prado, E A; Faivre-Rampant, P; Schneider, C; Darmency, M A

    1996-10-01

    Fluorescent in situ hybridization (FISH) was applied to related Populus species (2n = 19) in order to detect rDNA loci. Interspecific variability in the number of hybridization sites was revealed using a homologous 25S clone from Populus deltoides as a probe. The application of image analysis methods to measure the fluorescence intensity of the hybridization signals enabled us to characterize major and minor loci in the 18S-5.8S-25S rDNA. We identified one pair of such rDNA clusters in Populus alba; two pairs, one major and one minor, in both Populus nigra and P. deltoides; and three pairs in Populus balsamifera (two major and one minor) and Populus euroamericana (one major and two minor). The FISH results are in agreement with those based on RFLP analysis. The pBG13 probe containing the 5S sequence from flax detected two separate clusters corresponding to the two size classes of units that coexist within the 5S rDNA of most Populus species. Key words: Populus spp., fluorescent in situ hybridization, FISH, rDNA variability, image analysis.

  9. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for the mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638
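
    A single-subject version of the change point idea can be sketched by profiling the two-segment Gaussian likelihood over the split location, letting both the mean and the variance shift. This illustrates only that component, not the paper's hierarchical Bayesian model with multiple imputation.

      import numpy as np

      def best_changepoint(y, min_seg=5):
          """Return the split index maximizing the two-segment Gaussian likelihood."""
          n = len(y)
          best_k, best_ll = None, -np.inf
          for k in range(min_seg, n - min_seg):
              a, b = y[:k], y[k:]
              # Per-segment MLEs of mean and variance, plugged into the profile
              # log-likelihood (constants dropped).
              ll = -0.5 * k * np.log(a.var()) - 0.5 * (n - k) * np.log(b.var())
              if ll > best_ll:
                  best_k, best_ll = k, ll
          return best_k

      rng = np.random.default_rng(1)
      y = np.concatenate([rng.normal(28, 2, 60),    # stable cycle lengths (days)
                          rng.normal(32, 8, 25)])   # longer, more variable cycles
      print("estimated change point:", best_changepoint(y))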

  10. Hydrometallurgical separation of rare earth elements, cobalt and nickel from spent nickel-metal-hydride batteries

    NASA Astrophysics Data System (ADS)

    Rodrigues, Luiz Eduardo Oliveira Carmo; Mansur, Marcelo Borges

    The separation of rare earth elements, cobalt and nickel from NiMH battery residues is evaluated in this paper. Analysis of the internal content of the NiMH batteries shows that nickel is the main metal present in the residue (around 50% by weight), along with potassium (2.2-10.9%), cobalt (5.1-5.5%), rare earth elements (15.3-29.0%) and cadmium (2.8%). The presence of cadmium reveals that some Ni-Cd batteries are possibly labeled as NiMH ones. The leaching of nickel and cobalt from the NiMH battery powder with sulfuric acid is efficient; the operating variables temperature and H2O2 concentration have no significant effect for the conditions studied. A mixture of rare earth elements is separated by precipitation with NaOH. Finally, solvent extraction with D2EHPA (di-2-ethylhexyl phosphoric acid) followed by Cyanex 272 (bis-2,4,4-trimethylpentyl phosphinic acid) can separate cadmium, cobalt and nickel from the leach liquor. The effects of the main operating variables of both the leaching and solvent extraction steps are discussed with the aim of maximizing metal separation for recycling purposes.

  11. Experimental realization of spatially separated entanglement with continuous variables using laser pulse trains

    PubMed Central

    Zhang, Yun; Okubo, Ryuhi; Hirano, Mayumi; Eto, Yujiro; Hirano, Takuya

    2015-01-01

    Spatially separated entanglement is demonstrated by interfering two high-repetition squeezed pulse trains. The entanglement correlation of the quadrature amplitudes between individual pulses is interrogated. It is characterized in terms of the sufficient inseparability criterion, with optimum results obtained in both the frequency domain and the time domain. The quantum correlation is also observed when the two measurement stations are separated by a physical distance of 4.5 m, which is sufficiently large to demonstrate space-like separation, after accounting for the measurement time. PMID:26278478
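
    In the continuous-variable literature, the "sufficient inseparability criterion" referred to here is usually the Duan-Simon condition on EPR-like combinations of the quadrature amplitudes. In one common normalization (vacuum variance of 1/2 per quadrature), a two-mode state is certified entangled when

      \Delta^2(\hat{x}_A - \hat{x}_B) + \Delta^2(\hat{p}_A + \hat{p}_B) < 2,

    i.e., when the combined variance falls below the two-unit vacuum bound. Identifying the abstract's criterion with this inequality is an inference from standard usage rather than something stated in the record.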

  12. Separating Spike Count Correlation from Firing Rate Correlation

    PubMed Central

    Vinci, Giuseppe; Ventura, Valérie; Smith, Matthew A.; Kass, Robert E.

    2016-01-01

    Populations of cortical neurons exhibit shared fluctuations in spiking activity over time. When measured for a pair of neurons over multiple repetitions of an identical stimulus, this phenomenon emerges as correlated trial-to-trial response variability via spike count correlation (SCC). However, spike counts can be viewed as noisy versions of firing rates, which can vary from trial to trial. From this perspective, the SCC for a pair of neurons becomes a noisy version of the corresponding firing-rate correlation (FRC). Furthermore, the magnitude of the SCC is generally smaller than that of the FRC, and is likely to be less sensitive to experimental manipulation. We provide statistical methods for disambiguating time-averaged drive from within-trial noise, thereby separating FRC from SCC. We study these methods to document their reliability, and we apply them to neurons recorded in vivo from area V4, in an alert animal. We show how the various effects we describe are reflected in the data: within-trial effects are largely negligible, while attenuation due to trial-to-trial variation dominates, and frequently produces comparisons in SCC that, because of noise, do not accurately reflect those based on the underlying FRC. PMID:26942746
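
    The attenuation described here, spike count correlation (SCC) as a noise-diluted version of firing-rate correlation (FRC), can be reproduced in a few lines. The sketch below generates correlated trial-wise rates, adds Poisson spiking noise, and compares the two correlations; it illustrates the phenomenon only, not the authors' estimators.

      import numpy as np

      rng = np.random.default_rng(2)
      n_trials = 1000

      # Trial-to-trial firing rates for two neurons with a known correlation.
      cov = [[4.0, 2.4], [2.4, 4.0]]            # implies FRC = 0.6
      rates = rng.multivariate_normal([20.0, 20.0], cov, size=n_trials)
      rates = np.clip(rates, 0.1, None)         # rates must stay positive

      # Spike counts are Poisson observations of those rates.
      counts = rng.poisson(rates)

      frc = np.corrcoef(rates.T)[0, 1]
      scc = np.corrcoef(counts.T)[0, 1]
      print(f"FRC = {frc:.2f}, SCC = {scc:.2f} (attenuated toward zero)")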

  13. On determinant representations of scalar products and form factors in the SoV approach: the XXX case

    NASA Astrophysics Data System (ADS)

    Kitanine, N.; Maillet, J. M.; Niccoli, G.; Terras, V.

    2016-03-01

    In the present article we study the form factors of quantum integrable lattice models solvable by the separation of variables (SoV) method. It was recently shown that these models admit universal determinant representations for the scalar products of the so-called separate states (a class which includes in particular all the eigenstates of the transfer matrix). These results make it possible to obtain simple expressions for the matrix elements of local operators (form factors). However, these representations have been obtained up to now only for the completely inhomogeneous versions of the lattice models considered. In this article we give a simple algebraic procedure to rewrite the scalar products (and hence the form factors) for the SoV-related models as Izergin or Slavnov type determinants. This new form leads to simple expressions for the form factors in the homogeneous and thermodynamic limits. To make the presentation of our method clear, we have chosen to explain it first for the simple case of the XXX Heisenberg chain with anti-periodic boundary conditions. We would nevertheless like to stress that the approach presented in this article applies as well to a wide range of models solved in the SoV framework.

  14. Detection of Alicyclobacillus spp. in Fruit Juice by Combination of Immunomagnetic Separation and a SYBR Green I Real-Time PCR Assay

    PubMed Central

    Yuan, Yahong; Liu, Bin; Wang, Ling; Yue, Tianli

    2015-01-01

    An approach based on immunomagnetic separation (IMS) and SYBR Green I real-time PCR with species-specific primers and melting curve analysis was proposed as a rapid and effective method for detecting Alicyclobacillus spp. in fruit juices. Specific primers targeting the 16S rDNA sequences of Alicyclobacillus spp. were designed and then confirmed by the amplification of DNA extracted from standard strains and isolates. Spiked samples containing known amounts of target bacteria were used to obtain standard curves; the correlation coefficient was greater than 0.986 and the real-time PCR amplification efficiencies were 98.9%-101.8%. The detection limit of the testing system was 2.8×10¹ CFU/mL. The coefficients of variation for intra-assay and inter-assay variability were all within the acceptable limit of 5%. In addition, the performance of the IMS-real-time PCR assay was further investigated by detecting naturally contaminated kiwi fruit juice; the sensitivity, specificity and accuracy were 91.7%, 95.9% and 95.3%, respectively. The established IMS-real-time PCR procedure provides a new method for the identification and quantitative detection of Alicyclobacillus spp. in fruit juice. PMID:26488469
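
    The reported amplification efficiencies follow from the slope of the standard curve (Ct versus log10 of the starting quantity) via E = 10^(-1/slope) - 1. A quick check of that arithmetic with made-up dilution data:

      import numpy as np

      # Hypothetical 10-fold dilution series: log10(CFU/mL) vs. measured Ct.
      log_conc = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
      ct = np.array([18.1, 21.5, 24.8, 28.2, 31.6])

      slope, intercept = np.polyfit(log_conc, ct, 1)
      efficiency = 10.0 ** (-1.0 / slope) - 1.0
      r = np.corrcoef(log_conc, ct)[0, 1]
      print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}, r^2 = {r**2:.4f}")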

  15. Using Convective Stratiform Technique (CST) method to estimate rainfall (case study in Bali, December 14th 2016)

    NASA Astrophysics Data System (ADS)

    Vista Wulandari, Ayu; Rizki Pratama, Khafid; Ismail, Prayoga

    2018-05-01

    Accurate, real-time rainfall data with wide spatial coverage are still lacking because direct rainfall observations are unavailable in many regions. Weather satellites have a very wide observational range and can be used to determine rainfall variability at better resolution than limited direct observation allows. This study uses Himawari-8 satellite data to estimate rainfall with the Convective Stratiform Technique (CST). The CST method separates convective and stratiform cloud components using infrared-channel satellite data; the cloud components are classified by slope because their physical and dynamical growth processes are very different. The research was conducted for the Bali area on December 14, 2016, and the CST results were verified against rainfall data from the Ngurah Rai Meteorological Station, Bali. The CST estimates were similar to the observations at Ngurah Rai, suggesting that the CST method can be used for rainfall estimation in the Bali region.
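
    In CST-style schemes (following Adler and Negri), convective cores are local minima of infrared brightness temperature below a threshold and the remaining cold cloud is treated as stratiform. The sketch below shows that classification step only; the threshold values and the input field are placeholders, not the study's calibrated settings.

      import numpy as np
      from scipy.ndimage import minimum_filter

      def classify_cst(tb, conv_thresh=225.0, strat_thresh=253.0):
          """Label a brightness-temperature field: 2=convective, 1=stratiform, 0=clear."""
          # Convective cores: local Tb minima colder than the convective threshold.
          local_min = tb == minimum_filter(tb, size=3)
          convective = local_min & (tb < conv_thresh)
          # Remaining cold cloud is treated as stratiform.
          stratiform = (tb < strat_thresh) & ~convective
          return np.where(convective, 2, np.where(stratiform, 1, 0))

      tb = 220.0 + 40.0 * np.random.default_rng(3).random((50, 50))  # fake IR scene (K)
      labels = classify_cst(tb)
      print("convective pixels:", int((labels == 2).sum()))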

  16. Scheduling the blended solution as industrial CO2 absorber in separation process by back-propagation artificial neural networks.

    PubMed

    Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah

    2015-11-05

    It is believed that 80% of industrial carbon dioxide can be controlled by separation and storage technologies that use blended ionic-liquid absorbers. Among the blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA) and guanidinium trifluoromethane sulfonate (gua) has shown superior stripping qualities. However, the blended solution exhibits high viscosity, which affects the cost of the separation process. In this work, the fabrication of the blend was scheduled, that is, the process was arranged, controlled and optimized. The blend's components and the operating temperature were modeled and optimized as effective input variables to minimize its viscosity as the final output, using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms, each with its own experimental design, and the optimum topology was selected using the root mean squared error (RMSE), R-squared (R(2)) and absolute average deviation (AAD). The final model (QP-4-8-1), with the minimum RMSE and AAD as well as the highest R(2), was selected to guide the fabrication of the blended solution. The model was applied to obtain the optimum initial levels of the input variables, which were temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. The model also yielded the relative importance ordering of the variables, x[gua]>temperature>x[MDEA]>x[H2O]; thus none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum points of the variables that minimize the viscosity, which were validated by further experiments; the validated results confirmed the model's schedulability. Accordingly, ANN succeeded in modeling the initial components of blended solutions used as CO2-capture absorbers in separation technologies, in a way that can be scaled up to industry. Copyright © 2015 Elsevier B.V. All rights reserved.
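
    The QP-4-8-1 topology is a feed-forward network with four inputs (temperature and the three mole fractions), one hidden layer of eight nodes, and viscosity as the single output. A minimal scikit-learn stand-in for that topology is sketched below; the data file and column layout are hypothetical.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      # Hypothetical design matrix: [T (K), x_gua, x_MDEA, x_H2O] -> viscosity.
      data = np.loadtxt("blend_design.csv", delimiter=",")
      X, y = data[:, :4], data[:, 4]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      net = MLPRegressor(hidden_layer_sizes=(8,),   # the "4-8-1" part of QP-4-8-1
                         max_iter=10000, random_state=0)
      net.fit(X_tr, y_tr)
      print("held-out R^2:", net.score(X_te, y_te))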

  17. The EPOCH Project. I. Periodic variable stars in the EROS-2 LMC database

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Won; Protopapas, Pavlos; Bailer-Jones, Coryn A. L.; Byun, Yong-Ik; Chang, Seo-Won; Marquette, Jean-Baptiste; Shin, Min-Su

    2014-06-01

    The EPOCH (EROS-2 periodic variable star classification using machine learning) project aims to detect periodic variable stars in the EROS-2 light curve database. In this paper, we present the first result of the classification of periodic variable stars in the EROS-2 LMC database. To classify these variables, we first built a training set by compiling known variables in the Large Magellanic Cloud area from the OGLE and MACHO surveys. We crossmatched these variables with the EROS-2 sources and extracted 22 variability features from 28 392 light curves of the corresponding EROS-2 sources. We then used the random forest method to classify the EROS-2 sources in the training set. We designed the model to separate not only δ Scuti stars, RR Lyraes, Cepheids, eclipsing binaries, and long-period variables, the superclasses, but also their subclasses, such as RRab, RRc, RRd, and RRe for RR Lyraes, and similarly for the other variable types. The model trained using only the superclasses shows 99% recall and precision, while the model trained on all subclasses shows 87% recall and precision. We applied the trained model to the entire EROS-2 LMC database, which contains about 29 million sources, and found 117 234 periodic variable candidates. Out of these 117 234 periodic variables, 55 285 have not been discovered by either OGLE or MACHO variability studies. This set comprises 1906 δ Scuti stars, 6607 RR Lyraes, 638 Cepheids, 178 Type II Cepheids, 34 562 eclipsing binaries, and 11 394 long-period variables. A catalog of these EROS-2 LMC periodic variable stars is available at http://stardb.yonsei.ac.kr and at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A43
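
    The classification pipeline, 22 variability features per light curve fed to a random forest trained on known OGLE/MACHO variables, maps onto a few lines of scikit-learn. Feature extraction is the hard part and is omitted; the arrays and label names below are placeholders.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Hypothetical inputs: rows = light curves, columns = 22 variability features
      # (period, amplitude, skewness, ...); labels = variable-star (sub)classes.
      features = np.load("eros2_features.npy")      # shape (n_sources, 22)
      labels = np.load("eros2_labels.npy")          # e.g. "RRab", "EB", "LPV", ...

      clf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1)
      print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())

      # Trained on the labeled set, the model can then score all ~29M sources.
      clf.fit(features, labels)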

  18. A hydroclimatic threshold for landslide initiation on the North Shore Mountains of Vancouver, British Columbia

    NASA Astrophysics Data System (ADS)

    Jakob, Matthias; Weatherly, Hamish

    2003-09-01

    Landslides triggered by rainfall are the cause of thousands of deaths worldwide every year. One possible approach to limiting the socioeconomic consequences of such events is the development of climatic thresholds for landslide initiation. In this paper, we propose a method that incorporates antecedent rainfall and streamflow data to develop a landslide initiation threshold for the North Shore Mountains of Vancouver, British Columbia. Hydroclimatic data were gathered for 18 storms that triggered landslides and 18 storms that did not. Discriminant function analysis separated the landslide-triggering storms from those storms that did not trigger landslides and selected the most meaningful variables that allow this separation. Discriminant functions were also developed for the landslide-triggering and nonlandslide-triggering storms. The difference of the discriminant scores, ΔCS, for the two groups is a measure of landslide susceptibility during a storm. The variables identified that optimize the separation of the two storm groups are 4-week rainfall prior to a significant storm, 6-h rainfall during a storm, and the number of hours that a discharge of 1 m³/s was exceeded at Mackay Creek during a storm. Three thresholds were identified. The Landslide Warning Threshold (LWT) is reached when ΔCS is -1. The Conditional Landslide Initiation Threshold (CTL I) is reached when ΔCS is zero, and it implies that landslides are likely if a 4 mm/h rainfall intensity is exceeded, at which point the Imminent Landslide Initiation Threshold (ITL I) is reached. The LWT allows time for the issuance of a landslide advisory and for moving personnel out of hazardous areas. The methodology proposed in this paper can be transferred to other regions worldwide where the type and quality of data are appropriate for this type of analysis.
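
    The storm-separation step is a two-group discriminant analysis on hydroclimatic predictors. A compact analogue with scikit-learn is shown below; the predictor names mirror the variables identified in the paper, but the data and the susceptibility-score construction here are illustrative only.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Hypothetical storm table: [4-week antecedent rain (mm), 6-h storm rain (mm),
      # hours with discharge > 1 m^3/s]; 1 = landslide-triggering storm.
      X = np.load("storms.npy")          # shape (36, 3)
      y = np.load("storm_labels.npy")    # 18 triggering, 18 non-triggering

      lda = LinearDiscriminantAnalysis().fit(X, y)
      # The signed discriminant score plays the role of a susceptibility index,
      # analogous to the paper's ΔCS; zero marks the group boundary.
      scores = lda.decision_function(X)
      print("storms above the zero threshold:", int((scores > 0).sum()))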

  19. School Phobia: A Critical Analysis of the Separation Anxiety Theory and an Alternative Conceptualization.

    ERIC Educational Resources Information Center

    Pilkington, Cynthia L.; Piersel, Wayne C.

    1991-01-01

    Reviews literature on school phobia, which reveals that the predominant view concerning its etiology is separation anxiety theory. Critically analyzes the theory on three grounds: methodological problems, lack of generalizability concerning pathological mother-child relationships, and lack of emphasis on external etiological variables. Recommends reexamining…

  20. The Case of Effort Variables in Student Performance.

    ERIC Educational Resources Information Center

    Borg, Mary O.; And Others

    1989-01-01

    Tests the existence of a structural shift between above- and below-average students in the econometric models that explain students' grades in principles of economics classes. Identifies a structural shift and estimates separate models for above- and below-average students. Concludes that separate models as well as educational policies are…

  1. Separation-Individuation Difficulties and Cognitive-Behavioral Indicators of Eating Disorders among College Women.

    ERIC Educational Resources Information Center

    Friedlander, Myrna L.; Siegel, Sheri M.

    1990-01-01

    Tested theoretical link between difficulties with separation-individuation and cognitive-behavioral indicators characteristic of anorexia nervosa and bulimia. Assessed 124 college women using three self-report measures. Results suggest strong relation between 2 sets of variables and support theoretical assertions about factors that contribute to…

  2. 40 CFR 63.137 - Process wastewater provisions-oil-water separators.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... separators. 63.137 Section 63.137 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... for other emission variables such as temperature and barometric pressure, or (ii) An engineering...)(1) and the schedule specified in paragraphs (c)(1) and (c)(2) of this section. (1) Measurement of...

  3. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  4. Separation and Determination of Honokiol and Magnolol in Chinese Traditional Medicines by Capillary Electrophoresis with the Application of Response Surface Methodology and Radial Basis Function Neural Network

    PubMed Central

    Han, Ping; Luan, Feng; Yan, Xizu; Gao, Yuan; Liu, Huitao

    2012-01-01

    A method for the separation and determination of honokiol and magnolol in Magnolia officinalis and its medicinal preparation is developed by capillary zone electrophoresis and response surface methodology. The concentration of borate, the content of organic modifier, and the applied voltage are selected as variables. The optimized conditions (i.e., 16 mmol/L sodium tetraborate at pH 10.0, 11% methanol, an applied voltage of 25 kV and UV detection at 210 nm) are obtained and successfully applied to the analysis of honokiol and magnolol in Magnolia officinalis and Huoxiang Zhengqi Liquid. Good separation is achieved within 6 min. The limits of detection are 1.67 µg/mL for honokiol and 0.83 µg/mL for magnolol, respectively. In addition, an artificial neural network with a “3-7-1” structure, based on the ratio of peak resolution to the migration time of the later-eluting component (Rs/t) from the Box-Behnken design, is also reported; the predicted results are in good agreement with the values given by the mathematical software and with the experimental results. PMID:22291059

  5. Near-Ideal Xylene Selectivity in Adaptive Molecular Pillar[n]arene Crystals.

    PubMed

    Jie, Kecheng; Liu, Ming; Zhou, Yujuan; Little, Marc A; Pulido, Angeles; Chong, Samantha Y; Stephenson, Andrew; Hughes, Ashlea R; Sakakibara, Fumiyasu; Ogoshi, Tomoki; Blanc, Frédéric; Day, Graeme M; Huang, Feihe; Cooper, Andrew I

    2018-06-06

    The energy-efficient separation of alkylaromatic compounds is a major industrial sustainability challenge. The use of selectively porous extended frameworks, such as zeolites or metal-organic frameworks, is one solution to this problem. Here, we studied a flexible molecular material, perethylated pillar[n]arene crystals (n = 5, 6), which can be used to separate C8 alkylaromatic compounds. Pillar[6]arene is shown to separate para-xylene from its structural isomers, meta-xylene and ortho-xylene, with 90% specificity in the solid state. Selectivity is an intrinsic property of the pillar[6]arene host, with the flexible pillar[6]arene cavities adapting during adsorption, thus enabling preferential adsorption of para-xylene in the solid state. The flexibility of pillar[6]arene as a solid sorbent is rationalized using molecular conformer searches and crystal structure prediction (CSP) combined with comprehensive characterization by X-ray diffraction and 13C solid-state NMR spectroscopy. The CSP study, which takes into account the structural variability of pillar[6]arene, breaks new ground in its own right and showcases the feasibility of applying CSP methods to understand and ultimately to predict the behavior of soft, adaptive molecular crystals.

  6. Liquefaction of ground tire rubber at low temperature.

    PubMed

    Cheng, Xiangyun; Song, Pan; Zhao, Xinyu; Peng, Zonglin; Wang, Shifeng

    2018-01-01

    Low-temperature liquefaction has been investigated as a novel method for recycling ground tire rubber (GTR) into liquid using an environmentally benign process. The liquefaction was carried out at different temperatures (140, 160 and 180 °C) over variable time ranges (2-24 h) by blending the GTR with aromatic oil in a range from 0 to 100 parts per hundred rubber (phr). The liquefied GTR was separated into sol (the soluble fraction of rubber which can be extracted with toluene) and gel fractions (the solid fraction obtained after extraction) to evaluate the reclaiming efficiency. It was discovered that the percentage of the sol fraction increased with time, swelling ratio and temperature. Liquefied rubber was obtained with a high sol fraction (68.34 wt%) at 140 °C. Simultaneously, separation of nano-sized carbon black from the rubber networks occurred. The separation of carbon black from the network is the result of significant damage to the cross-linked-network that occurs throughout the liquefaction process. During liquefaction, a competitive reaction between main chain scission and cross-link bond breakage takes place. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Impact of glucuronide interferences on therapeutic drug monitoring of posaconazole by tandem mass spectrometry.

    PubMed

    Krüger, Ralf; Vogeser, Michael; Burghardt, Stephan; Vogelsberger, Rita; Lackner, Karl J

    2010-12-01

    Posaconazole is a novel antifungal drug for oral application intended especially for the therapy of invasive mycoses. Due to variable gastrointestinal absorption, adverse side effects, and suspected drug-drug interactions, therapeutic drug monitoring (TDM) of posaconazole is recommended. A fast ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for quantification of posaconazole with a run time <3 min was developed and compared to an LC-MS/MS method and an HPLC method with fluorescence detection. During evaluation of the UPLC-MS/MS method, two earlier-eluting peaks were observed in the MRM trace of posaconazole. This was seen only in patient samples, not in spiked calibrator samples. Comparison with LC-MS/MS disclosed a significant bias, with higher concentrations measured by LC-MS/MS, while UPLC-MS/MS showed excellent agreement with the commercially available HPLC method. In the LC-MS/MS procedure, comparatively wide, left-shifted peaks were noticed. This could be ascribed to in-source fragmentation of conjugate metabolites during electrospray ionisation. Precursor and product ion scans confirmed the assumption that the additional compounds are posaconazole glucuronides. Reducing the cone voltage led to disappearance of the glucuronide peaks. Slight modification of the LC-MS/MS method enabled separation of the main interference, leading to significantly reduced deviation. These results highlight the necessity of reliably eliminating interference from labile drug metabolites for correct TDM results, either by sufficient separation or by selective MS conditions. The presented UPLC-MS/MS method provides a reliable and fast assay for TDM of posaconazole.

  8. Confounder summary scores when comparing the effects of multiple drug exposures.

    PubMed

    Cadarette, Suzanne M; Gagne, Joshua J; Solomon, Daniel H; Katz, Jeffrey N; Stürmer, Til

    2010-01-01

    Little information is available comparing methods to adjust for confounding when considering multiple drug exposures. We compared three analytic strategies to control for confounding based on measured variables: conventional multivariable, exposure propensity score (EPS), and disease risk score (DRS). Each method was applied to a dataset (2000-2006) recently used to examine the comparative effectiveness of four drugs. The relative effectiveness of risedronate, nasal calcitonin, and raloxifene in preventing non-vertebral fracture was each compared to alendronate. EPSs were derived both by using multinomial logistic regression (single model EPS) and by three separate logistic regression models (separate model EPS). DRSs were derived, and event rates compared, using Cox proportional hazard models. DRSs derived among the entire cohort (full cohort DRS) were compared to DRSs derived only among the referent alendronate users (unexposed cohort DRS). Less than 8% deviation from the base estimate (conventional multivariable) was observed applying the single model EPS, the separate model EPS or the full cohort DRS. Applying the unexposed cohort DRS when the background risk for fracture differed between comparison drug exposure cohorts resulted in -7% to +13% deviation from our base estimate. With sufficient numbers of exposed subjects and outcomes, either conventional multivariable, EPS or full cohort DRS methods may be used to adjust for confounding when comparing the effects of multiple drug exposures. However, our data also suggest that the unexposed cohort DRS may be problematic when background risks differ between referent and exposed groups. Further empirical and simulation studies will help to clarify the generalizability of our findings.
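
    The "separate model EPS" strategy fits one binary propensity model per comparison drug against the common referent. A schematic version with scikit-learn logistic regression is below; the covariate list and data source are invented for illustration.

      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      df = pd.read_csv("cohort.csv")              # hypothetical analytic cohort
      covars = ["age", "sex", "prior_fracture", "steroid_use"]  # illustrative

      referent = "alendronate"
      eps = {}
      for drug in ["risedronate", "calcitonin", "raloxifene"]:
          pair = df[df["exposure"].isin([referent, drug])]
          model = LogisticRegression(max_iter=1000).fit(
              pair[covars], (pair["exposure"] == drug).astype(int))
          # Propensity score: probability of receiving `drug` vs. the referent.
          eps[drug] = model.predict_proba(pair[covars])[:, 1]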

  9. A crash-prediction model for multilane roads.

    PubMed

    Caliendo, Ciro; Guida, Maurizio; Parisi, Alessandra

    2007-07-01

    Considerable research has been carried out in recent years to establish relationships between crashes and traffic flow, geometric infrastructure characteristics and environmental factors for two-lane rural roads. Crash-prediction models focused on multilane rural roads, however, have rarely been investigated. In addition, most research has paid little attention to the safety effects of variables such as stopping sight distance and pavement surface characteristics. Moreover, the statistical approaches have generally included Poisson and Negative Binomial regression models, whilst the Negative Multinomial regression model has been used to a lesser extent. Finally, as far as the authors are aware, prediction models involving all the above-mentioned factors have still not been developed in Italy for multilane roads, such as motorways. Thus, in this paper crash-prediction models for a four-lane median-divided Italian motorway were set up on the basis of accident data observed during a 5-year monitoring period extending between 1999 and 2003. The Poisson, Negative Binomial and Negative Multinomial regression models, applied separately to tangents and curves, were used to model the frequency of accident occurrence. Model parameters were estimated by the Maximum Likelihood Method, and the Generalized Likelihood Ratio Test was applied to detect the significant variables to be included in the model equation. Goodness-of-fit was measured by means of both the explained fraction of total variation and the explained fraction of systematic variation. The Cumulative Residuals Method was also used to test the adequacy of a regression model throughout the range of each variable. The candidate set of explanatory variables was: length (L), curvature (1/R), annual average daily traffic (AADT), sight distance (SD), side friction coefficient (SFC), longitudinal slope (LS) and the presence of a junction (J). Separate prediction models for total crashes and for fatal and injury crashes only were considered. For curves it is shown that the significant variables are L, 1/R and AADT, whereas for tangents they are L, AADT and junctions. The effect of rain precipitation was analysed on the basis of hourly rainfall data and assumptions about drying time. It is shown that a wet pavement significantly increases the number of crashes. The models developed in this paper for Italian motorways appear to be useful for many applications such as the detection of critical factors, the estimation of accident reduction due to infrastructure and pavement improvement, and the prediction of accident counts when comparing different design options. Thus, this research may represent a point of reference for engineers in adjusting or designing multilane roads.
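
    The model-fitting step, a count regression of crash frequency on segment covariates estimated by maximum likelihood, can be written compactly in statsmodels; the variable names follow the paper's candidate set, but the data file is a placeholder.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      seg = pd.read_csv("motorway_segments.csv")   # hypothetical segment table

      # Poisson model for curved segments: length, curvature and traffic.
      poisson = smf.glm("crashes ~ np.log(L) + curvature + np.log(AADT)",
                        data=seg, family=sm.families.Poisson()).fit()

      # Negative Binomial relaxes the Poisson equal-mean-variance assumption.
      negbin = smf.glm("crashes ~ np.log(L) + curvature + np.log(AADT)",
                       data=seg, family=sm.families.NegativeBinomial()).fit()
      print(poisson.aic, negbin.aic)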

  10. HPLC-DAD method development and validation for the quantification of hydroxymethylfurfural in corn chips by means of response surface optimisation.

    PubMed

    Salvatierra Virgen, Sara; Ceballos-Magaña, Silvia Guillermina; Salvatierra-Stamp, Vilma Del Carmen; Sumaya-Martínez, Maria Teresa; Martínez-Martínez, Francisco Javier; Muñiz-Valencia, Roberto

    2017-12-01

    In recent years, there has been an increased concern about the presence of toxic compounds derived from the Maillard reaction produced during food cooking at high temperatures. The main toxic compounds derived from this reaction are acrylamide and hydroxymethylfurfural (HMF). The majority of analytical methods require sample treatments using solvents which are highly polluting for the environment. The difficulty of quantifying HMF in complex fried food matrices encourages the development of new analytical methods. This paper provides a rapid, sensitive and environmentally-friendly analytical method for the quantification of HMF in corn chips using HPLC-DAD. Chromatographic separation resulted in a baseline separation for HMF in 3.7 min. Sample treatment for corn chip samples first involved a leaching process using water and afterwards a solid-phase extraction (SPE) using HLB-Oasis polymeric cartridges. Sample treatment optimisation was carried out by means of a Box-Behnken fractional factorial design and Response Surface Methodology to examine the effects of four variables (sample weight, pH, sonication time and elution volume) on HMF extraction from corn chips. The SPE-HPLC-DAD method was validated. The limits of detection and quantification were 0.82 and 2.20 mg kg⁻¹, respectively. Method precision was evaluated in terms of repeatability and reproducibility as relative standard deviations (RSDs) using three concentration levels. For repeatability, RSD values were 6.9, 3.6 and 2.0%; and for reproducibility 18.8, 7.9 and 2.9%. For the ruggedness study, the Youden test was applied, and the results demonstrated that the method is robust. The method was successfully applied to different corn chip samples.
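
    Detection and quantification limits of this kind are commonly computed from the calibration curve as LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation of the response and S the slope. A worked check of the arithmetic, with invented numbers rather than the paper's calibration data:

      # ICH-style limit estimates from a calibration line.
      sigma = 0.05   # residual standard deviation of the response (invented)
      S = 0.20       # calibration slope, response per mg kg^-1 (invented)

      lod = 3.3 * sigma / S
      loq = 10.0 * sigma / S
      print(f"LOD = {lod:.2f} mg/kg, LOQ = {loq:.2f} mg/kg")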

  11. VeriML: A Dependently-Typed, User-Extensible and Language-Centric Approach to Proof Assistants

    DTIC Science & Technology

    2013-01-01

    the locally nameless approach [McKinna and Pollack, 1993]. The former two techniques replace all variables by numbers, whereas the locally nameless ...needs to be reasoned about together with shifting. This complicates both the statements and proofs of related lemmas. The locally nameless approach...the locally nameless approach, we separate free variables from bound variables and use deBruijn indices for bound variables (denoted as bi in Table 3.1

  12. Characterization of 3d Contact Kinematics and Prediction of Resonant Response of Structures Having 3d Frictional Constraint

    NASA Astrophysics Data System (ADS)

    Yang, B. D.; Menq, C. H.

    1998-11-01

    A 3D friction contact model has been developed for the prediction of the resonant response of structures having 3D frictional constraint. In the proposed model, a contact plane is defined and its orientation is assumed invariant. Consequently, the relative motion of the two contacting surfaces can be resolved into two components: the in-plane tangential motion on the contact plane and the normal component perpendicular to the plane. The in-plane tangential relative motion is often two-dimensional, and it can induce stick-slip friction. On the other hand, the normal relative motion can cause variation of the contact normal load and, in extreme circumstances, separation of the two contacting surfaces. In this study, the joint effect of the 2D tangential relative motion and the normal relative motion on the contact kinematics of a friction contact is examined, and analytical criteria are developed to determine the transitions among stick, slip, and separation when experiencing variable normal load. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to the plane can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness values, along with the harmonic balance method, are then used to predict the resonance of a frictionally constrained 3-DOF oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.

  13. The calculation of weakly non-spherical cavitation bubble impact on a solid

    NASA Astrophysics Data System (ADS)

    Aganin, A. A.; Guseva, T. S.; Kosolapova, L. A.; Khismatullina, N. A.

    2016-11-01

    The effect of small spheroidal non-sphericity of a cavitation bubble touching a solid at the beginning of its collapse on its impact on a solid of a copper-nickel alloy is investigated. The impact on the solid is realized by means of a high-speed liquid jet arising on the bubble surface at collapse. The shape of the jet, its velocity and pressure are calculated by the boundary element method. The spatial and temporal characteristics of the pressure pulses on the solid surface are determined by the CIP-CUP method on dynamically adaptive grids without explicitly separating the gas-liquid interface. The solid surface layer dynamics is evaluated by the Godunov method. The results are analyzed in dimensionless variables obtained using the water-hammer pressure, the time moment and the jet-solid contact area radius at which the jet begins to spread on the solid surface. It is shown that in those dimensionless variables, the dependence of the spatial and temporal characteristics of the solid surface pressure pulses on the initial bubble shape non-sphericity is relatively small. The non-sphericity also slightly influences the main qualitative features of the dynamic processes inside the solid, whereas its effect on their quantitative characteristics can be significant.

  14. Assays for hydrophilic and lipophilic antioxidant capacity (oxygen radical absorbance capacity (ORAC(FL))) of plasma and other biological and food samples.

    PubMed

    Prior, Ronald L; Hoang, Ha; Gu, Liwei; Wu, Xianli; Bacchiocca, Mara; Howard, Luke; Hampsch-Woodill, Maureen; Huang, Dejian; Ou, Boxin; Jacob, Robert

    2003-05-21

    Methods are described for the extraction and analysis of hydrophilic and lipophilic antioxidants, using modifications of the oxygen radical absorbing capacity (ORAC(FL)) procedure. These methods provide, for the first time, the ability to obtain a measure of "total antioxidant capacity" in the protein-free plasma, using the same peroxyl radical generator for both lipophilic and hydrophilic antioxidants. Separation of the lipophilic and hydrophilic antioxidant fractions from plasma was accomplished by extracting with hexane after adding water and ethanol to the plasma (hexane/plasma/ethanol/water, 4:1:2:1, v/v). Lipophilic and hydrophilic antioxidants were efficiently partitioned between hexane and aqueous solvents. Conditions for controlling temperature effects and decreasing assay variability using fluorescein as the fluorescent probe were validated in different laboratories. Incubation (37 degrees C for at least 30 min) of the buffer in which AAPH was dissolved was critical in decreasing assay variability. Lipophilic antioxidants represented 33.1 +/- 1.5 and 38.2 +/- 1.9% of the total antioxidant capacity of the protein-free plasma in two independent studies of 6 and 10 subjects, respectively. Methods are described for application of the assay techniques to other types of biological and food samples.

  15. Introduction to statistical modelling 2: categorical variables and interactions in linear regression.

    PubMed

    Lunt, Mark

    2015-07-01

    In the first article in this series we explored the use of linear regression to predict an outcome variable from a number of predictive factors. It assumed that the predictive factors were measured on an interval scale. However, this article shows how categorical variables can also be included in a linear regression model, enabling predictions to be made separately for different groups and allowing for testing the hypothesis that the outcome differs between groups. The use of interaction terms to measure whether the effect of a particular predictor variable differs between groups is also explained. An alternative approach to testing the difference between groups of the effect of a given predictor, which consists of measuring the effect in each group separately and seeing whether the statistical significance differs between the groups, is shown to be misleading. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
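
    The article's two ingredients, a categorical predictor and an interaction term, map directly onto a regression formula interface. In the sketch below the data frame and variable names are invented; C(group) expands the categorical variable into dummy variables, and bmi * C(group) adds main effects plus the interaction so the slope of bmi may differ between groups.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("patients.csv")   # hypothetical: outcome, bmi, group

      fit = smf.ols("outcome ~ bmi * C(group)", data=df).fit()
      print(fit.summary())

    The interaction coefficients directly test whether the bmi slope differs between groups, which is the comparison the article recommends over judging each group's significance separately.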

  16. Effects of semantic neighborhood density in abstract and concrete words.

    PubMed

    Reilly, Megan; Desai, Rutvik H

    2017-12-01

    Concrete and abstract words are thought to differ along several psycholinguistic variables, such as frequency and emotional content. Here, we consider another variable, semantic neighborhood density, which has received much less attention, likely because semantic neighborhoods of abstract words are difficult to measure. Using a corpus-based method that creates representations of words that emphasize featural information, the current investigation explores the relationship between neighborhood density and concreteness in a large set of English nouns. Two important observations emerge. First, semantic neighborhood density is higher for concrete than for abstract words, even when other variables are accounted for, especially for smaller neighborhood sizes. Second, the effects of semantic neighborhood density on behavior are different for concrete and abstract words. Lexical decision reaction times are fastest for words with sparse neighborhoods; however, this effect is stronger for concrete words than for abstract words. These results suggest that semantic neighborhood density plays a role in the cognitive and psycholinguistic differences between concrete and abstract words, and should be taken into account in studies involving lexical semantics. Furthermore, the pattern of results with the current feature-based neighborhood measure is very different from that with associatively defined neighborhoods, suggesting that these two methods should be treated as separate measures rather than two interchangeable measures of semantic neighborhoods. Copyright © 2017 Elsevier B.V. All rights reserved.
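
    Semantic neighborhood density in this feature-based sense can be operationalized as the mean cosine similarity between a word's vector and its k nearest neighbors. A small sketch under that assumption (the authors' corpus-derived representations are not reproduced here):

      import numpy as np

      def neighborhood_density(vectors, word_idx, k=10):
          """Mean cosine similarity between one word and its k nearest neighbors."""
          v = vectors[word_idx]
          sims = vectors @ v / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(v))
          sims[word_idx] = -np.inf               # exclude the word itself
          return np.sort(sims)[-k:].mean()

      vecs = np.random.default_rng(4).random((1000, 300))  # stand-in embeddings
      print(neighborhood_density(vecs, 0))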

  17. SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1994-01-01

    The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable-order Adams predictor-corrector methods. There is an option for the direct integration of second-order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user-supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication, where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8-bit bytes. This program was developed in 1983 and last updated in 1987.
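
    SIVA/DIVA itself is FORTRAN 77, but the same class of solver, a variable-order Adams predictor-corrector for nonstiff problems, is available in SciPy. A rough modern analogue, integrating a second-order trajectory equation rewritten as a first-order system:

      from scipy.integrate import solve_ivp

      def projectile(t, s, g=9.81, k=0.01):
          """y'' = -g - k*y'*|y'| rewritten as the first-order system [y, y']."""
          y, v = s
          return [v, -g - k * v * abs(v)]

      # LSODA uses a variable-order Adams method while the problem is nonstiff.
      sol = solve_ivp(projectile, (0.0, 5.0), [100.0, 0.0], method="LSODA",
                      rtol=1e-8, atol=1e-10)
      print("final height:", sol.y[0, -1])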

  18. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistical significant improvements of the classification accuracies in both two experiments, namely, motor imagery and emotion recognition.
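
    The ICA-plus-template core of the method can be sketched as: unmix the channels, flag components that match the stored a priori artifact information, zero them, and reconstruct. The wavelet stage is omitted here, and the correlation rule is a guess at one reasonable matching criterion, not the paper's exact one.

      import numpy as np
      from sklearn.decomposition import FastICA

      def remove_artifacts(eeg, template, thresh=0.7):
          """eeg: (n_channels, n_samples); template: a priori artifact waveform."""
          ica = FastICA(n_components=eeg.shape[0], random_state=0)
          sources = ica.fit_transform(eeg.T)          # (n_samples, n_components)
          for i in range(sources.shape[1]):
              # Flag components resembling the stored artifact information.
              if abs(np.corrcoef(sources[:, i], template)[0, 1]) > thresh:
                  sources[:, i] = 0.0                 # zero the artifact component
          return ica.inverse_transform(sources).T     # artifact-free reconstruction

      rng = np.random.default_rng(5)
      eeg = rng.standard_normal((8, 1000))            # toy 8-channel recording
      template = rng.standard_normal(1000)            # stand-in a priori artifact
      clean = remove_artifacts(eeg, template)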

  19. Determination of antibacterial flomoxef in serum by capillary electrophoresis.

    PubMed

    Kitahashi, Toshihiro; Furuta, Itaru

    2003-04-01

    A method for determining flomoxef (FMOX) concentration in serum by capillary electrophoresis is developed. Serum samples are extracted with acetonitrile. After pretreatment, they are separated in a fused-silica capillary tube with a 25 mM borate buffer (pH 10.0) as the running buffer, containing 50 mM sodium dodecyl sulfate. FMOX and acetaminophen (internal standard) are detected by UV absorbance at 200 nm. Linearity (0-200 mg/L) is good, and the minimum limit of detection is 1.0 mg/L (S/N = 3). The relative standard deviations of intra- and interassay variability are 1.60-4.78% and 2.10-3.31%, respectively, and the recovery rate is 84-98%. This method can be used for the determination of FMOX concentration in serum.
